
AI Hallucinations and Training Your LLM Like a New Puppy

Generative AI hallucinations are real and call for constant vigilance. But as media and AI strategist Andy Beach points out in this discussion with Ring Digital's Brian Ring at Streaming Media Connect, the large language models (LLMs) that power Gen AI are only as good as their training. In this early-days era of LLMs at work, they will always try to give you what you want (just like a new puppy) rather than generating content that is accurate and ethically sound, unless you train them properly.

How LLMs are like eager-to-please puppy dogs

Ring says to Beach, “You had a great analogy about the golden retriever. Tell us how serious you think these hallucinations are and what we need to do to calm them down to get LLMs on air.”

“One of the things you have to remember about the way the LLMs are programmed is that they're effectively a puppy dog or a golden retriever,” Beach says. “They are eager to please you, so they are just trying to bring you the information they think you are asking for. And the way we ask questions often can lead us down a path where we are unintentionally having it feed us what we want to hear.”

The need for broad governance and training within organizations to manage AI implementation effectively

Beach also emphasizes that it takes broad governance to manage AI implementation effectively.

“Not only securely, not only ethically, but then how do you also help train the employees to use it and how to go implement it?” he says. “This is a learning moment for everybody, and so we have to have these things in place to help with it. But there are simple things we can do, where we set rules or reminders in the LLMs that we're using in our personal lives, to remind them that we always want to fact check, we want a source, and it's important to make sure that the source is there. It's important to question the validity of what it's giving us at times.”
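What might such a standing rule look like in practice? Here is a minimal sketch in Python, assuming an OpenAI-style chat API; the model name, rule wording, and sample question are illustrative stand-ins, not anything Beach specified:

```python
# Minimal sketch: a standing "fact-check" rule injected as a system prompt.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and rule text here are illustrative, not from the discussion.
from openai import OpenAI

FACT_CHECK_RULES = (
    "Always cite a verifiable source for every factual claim. "
    "If no source is available, say so explicitly instead of guessing. "
    "Flag any statement you are uncertain about."
)

client = OpenAI()

def ask_with_rules(question: str) -> str:
    """Send a question with the standing fact-check rules prepended."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": FACT_CHECK_RULES},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_with_rules("When was the first live sports event streamed online?"))
```

The point of the pattern is that the reminder is persistent rather than per-question, so every answer is nudged toward sourcing and self-doubt instead of eager-to-please confidence.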

Using “red team” and “blue team” agents to provide critical feedback and ensure diverse perspectives

Beach describes taking an editorial approach to training AI, borrowing “red team” and “blue team” terminology from IT and cybersecurity.

“I use the IT nomenclature,” he says. “I have red team and blue team agents. I have ones there to help me, but then I have antagonistic editors. I have an editor agent that's like, ‘Tear this apart. Tell me what's wrong with it. Tell me what's derivative. Tell me what I've said a thousand times that isn't landing.’ It's not there to please me. It's there to bring up the points that hopefully a real editor would be doing as part of what I'm doing, so that I have another perspective, because that is what we need to have in LLMs, but we're not getting that. We're getting the perspective that it thinks we want to have back as part of it.”
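The pattern Beach describes can be approximated with two system prompts: one cooperative drafting agent and one deliberately hostile editor. The sketch below again assumes the OpenAI Python SDK; the prompts, model name, and sample task are illustrative, not Beach's actual agents:

```python
# Minimal sketch of the "blue team / red team" editing pattern:
# one agent drafts, a second agent is instructed to attack the draft.
# Assumes the OpenAI Python SDK; prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

BLUE_TEAM = "You are a helpful writing assistant. Draft clear, engaging copy."
RED_TEAM = (
    "You are an antagonistic editor. Tear this draft apart: point out what "
    "is wrong, derivative, or unsupported. Do not praise; do not soften."
)

def run_agent(system_prompt: str, user_content: str) -> str:
    """Run a single agent turn with the given persona."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    )
    return response.choices[0].message.content

draft = run_agent(BLUE_TEAM, "Write a short intro about AI hallucinations in streaming.")
critique = run_agent(RED_TEAM, draft)
print("DRAFT:\n", draft, "\n\nCRITIQUE:\n", critique)
```

Because the red-team agent is told not to please the user, its feedback pushes against the sycophancy that Beach's golden-retriever analogy warns about.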

The ethical gray areas between AI-enhanced images and deep fakes

Beach points out that we are already in very gray territory when it comes to distinguishing a deep fake from an image that has merely been up-resed or enhanced.

“Zoom has literally made me look like a slightly better version, [it’s] made the dark circles under my eyes go away and made me look a little less puffy. That's already there. That's AI. That's not anything else other than that. Is that real? We just know it's there. It is a much larger step to go to, ‘Well, did that angle of the sports [shot] actually exist, or was it something that was created whole cloth?’ And that is where the data that's driving it is super important.

“That's where I think what the NBA is doing with their Hawk-Eye system for getting such high levels of player data in place will help them have a track record that is sort of a Coalition for Content Provenance and Authenticity (C2PA)-compliant type breadcrumb trail to say, ‘This is what actually happened, [from] the data that we captured, regardless of whether the video was augmented or enhanced or touched at some point.’ And I think the underlying data will continue to be the important part, because that's the grounding source of truth that we will have for all of this to work with LLMs.”
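The breadcrumb trail Beach describes can be illustrated with a simple hash chain over captured data records: each record is hashed together with the previous hash, so any later tampering breaks the chain and can be detected against the untouched source data. This is a minimal sketch of the idea using only the Python standard library; the field names and sample values are hypothetical, and it is not an implementation of the actual C2PA specification:

```python
# Minimal sketch of a provenance "breadcrumb trail": each captured data
# record is hash-chained to the previous one, so the trail can later be
# verified even if the video built on top of it was enhanced or augmented.
# Illustrative only -- not the real C2PA spec or the NBA's Hawk-Eye data model.
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash the record together with the previous hash, forming a chain."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical player-tracking samples; field names are made up for illustration.
samples = [
    {"ts": "2025-01-01T00:00:00Z", "player": 23, "x": 4.1, "y": 7.8},
    {"ts": "2025-01-01T00:00:01Z", "player": 23, "x": 4.3, "y": 7.9},
]

trail = []
prev = "0" * 64  # genesis value for the first link in the chain
for sample in samples:
    prev = chain_hash(prev, sample)
    trail.append({"record": sample, "hash": prev})

# Verification: recompute the chain from scratch and compare the final hash.
check = "0" * 64
for entry in trail:
    check = chain_hash(check, entry["record"])
assert check == trail[-1]["hash"], "provenance trail has been tampered with"
print("trail verified:", check)
```

The chain itself does not prove the video is unedited; it proves what the captured data said, which is exactly the grounding role Beach assigns to the underlying data.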

Related Articles

Highlights First: How to Leverage Gen AI for Personalized Short-Form Sports Experiences

As a new generation of highlights-first fans moves into the sports fandom mainstream, sports broadcasters need the agility and tech-savviness to produce and monetize personalized, short-form sports content at scale that meets the experiential demands of millennial and Gen Z fans. Fortunately, Gen AI is a game-changer for sports highlights and personalization, as Play Anywhere's Pete Scott and Ring Digital's Brian Ring discuss in this clip from Streaming Media Connect 2025.

Leveraging Gen AI to Improve Discovery and Streaming Engagement

The applications of Generative AI in streaming are seemingly endless, but what are specific ways that AI can make streaming content more discoverable, more personalized, more engaging and interactive, and more effective for advertisers in leveraging targeted content to reach the right customers? Microsoft's Andy Beach, Vecima's Paul Strickland, mireality's Maria Ingold, Alvarez & Marsal's Ethan Dreilinger, and Reality Software's Nadine Krefetz explore the possibilities in this clip from Streaming Media Connect 2024.

How to Deal with AI Hallucinating, Copyright, and Fact-Checking

How can streaming pros deal with all of the copyright and fact-checking pitfalls of using AI systems trained with public datasets as error-ridden and inappropriately expropriated as the internet itself? Boston 25 News' Ben Ratner, IntelliVid Research's Steve Vonder Haar, AugXLabs' Jeremy Toeman, and LiveX's Corey Behnke discuss how to navigate this minefield in this clip from Streaming Media Connect 2023.