How Sinclair's Tennis Channel International Puts AI Dubbing in Play for Sports Streaming

One of the first areas where generative AI and AI/ML have made a visible, demonstrable, “on-air” impact on widely distributed streaming content is what is popularly known as “subbing and dubbing”: content localization through subtitles and dubbing into other languages. During a panel at Streaming Media Connect in February 2025, Brian Ring of Ring Digital asked Sinclair’s Rafi Mamalian for a deep dive into the current opportunities and challenges of AI-enabled subbing and dubbing, as well as areas where it can (and needs to) improve. Mamalian responded with insights based in large part on Sinclair’s experience localizing its Tennis Channel.

Join us at the next Streaming Media Connect, May 20-22, 2025!

Translating Tennis Channel International

“Since you've been doing this super-hardcore using AI to dub your stations in a different language, what’s your take on the state of play?” Ring asks. “What are the biggest challenges that you're seeing, and how have you gotten through them?”

“Last year, the Tennis Channel was launching internationally and we wanted to experiment with lip sync and translation into multiple different languages,” Mamalian says. “We launched Tennis Channel International in Germany, Austria, Switzerland, Spain, and India. And we wanted to see what those translations would look like and see if the lip syncing would work. And to a large part, it did,” he affirms.

But even with a major assist from AI, it wasn’t juego, set, partido (game, set, match).

“It’s not instant,” Mamalian cautions. "It still takes a lot of editing work to make sure the translations are accurate and they're timed correctly, because certain phrases will be longer in one language and shorter in another. That can create a lot of inconsistency with the cadence of the speaker." 
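One way to picture the timing problem Mamalian describes: a translated phrase that runs long has to be time-stretched to fit the original segment's duration. A minimal sketch, assuming the result would be handed to a time-stretch filter like ffmpeg's `atempo`, which only accepts factors between 0.5 and 2.0 per pass (the function and example durations here are illustrative, not Sinclair's workflow):

```python
def atempo_chain(original_sec: float, dubbed_sec: float) -> list[float]:
    """Compute a chain of ffmpeg `atempo` factors that fits dubbed audio
    into the original segment. `atempo` is limited to [0.5, 2.0], so
    larger adjustments are split across multiple passes."""
    factor = dubbed_sec / original_sec  # > 1.0 means the dub must speed up
    chain = []
    while factor > 2.0:
        chain.append(2.0)
        factor /= 2.0
    while factor < 0.5:
        chain.append(0.5)
        factor /= 0.5
    chain.append(round(factor, 3))
    return chain

# A 10 s English phrase whose German dub runs 13 s:
print(atempo_chain(10.0, 13.0))  # [1.3] -> speed the dub up 1.3x
```

Of course, aggressive stretching is exactly the cadence inconsistency Mamalian warns about, which is why human editing is still in the loop.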

Isolating the Audio

“We tested it in a roundtable environment,” he recalls. “You have a Pardon the Interruption type of situation where everybody's talking over each other, and it has a lot of difficulty doing that. So one workaround we figured out with that was, currently everybody’s mic and all the audio is going into one feed. But then we figured out if we can just take the individual feeds and have them separately, then the AI will be able to pick that up separately and be able to identify the voices more clearly.”
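The per-mic workaround Mamalian describes can be approximated in post with ffmpeg's `channelsplit` filter, assuming each roundtable mic was recorded to its own channel of a multichannel capture. A sketch that builds the command (file names and channel counts here are hypothetical):

```python
def split_mics_cmd(infile: str, outs: list[str]) -> list[str]:
    """Build an ffmpeg command that splits a multichannel roundtable
    recording into one mono WAV per microphone, so a transcription or
    translation model hears each voice in isolation."""
    layouts = {2: "stereo", 4: "4.0"}  # channel-layout names ffmpeg expects
    labels = [f"c{i}" for i in range(len(outs))]
    split = (f"channelsplit=channel_layout={layouts[len(outs)]}"
             + "".join(f"[{l}]" for l in labels))
    cmd = ["ffmpeg", "-i", infile, "-filter_complex", split]
    for label, out in zip(labels, outs):
        cmd += ["-map", f"[{label}]", out]
    return cmd

cmd = split_mics_cmd("roundtable.wav", ["host.wav", "guest1.wav"])
# Run with: subprocess.run(cmd, check=True)
```

Each resulting mono file can then be dubbed independently, avoiding the crosstalk that confuses the AI when everyone shares one mixed feed.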

This approach proved effective, Mamalian says, but also “time consuming” and expensive enough that it's only cost- and resource-effective for certain types of content. “So unless you have a really valuable asset that can be kind of evergreen, it’s not really a great option.”

Accommodating the Talent

Mamalian also notes that rights and clearance issues come into play when replacing the voices of on-air talent. Dubbing, he says, is “a trickier play, especially when you’re dealing with talent, to make sure that consent is cleared ahead of time to say that you're going to be doing this type of thing and why you're going to be doing it. You ultimately don’t want to make someone sound like something they didn't want to sound like. So it gets a little bit trickier there with pre-produced stuff. With talent who work for you, that’s a little bit easier.”

He goes on to note that with non-sports content like news, other types of personalities come into play, often on an ad-hoc, one-off basis, and this raises additional red flags when dubbing in new voices to localize the content.

“When you’re working with news, you obviously don't want to have somebody that you're interviewing or a celebrity or a public figure out there where you're manipulating their voice without their consent—especially for lip-syncing,” he says.

“Live translation is a little bit more flexible,” Mamalian says, “essentially doing what SAP has been doing. To that extent, it's ‘Alright, can you get the translation done? How quickly can you get it done?’”

Testing Live Translation

As one might expect, the workflow is significantly different for live translation than it is for recorded content that is being repackaged and localized for subsequent international distribution. Noting that Sinclair is now “testing for live,” he offers some insight into how the process works and its current shortcomings.

From a high level, he explains, "We're ingesting the broadcast feed, running it through a translator, and then spitting it back out. There's a lot of different factors that go into there. Again, with the cadence matching the voices with the speakers, it doesn't always get it perfect. So for example, if I'm in the studio and then I cut over to a shot of me in the field, my voice is going to sound a little bit different to the AI itself, and then it's going to maybe think that I'm speaker number two or speaker number three and change it."
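The misattribution problem he describes, where a move from studio to field audio shifts a voice's acoustic fingerprint, can be illustrated with a toy diarizer that assigns speaker labels by cosine similarity of voice embeddings. The embeddings and threshold below are purely illustrative, not a real model:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

class ToyDiarizer:
    """Labels voices by comparing embeddings against speakers seen so far.
    When the same voice's embedding drifts (studio mic vs. field mic),
    similarity drops below the threshold and the diarizer invents a
    brand-new speaker number -- the failure Mamalian describes."""
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.speakers = []  # reference embedding per known speaker

    def label(self, embedding):
        for i, ref in enumerate(self.speakers):
            if cosine(embedding, ref) >= self.threshold:
                return f"speaker {i + 1}"
        self.speakers.append(embedding)
        return f"speaker {len(self.speakers)}"

d = ToyDiarizer()
studio = [1.0, 0.1, 0.0]   # anchor on the studio mic
field  = [0.2, 1.0, 0.3]   # same anchor, field mic changes the timbre
print(d.label(studio))     # speaker 1
print(d.label(field))      # misattributed as speaker 2
```

Production diarization models are far more robust than this, but the same threshold logic explains why a change in acoustic environment can reassign a voice mid-broadcast.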

Contending with these challenges can be frustrating for producers, and for viewers, since the processing contributes to streaming latency, but he says, "All that stuff is going to get better over time. It's down to roughly a ten-second latency from live broadcast to digital distribution. That seems to be good enough for news."
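That roughly ten-second figure is the sum of every stage in the chain. A hypothetical budget, with illustrative per-stage timings rather than measured Sinclair numbers:

```python
# Hypothetical latency budget for a live AI-translation pipeline;
# stage timings are illustrative, not measured figures.
pipeline = {
    "ingest broadcast feed": 1.0,   # seconds
    "speech-to-text":        2.5,
    "machine translation":   1.0,
    "speech synthesis":      2.5,
    "repackage + CDN":       3.0,
}
total = sum(pipeline.values())
print(f"end-to-end latency: {total:.1f} s")  # end-to-end latency: 10.0 s
```

Shrinking the total for sports means shaving each stage, which is why a delay acceptable for news is still a work in progress for live matches.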

With sports, he concedes, 10 seconds is "probably a little too long. But I would be more comfortable with having a shorter timeframe."
