
Q&A: Google Cloud's Anil Jain Explains How Generative AI Is Driving Digital Transformation for M&E and DTC


For Anil Jain, leading the Strategic Consumer Industries team for Google Cloud has meant helping traditional media companies embrace cloud and AI, move to an OpEx-centric business model, and adapt to a more development-oriented mindset in engineering and product management. In this interview, Jain discusses the global shift to cloud-based operations in the media industry, the ways generative AI is disrupting everything from production to packaging to all aspects of the user experience, and what media companies should be afraid of if they’re not already.

Nadine Krefetz: What is your role at Google Cloud?

Anil Jain: My remit is global managing director for strategic consumer industries: media and entertainment, games, retail, consumer packaged goods, and telcos. In the 6 years I’ve been here, we’ve helped each of these industries drive digital transformation. For the past year, the AI part has been bolded, double underscored, and then put in big neon lights. Generative AI has engendered this realization that you can accelerate that journey toward transformation.

Krefetz: What are you doing with telcos?

Jain: In many parts of the world, telcos or communication service providers have significant media and entertainment interests as well. Because the communication service providers had the infrastructure, they added media services, television, and then streaming because it was a way to add additional value for their consumers. But it was also a way to increase their ARPU [average revenue per user]. Everything we talk about in media—especially from a direct-to-consumer streaming perspective—is relevant to telcos.

Krefetz: What variation exists in your media segment?

Jain: If you look in media, you’ll see a range of types of companies: large media, studios, born-digital streamers, sports leagues, publishers, and new media digital platforms, like the TikToks and the Snaps of the world. Fundamentally, they are all competing for the same finite attention of consumers.

The last decade or so has moved media toward fully embracing the importance of cloud, first-party data, and operational data and being able to use that to meet the needs of personalization and monetization. The direct-to-consumer paradigm disrupted the entire value chain. They’ve all recognized it now. But have they been able to capitalize on it and move quickly? For cloud readiness, I would give them a B-minus.

Krefetz: What are the challenges facing media companies undertaking cloud migration?

Jain: You can categorize the challenges in a few different ways. There’s the financial and economic reality. A lot of large, established media organizations that have been around for 50, 70, or 100 years find it very hard to pivot and change the nature of how their business operates. Their business model and their financials are built around that CapEx structure.

If you shift to an OpEx model, your operating costs are higher because you don’t have this depreciation model for CapEx. That creates tension between the CFO and the CTO, and that tension becomes a big challenge. You have to prove the revenue, and you can’t.

Then there are the consumer expectations. There’s a whole ecosystem that is making money and delivering value to consumers. Media companies have a whole new generation of customers whose expectations are driving on-demand consumption and new forms of content and monetization in the formats and cost models they want.

That dichotomy makes it hard because you need to invest in the second thing and leverage whatever profit you can get out of the first thing. It’s almost like running two different businesses simultaneously.

Krefetz: What are the economics of taking a build vs. buy approach? 

Jain: Five or six years ago, [TMT analyst] Rich Greenfield and his team said every media company needs to go all in on technology and streaming. They have to own it. It has to be core to who they are—and it’s really expensive to do. Now we’re seeing media companies say, “Which parts of this do I need to own? Which parts can I outsource? How can I reduce the cost equation?” Because even in streaming, most of the large media companies aren’t profitable in that business model. Those are the biggest challenges.

I think you can talk about a bunch of other challenges like workforce development and skilling, the shift from the traditional broadcast engineer to more of a developer mindset and digital product manager. But I think that’s doable, and I see it happening around the world in many functions. I think the bigger issue is the economic one. 

Krefetz: How does generative AI impact the hardware refresh cycle?

Jain: Gen AI is relevant to everything, but I think those are two separate questions. A lot of media companies and a lot of telcos are asking, “How do I not have to do this big cash outlay again?” They’re thinking that maybe this is the time to shift.

The technology partners and independent software vendors (ISVs) that power the world’s media supply chains are not all cloud native—yet. In some cases, it’s been even harder for them because many of them are sub-scale. Generally, they’re not billion-dollar businesses; they’re few-hundred-million-dollar businesses. For them to maintain their existing revenue stream while simultaneously moving their road map and their future products to cloud-native ones has been a very difficult transition.

In 2019 at the Devoncroft Executive Summit, I said that the only way this works is if the hyperscalers, the media companies, and the ISVs work together. That’s the journey we’ve been on. That’s where AI comes in, because once you have this cloud-native software solution, you can actually start leveraging AI because that data is part and parcel of how you operate.

A media company can now orchestrate across different applications and vendors with that data. Are people doing it very well yet and creating automation that allows them to meet the changing demands of consumers? I think we’re still in the early days of that journey.

Krefetz: What will help move more services and products to the cloud?

Jain: Part of what we’ve done is work very closely with a number of application vendors to help them migrate their services and partner with us so that they can actually take advantage of the underlying capabilities we have. Our engineers and their engineers are working to align and to evolve their product set.

I don’t know that all of them will survive. When you go through these massive technology disruptions, it also opens the door to new vendors coming in. We’re seeing that happen: more nimble, cloud-native companies and cloud-native applications are emerging to do media asset management, to do orchestration, or to provide operational data that drives automation. All of the direct-to-consumer streaming stack has been very cloud-oriented for quite some time now.

Krefetz: What about gaming? 

Jain: If I were the CEO of a traditional media company, what would be keeping me up at night are social media and games, because they’re both changing the nature of how digital experiences are consumed.

The games business is definitely very digital tech-forward. You’ve got PC games, console games, and mobile games. The games that have the largest share of player time and the largest revenue are typically what we call “live service” games—persistent games that constantly interact with the server. A lot of these folks had their own data centers or colocation facilities. The games go on for years, and during that time, you can’t easily switch away from the clusters you have running in various parts of the world to modernize that stack.

The [gaming companies] are very much at the head of the pack in terms of being more mature from a cloud perspective, but even they have growth opportunities in leveraging cloud-native capabilities in public cloud infrastructure and data analytics for operational efficiency and for personalization. 

Krefetz: How are games and social impacting media?

Jain: I always use my household as an example. Catching up on news or social events means we all have information from different sources. My 77-year-old mother watches news on a streaming service, on demand, whatever she wants. My wife and I mostly use push notifications. My 19-year-old daughter gets most of her information from TikTok and Instagram. For my son, who’s about to turn 14, it’s YouTube or a Discord server.

How do you reach all of these people if you’re a media company? We have 5.4 billion people online today, and from 2017 to 2023, the number of people on social media doubled. The only way you can prepare for this is by making sure that every system you’re using, from content creation and production, licensing, all the way through distribution management and distribution, is flexible and agile and responsive.

The other thing you have to do is establish a very strong foundation of being data-driven. If I’m developing content, whether I’m a publisher or a broadcaster, I need to make sure that the content, the creative that I develop, can be repackaged in some way so that I can meet my consumers.

Krefetz: How is generative AI going to help with this? 

Jain: Generative AI has only come into everyone’s mind in the last 2 years, but AI has been applied to do things like recommendations for content discovery and personalization for quite some time.

What generative AI enables is creating new content, such as in the form of a chatbot that enables me to interact with an AI agent to help me do my job or to accelerate content creation and production. It also helps you compress timelines; even the really interesting creative work we do involves hours and hours of mundane, tedious work that AI can do for us. We’re seeing that happen already with tools in the creation, production, and postproduction process.

But there’s also the notion of applying generative AI to repackage content. Let’s say I’m a publisher, and my goal is for you to come to my site and engage with high-value content that I have an editorial and journalistic team creating, but you’re time-constrained. Imagine you can take a 7-minute read and be able to click a button to say, “I don’t have time to read this, so give it to me as a podcast.” Or you might say, “I’m a visual learner; turn it into an animated video that’s 3 minutes long.” Instead of requiring humans to go through that process, AI agents can take the information, understand what’s in it, and repackage it in a way that allows for greater reach and greater consumption.
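As a rough illustration of the repackaging idea Jain describes (and not any publisher’s actual pipeline), the minimal Python sketch below summarizes an article with a generative model on Vertex AI and then voices the script with Google Cloud Text-to-Speech. The project ID, model name, and article text are placeholders.

```python
# Minimal "article to podcast" sketch, assuming access to Vertex AI and
# Cloud Text-to-Speech. Project, location, and model name are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel
from google.cloud import texttospeech

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

def article_to_podcast(article_text: str, out_path: str = "episode.mp3") -> None:
    # 1. Condense the article into a short conversational script.
    model = GenerativeModel("gemini-1.5-flash")  # assumed model name
    script = model.generate_content(
        "Rewrite this article as a two-minute conversational podcast script:\n\n"
        + article_text
    ).text

    # 2. Synthesize the script into audio (very long scripts would need
    #    chunking to stay under the per-request input limit).
    tts = texttospeech.TextToSpeechClient()
    response = tts.synthesize_speech(
        input=texttospeech.SynthesisInput(text=script),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)
```

The animated-video variant follows the same pattern, with a video-generation step in place of the speech synthesis.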

Those are just a couple of examples of how I think generative AI is going to help address these dynamic changes in consumer expectations that are huge forces acting on the media industry today and on other industries as well. 

Krefetz: Can you point to anyone specific that you think is doing a good job?

Jain: Forbes is a customer of ours. Like every publisher, they have a search bar on their site, but traditionally, it hasn’t been very effective. Forbes has a tremendous amount of content, and they wanted to improve the consumer experience. They used Google’s Vertex AI Search and Conversation to implement Adelaide, which is their new search engine.

Adelaide is essentially a natural-language chat interface where you can ask a question however you want. You don’t have to think in terms of search keywords. Adelaide gives you a gen AI-generated summary. Then, below, it shows you all of the source articles within Forbes that the information came from. It has created much deeper, higher-quality engagement and extended duration on-site and within the content.
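For a sense of what sits underneath an experience like this, here is a minimal sketch of a grounded query against a Vertex AI Search data store that returns a generated summary plus the source documents it drew from. The project, engine, and serving-config IDs are placeholders; Forbes’s actual implementation details aren’t public.

```python
# Sketch of a grounded query against a Vertex AI Search data store.
# All resource IDs below are hypothetical placeholders.
from google.cloud import discoveryengine_v1 as discoveryengine

def ask(question: str) -> None:
    client = discoveryengine.SearchServiceClient()
    serving_config = (
        "projects/my-project/locations/global/collections/default_collection/"
        "engines/my-engine/servingConfigs/default_config"
    )
    request = discoveryengine.SearchRequest(
        serving_config=serving_config,
        query=question,          # natural-language question, not keywords
        page_size=5,
        # Ask the service for a summary grounded in the indexed articles.
        content_search_spec=discoveryengine.SearchRequest.ContentSearchSpec(
            summary_spec=discoveryengine.SearchRequest.ContentSearchSpec.SummarySpec(
                summary_result_count=5
            )
        ),
    )
    response = client.search(request)
    print(response.summary.summary_text)   # generative answer
    for result in response:                 # source documents behind it
        print(result.document.id)
```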

It took Forbes less than 2 weeks to implement the core aspect of the search and conversational AI. The launch took 2 months, because, of course, you have to make UI changes and other updates within the publisher stack.

Krefetz: Do you have other video-specific examples of gen AI’s impact? 

Jain: I think everyone is excited about video. It’s what we call multimodal generative AI—text, image, video, and audio; you can both input content in those formats and generate it. We released the Vertex AI Gemini model to address multimodal use cases.

Everyone is wondering, “Can I use multimodal generative AI to do dynamic ad creative?” One of our customers, Carrefour, is one of the largest retailers in the world. They’ve built their own marketing studio to be able to create ads so that each viewer can get a personalized ad. This is great because you can do it with automation and at scale, taking data signals in and then running an automated generative AI process to create campaigns that are very targeted to audience segments in media. That’s one area that agencies and media companies are experimenting with right now.
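To make the dynamic-creative idea concrete, the loop below asks a Gemini model on Vertex AI for ad copy tailored to each audience segment. This is an illustrative sketch, not Carrefour’s marketing studio; the segment descriptions, project ID, and model name are assumptions.

```python
# Illustrative segment-targeted ad copy generation on Vertex AI.
# Segments, project, and model name are placeholders, not Carrefour's setup.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical
model = GenerativeModel("gemini-1.5-flash")                   # assumed model name

segments = {
    "students": "price-sensitive, mobile-first, values convenience",
    "young families": "bulk shopping, weekly planning, values freshness",
}

for name, signals in segments.items():
    prompt = (
        f"Write one 15-word grocery promotion headline for an audience that is "
        f"{signals}. Keep the tone friendly and concrete."
    )
    print(name, "->", model.generate_content(prompt).text.strip())
```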

A company called Cineverse has an AI agent called cineSearch designed for implementation in a direct-to-consumer streaming service with a conversational interface. You can say, “I’m looking for the film where that guy was riding a black stallion into a sunset,” and you’ll be able to surface that using their agent. This means you can discover the content you’re looking for without having to go through a nonintuitive search interaction or a lot of scrolling. You can even input the type of thing you’re interested in [without knowing specifics], like “a slapstick comedy from the ’70s,” and get results.
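Under the hood, an experience like this typically pairs a language model with semantic retrieval over the catalog. The sketch below shows only the retrieval half, using a hypothetical embed() function and cosine similarity; Cineverse’s actual architecture isn’t described in the interview.

```python
# Sketch of semantic catalog retrieval for a conversational discovery agent.
# embed() stands in for any text-embedding model; it is a placeholder here.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: call an embedding model of your choice and return a vector."""
    raise NotImplementedError

catalog = {
    "The Black Stallion (1979)": "A boy and a wild black horse gallop along a beach at sunset.",
    "Blazing Saddles (1974)": "A slapstick Western comedy from the '70s.",
}

def find_title(query: str) -> str:
    q = embed(query)
    def score(description: str) -> float:
        d = embed(description)
        return float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
    return max(catalog, key=lambda title: score(catalog[title]))

# find_title("the film where that guy was riding a black stallion into a sunset")
```

In practice, the conversational layer would rewrite the viewer’s free-form request into a clean query, and the top matches would be handed back to the agent to present.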

Krefetz: Are chat and search going to be the easiest entry points for people to make use of AI? 

Jain: Those are probably the most obvious entry points. People can actually see how a conversational interface changes the dynamics of the user experience. Our Vertex AI Search and Conversation is an easy way to get started. I think you’re going to see that the conversational interface that is generative AI-powered will actually be the thing that takes off fastest across industries.

Another prediction: Providers of commerce tech stacks and commerce platforms will introduce these capabilities and allow you, as a direct-to-consumer streaming platform, to integrate them into what you’re doing. This makes it a lot easier for a media company to activate that capability and strike partnerships with retailers that are relevant to the content it’s providing.

I think every major company out there is experimenting today. We’ve talked to customers who say they have 1,500 use cases that they’ve identified within their organization. 

Krefetz: How do they pick the right ones?

Jain: [They do so by asking] which ones are going to give them the best ROI and how they can take them from pilot and experiment to production implementation. This is not a question of what your AI strategy is, but rather how AI can help you achieve your strategy.

I think it’s fair to say that generative AI may be overhyped in the short term, but underhyped in the long term. You need to tie the gen AI ideas back to the key strategic imperatives for your business—growth of the business, continuously improving customer experience, driving different ways of reaching consumers, and optimizing monetization.

Krefetz: What capabilities or skills will be needed? 

Jain: They need to just get started. Engage with different generative AI experiences. Do some workshops to articulate all of the possible use cases, and then go through a process of evaluating which use cases best support the business imperatives.

Next, they need to develop a matrix of how long it takes to implement and launch. What is the level of effort and resources required? What is the measurable business value? What’s the ongoing cost to operate? These are all things that can be tested because you can implement a prototype, and you test and see what it would look like. And then you have to imagine it at scale. This is not the purview of one group within a company; it’s actually a companywide function.
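One lightweight way to turn that matrix into something actionable is to score each candidate use case on the dimensions Jain lists and rank by the weighted total. The use cases, scores, and weights below are invented for illustration.

```python
# Illustrative scoring matrix for gen AI use cases. All numbers are made up;
# scores run 1 (poor) to 5 (strong), with "fast", "low effort", and "low cost"
# mapped to higher scores so that bigger totals are better.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    time_to_launch: int
    effort_and_resources: int
    business_value: int
    ongoing_cost: int

WEIGHTS = {
    "time_to_launch": 0.2,
    "effort_and_resources": 0.2,
    "business_value": 0.4,
    "ongoing_cost": 0.2,
}

def score(u: UseCase) -> float:
    return sum(getattr(u, field) * weight for field, weight in WEIGHTS.items())

candidates = [
    UseCase("conversational site search", 5, 4, 4, 4),
    UseCase("auto-repackaging articles as audio", 3, 3, 4, 3),
    UseCase("dynamic ad creative", 2, 2, 5, 3),
]

for u in sorted(candidates, key=score, reverse=True):
    print(f"{u.name}: {score(u):.1f}")
```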

Krefetz: Is there data supporting ROI?

Jain: I wish there were more. These are early days. Recently, I was talking with a telco about one of the use cases they had, and they asked, “Are these tools available?” I said, “They’re probably being developed right now. We provide the infrastructure of the platform. Then you can build on top of that. We can do it with you, or our partners can do it with you.”

Even the independent software companies that build the tools to do a lot of these capabilities are all in the same place, because this technology has just shown up in such a viable form in the last year. 

Krefetz: If specific commercial products are not available in the market, what then?

Jain: There’s a tendency to ask, “If these tools aren’t available, should I wait a couple of years until they’re ready and they’re robust?” In the past, in certain scenarios, the right approach might have been to wait and see. But right now, everybody needs to be engaged. This is a viable option because you can implement and test and prototype and do proofs of concept with gen AI really fast.

This does not mean sinking a whole lot of money into developing a new capability. You can use the platforms and the models that are out there and very quickly build conversational AI interfaces or search capabilities. You can get started very quickly and see the efficacy and value today and not have to wait for someone to show up with a packaged product to offer you. 

Krefetz: It seems like a number of companies have announced products integrating retail into streaming.

Jain: The integration of merchandising and commerce into a stream has been a dream for quite some time, like 20 or 30 years. The obvious place that everyone goes first is sports because there’s a lot of merchandise. Fans that are watching have a desire to do transactions and get stuff, so how do we unlock that value? I think we’re going to see a lot more, “While I’m watching this movie, can I buy that sweater or those shoes that I see on screen?” This is also coming now with the ability to do object detection.

The issue is, how do you do that in a way so that you’re not taking the viewer away from the watching experience? Now you’ll see a lot more of these experiences where you click and then there’s an overlay, or it changes the viewing screen so you don’t lose the game that you’re watching, but you can transact over here, or you can engage with a QR code and get your second-screen transaction going on your phone.

There are a multitude of different ways that that can happen. You can transact in the app, or you can be sent the link to transact. Do the content owner and streaming provider actually provide the full transaction in the experience? Or are they integrating through an API using someone else’s store capability, so they don’t have to worry about that infrastructure and managing the [personally identifiable information] and transaction data? I think generative AI is going to really facilitate this and make it easier to make those things happen. 
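As a toy example of the QR code, second-screen pattern Jain mentions (not any broadcaster’s implementation), the snippet below builds a product deep link for an item detected on screen and renders it as a QR code with the open source qrcode package. The storefront URL, product ID, and tracking parameters are invented.

```python
# Toy example: hand a detected on-screen product off to a second screen
# via a QR code. The storefront URL and parameters are hypothetical.
from urllib.parse import urlencode
import qrcode  # pip install "qrcode[pil]"

def product_qr(product_id: str, stream_id: str, timestamp_s: int) -> None:
    params = urlencode({
        "pid": product_id,   # item surfaced by object detection
        "src": stream_id,    # which stream or game the viewer was watching
        "t": timestamp_s,    # moment in the stream, for attribution
    })
    url = f"https://shop.example.com/checkout?{params}"  # hypothetical storefront
    qrcode.make(url).save(f"{product_id}_qr.png")

product_qr("team-jersey-042", "finals-game-7", 5312)
```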

Krefetz: How exactly will AI facilitate that?

Jain: One of the biggest things that’s going to change retail is conversational commerce. I can now interact with an AI agent that helps me find the product that I want, and I can do so in a very natural way, so that I’m not distracted by some search experience. It’s almost like I have a concierge who can help me find the thing I’m looking for, and maybe based on what they know about me, the content I’m watching, or the environment I’m in, can actually suggest more relevant products.

I think the concern that has always existed in media experiences that are digital is, “I don’t want to distract the consumer from the entertainment experience or the content they’re consuming. I don’t want to move them away from my experience to someone else’s experience. I don’t want to manage the risk associated with a negative experience that comes because the transaction didn’t work or because they got charged for something that they didn’t intend to be charged for, and now it’s associated with my brand.” As the IP owner and the content provider, I think about all of those things. Those risks go down as the consumer experience gets much better, which I think it will. 

Krefetz: Can we think of a large language model (LLM) as providing the skill to understand, communicate, and make inferences, while the corporate data stays private to the enterprise?

Jain: There’s a lot of confusion around this. The LLM is what allows a use case to have that conversational or search interface. Then there’s the body of data against which you are trying to get answers. That is the body of data you are grounding the search on, saying, “Use the large language model to understand what I’m saying and translate it to intent, and then, in this constrained body of information that’s private and proprietary, find the answer, then turn around and communicate that back through the large language model in a way that’s understandable.”

That data belongs to the enterprise. In our platform, it doesn’t actually leak into or get used by the large language models, which are frozen. They’re not being trained on your data. Our platform is designed to be enterprise-ready so that your data doesn’t go anywhere else.
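A minimal sketch of that grounding pattern appears below: retrieve passages from a private corpus, then let a frozen model answer only from what was retrieved. The retrieve() function, project ID, and model name are assumptions; this shows the general retrieval-augmented approach rather than any specific product API.

```python
# Sketch of grounding a frozen model on private enterprise data.
# retrieve() is a placeholder for whatever private search/index you operate.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical
model = GenerativeModel("gemini-1.5-flash")                   # assumed model name

def retrieve(question: str, k: int = 5) -> list[str]:
    """Placeholder: return the top-k relevant passages from your private store."""
    raise NotImplementedError

def grounded_answer(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return model.generate_content(prompt).text
```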

Krefetz: Should companies build an LLM?

Jain: Most companies probably shouldn’t be thinking about making their own large language models or foundation models, but they should be thinking about how they can find the right model for the right use case at the right cost in a very brand-safe, data-safe, privacy-compliant, and secure way.

Krefetz: This goes back to how some companies see social as a competitor to media in general.

Jain: I think what some social platforms have done is create an AI agent that is an assistant. Traditional media companies need to be thinking about the generation of users that are spending more time on social media and how to deliver content to those platforms and use it almost like a funnel to get at these users.

I think AI can certainly play a role. Now we’re in the space of ideas, because no one’s actually implemented this that I’ve seen. But can I take content that I own and have AI create repackaged short-form content at scale that’s driven to engage users on social platforms?

What this makes possible is personalization at scale. Today, I can’t take a bunch of creative people and say, “Give me a 15-second production that really emphasizes the thriller aspect of this drama” for every audience. That’s a very broad generalization. But imagine if I could do it in a way that was much more nuanced, based on what the data signals tell me will resonate with a particular audience. That is just one example of how I think media companies need to be thinking about AI and content, and, in fact, brands and retailers are already thinking about it exactly that way as well.
