
Why Should Broadcasters Make the Switch to the Cloud?


There are generally two sides to the argument when it comes to broadcasting from the cloud. One side laughs and says, "This is a mission-critical service, and the cloud can't provide the same reach as traditional broadcast delivery." The other side says, "Bring it on. In fact, we're already there." 

We held a panel at Streaming Media East Connect in May with a few people who are already there: Gerry Field, VP of technology and distribution services at American Public Television (APT); Renard T. Jenkins, VP of content transmission and production technology at WarnerMedia; Richard Oesterreicher, president and CEO of Streaming Global; and Shiva Paranandi, VP of technology operations and cloud architecture at Paramount+. If you're already there, this article may not interest you, but if you're starting to move your broadcast operations and delivery to the cloud, you've come to the right place. (Watch the full panel here.)

An advantage on the pro-cloud side is the ability to turn things on and scale quickly. The cons? Learning new skill sets and work approaches, dealing with security, and anticipating costs when almost every aspect of the workflow starts a meter running. 

Corporate culture plays an instrumental role in the cloud journey. "If you have a lot of technical debt, like a lot of the larger companies where you've had systems upon systems built over years, and then you throw in legacy processes that people really don't want to let go of, you are going to have a long journey," said Jenkins.

If you need to deliver something quickly, cloud resources offer an immediate way to add capabilities, said Oesterreicher. But he added a qualifier: "Digital delivery or IP-based delivery over a terrestrial network means costs go up as your viewer count goes up. … Having the flexibility to build out … for what your usual audience is versus what your peak audience is can have a significant impact on the overall quarterly results for a business."

Oesterreicher continued, "We enabled our customers to build out to their usual concurrent viewers and be able to have immediate on-demand overflow capability using resources that can spin up quicker. … We have customers seeing up to a 60% reduction in media delivery when comparing that to conventional IP-based delivery of content." 
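
To make that arithmetic concrete, here's a minimal sketch comparing the two provisioning models. Every rate and audience figure below is a hypothetical placeholder, not Streaming Global's numbers; the point is only the shape of the savings.

```python
# Sketch: provisioning for peak audience 24/7 versus provisioning for the
# usual audience with metered burst overflow. All figures are hypothetical.

HOURS_PER_MONTH = 730

def fixed_peak_cost(peak_viewers, cost_per_viewer_hour):
    """Capacity reserved for the peak audience around the clock."""
    return peak_viewers * cost_per_viewer_hour * HOURS_PER_MONTH

def usual_plus_overflow_cost(usual_viewers, peak_viewers,
                             cost_per_viewer_hour, peak_hours):
    """Capacity reserved for the usual audience, with on-demand overflow
    spun up only for the hours that exceed it."""
    base = usual_viewers * cost_per_viewer_hour * HOURS_PER_MONTH
    burst = (peak_viewers - usual_viewers) * cost_per_viewer_hour * peak_hours
    return base + burst

fixed = fixed_peak_cost(peak_viewers=100_000, cost_per_viewer_hour=0.002)
elastic = usual_plus_overflow_cost(usual_viewers=20_000, peak_viewers=100_000,
                                   cost_per_viewer_hour=0.002, peak_hours=20)
print(f"provision-for-peak: ${fixed:,.0f}/month")
print(f"usual + burst:      ${elastic:,.0f}/month")
print(f"savings:            {1 - elastic / fixed:.0%}")
```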

Of course, your mileage may vary. With transcoding, for instance, you pay for every bit that is transcoded. You have to be careful about what you transcode, and how much, rather than transcoding blindly for every device out there. 
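
A rough way to reason about that is to price out the ladder and ask which renditions your audience actually requests. The per-minute rates and device-share figures in this sketch are invented purely for illustration, not any provider's real pricing.

```python
# Sketch: estimate a monthly transcode bill for an ABR ladder and drop
# renditions that few devices actually request. All figures hypothetical.

LADDER = [
    # (label, per-minute rate in $, share of playback requests)
    ("2160p", 0.060, 0.04),
    ("1080p", 0.030, 0.45),
    ("720p",  0.015, 0.30),
    ("480p",  0.008, 0.15),
    ("240p",  0.004, 0.06),
]

def transcode_cost(minutes, ladder, min_share=0.0):
    """Cost of transcoding `minutes` of source into every rendition whose
    playback share meets the threshold."""
    kept = [(label, rate) for label, rate, share in ladder if share >= min_share]
    return kept, sum(rate for _, rate in kept) * minutes

minutes = 50_000  # library minutes transcoded per month
everything, full_bill = transcode_cost(minutes, LADDER)
trimmed, lean_bill = transcode_cost(minutes, LADDER, min_share=0.10)
print(f"blind ladder ({len(everything)} renditions): ${full_bill:,.0f}")
print(f"trimmed ladder ({len(trimmed)} renditions):  ${lean_bill:,.0f}")
```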

Moving to the cloud creates efficiencies and allows content owners to be a lot faster to market, says Renard T. Jenkins, VP of content transmission and production technology for HBO Max parent company WarnerMedia.

Standardized Formats

"One of the things that cloud-based solutions allow you to do is standardize at the beginning how you want to actually pull that content in and then be able to work throughout your entire process in a single format," said Jenkins. "In traditional broadcasts like live sports, we're acquiring in one format; every camera company has their own format. We're then pushing it into our trucks using SDI and then sending it back over satellite as ASI. From there, content either goes to post­production or to live, where the signal needs to be changed to the broadcast format for master control. Now we can actually start looking at some of the standards that are out there like Zixi, and we can do that from the beginning and then bring that into our process, and you don't have to change it. You can move that through editing and master control and use those types of protocols to streamline your process. Once you get into this space, it means a cost reduction because every time that you transcode or every time that you have to process something in the cloud, you get charged for it. It also creates efficiency in the way that you move your content and allows you to be a lot faster to market."

On Demand

If the days of moving hard drives have ended, much to the disappointment of couriers, which areas in your workflow should you start with? "A number of years ago, we moved our program submission, format, and process workflow to the cloud. It gives us the opportunity as a distributor to more quickly and efficiently be able to collect programming," said Field. 

This summer, most of the non-live and near-live programming that APT distributes will be moved to the cloud and will no longer be fed on satellite. The Corporation for Public Broadcasting funded a project to build a private multiprotocol label switching (MPLS) network as the backbone, with dedicated and managed bandwidth to all stations. The stations will go through an integration to upgrade traffic and automation systems. "[This is a] long-planned transition to a single cloud library for all stations. It eliminates redundant recording and storage for 356 stations," said Field. "Stations download files as needed. APT content is always available, not tied to a single satellite feed. Also, [there will be] no more rain fade refeeds, greatly reducing cost."
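
APT hasn't published its implementation details, but one common pattern for "download as needed" is a shared object store handing stations short-lived links. A minimal sketch, assuming Amazon S3 and boto3; the bucket name and key layout are hypothetical.

```python
# Sketch: one way stations might pull programs on demand from a single
# shared cloud library, replacing one-time satellite feeds. Assumes S3
# and boto3; bucket and key names are hypothetical, not APT's.

import boto3

s3 = boto3.client("s3")

def station_download_link(program_id: str, expires_seconds: int = 3600) -> str:
    """Return a short-lived URL a station's traffic/automation system can
    use to fetch a program file whenever it is scheduled."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "apt-program-library",             # hypothetical
                "Key": f"programs/{program_id}/master.mxf"},  # hypothetical
        ExpiresIn=expires_seconds,
    )

print(station_download_link("example-program-s01e01"))
```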

What if you're not quite to that stage yet? "You do have to approach it from a phased standpoint. One of the things that I think is a determining factor in how long it's going to take you is assessing how much technical debt you have with your existing infrastructure," said Jenkins. "Look for the low-hanging fruit to find those things that you can easily move to the cloud. One of the first things that people think about is your archive, especially if you have an archive, a ‘near-chive,' and a deep archive. You move the archive first to your ‘near-chive,' [and] you make copies and you keep it local so that you can continue to work." 
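
One way to express that archive/"near-chive"/deep-archive split is as a storage lifecycle policy. This sketch assumes AWS S3 storage classes and boto3; the bucket name and day thresholds are hypothetical, and it is one possible mapping of Jenkins's tiers, not his actual setup.

```python
# Sketch: a tiered-archive lifecycle policy. Objects move to a cheaper,
# still-readable tier (the "near-chive") after 30 days and to deep
# archive after 180. Bucket name and thresholds are hypothetical.

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="media-archive",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tiered-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": "masters/"},
            "Transitions": [
                # "near-chive": cheaper, but still quick to read back
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                # deep archive: cheapest, hours to restore
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    },
)
```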

"Obviously, the first questions are around financial implications," said Paranandi. "In a traditional data center, you use 100% of it, or you use 10% of it. You're just paying for it no matter what. In the cloud, that's not the case." Usage estimating becomes a very important part of the workload so you can scale on anticipated costs and needs. 

"The difference between capital expense and operational expense, that's really what a large part of this is. I mean the buy-in and the education that you need to do with the nontechnical staff in your company is probably just as important as anything else," said Field. "My CFO has got to understand what I'm talking about when I said that I'm going to do things differently and, hopefully, I'm going to bring in good news, and he's going to be happy—but I could also bring in very bad news if we don't do it right.

"I've used the example of how total cost of ownership for capital translates very easily to the total cost of operations," Field continued. "This is about the total number of dollars out of your pocket for doing all the things you want to do. We are very clearly in a hybrid workflow environment, and we are going to remain that way. There are some things that just don't make sense for us to do in the cloud. Our [quality control] is still very much an on-prem process. If we had to pay for that, it would add considerably to the cloud bills that we're paying."

American Public Television is moving most of its broadcast operations to the cloud, with one notable exception: quality control, which would be too expensive to move to the cloud, says Gerry Field, VP of technology and distribution services.

A Tale of Two Workflows

ViacomCBS, Paramount+'s parent company, has taken the first of two steps on its cloud journey. The entire streaming business, Paramount+, and some CBS news and sports properties have already moved to the cloud. This was roughly a 3-year transition, including a time when the company was running its on-prem and cloud services in tandem as it moved fully to the cloud. "[T]hat intermediate state, when you're migrating from the data center to the cloud, is extremely important because you've got to keep both systems running," said Paranandi. "That's double the effort, so you have to make sure there's enough automation and processes so you don't double your staff, but you still can keep all your uptime going."

The broadcast workflow is the next to move, and it's in the works. "It's not there fully yet, but, hopefully, in the next few years, we plan to be completely cloud," said Paranandi. This move will certainly benefit from the details that Paranandi and all of those involved have already ironed out. In brief, he outlined their strategy. In the data center, there are certain levels of freedom because you own it. "I think the biggest philosophy of mine that needed to change is [that] it's a shared responsibility," he said. "Preparing for that architecturally is pretty important. With the cloud, you get the scale in relatively less time compared to the data center, but it also means your architecture has to be ready to be scaled to that. 

"Every few years, we have the Super Bowl, and that scale is insane," continued Paranandi. "Our normal usage is much lower than on the day of Super Bowl, as an example. So that level of training was extremely needed. This is not just for the people working in operations or quality. You need to make sure whatever you're building can easily scale, seamlessly. You don't want to build this complex interdependency of services, and then you're scrambling at the last minute to move things around."

Latency issues continue to be a problem. "One big thing that we do, from a purely architectural perspective, is edge-caching of a lot of the [video-on-demand] content," said Paranandi. "It helps reduce the latency in the cloud quite a bit. When it comes to live streaming, there's a lot of the network backbone that we have to pay attention to. Where the content is sourced from and how it is distributed is pretty relevant."
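
The caching idea itself is simple. Real deployments lean on a CDN rather than application code, but a toy TTL cache shows why an edge hit avoids the origin round trip that adds latency.

```python
# Sketch: a toy TTL edge cache for VOD manifests. Illustrative only;
# production systems delegate this to a CDN.

import time

class EdgeCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, payload)

    def get(self, key, fetch_from_origin):
        """Serve from the edge if fresh; otherwise pay the origin round
        trip once and cache the result for subsequent viewers."""
        entry = self.store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]                    # edge hit: low latency
        payload = fetch_from_origin(key)       # origin miss: slow path
        self.store[key] = (time.monotonic() + self.ttl, payload)
        return payload

cache = EdgeCache(ttl_seconds=300)
cache.get("show/ep1.m3u8", lambda k: f"<manifest for {k}>")  # origin fetch
cache.get("show/ep1.m3u8", lambda k: f"<manifest for {k}>")  # edge hit
```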

When Streaming Global first started talking to customers, many companies initially said that latency isn't that important to them, according to Oesterreicher. "Once there was an option for actually delivering low latency at scale and at a cost that could be accepted, latency became really important, really fast. I went through a little bit of a shock in how quickly the market shifted on that."

New Skill Sets

Scaling requires that software developers, analysts, and systems and workflow architects complement the traditional engineer's broadcast skill set. "I'm a strong believer that it's a lot easier for me to train someone internally who's already there than to go out and bring someone else in and then go through that whole process of getting them onboarded. … If someone is willing to learn, and they have historical knowledge, that's going to be a really valuable player for you," said Jenkins. 

The crossover in skills means moving to a different work approach. "We had to embrace this entire SRE DevOps mindset for the cloud, which wasn't there in the data center. You can do that at a data center, but the cloud was a forcing function for us to embrace that practice," said Paranandi. DevOps (or Google's version, called Site Reliability Engineering) is an approach that combines software development and IT operations into one role. While it's common in many engineering shops, it is a big step if you're not yet doing it. 

The move to the cloud requires significant shifts in mindset and architecture, says Shiva Paranandi, VP of technology operations and cloud architecture at Paramount+.

Each of the speakers on the panel was looking in the rearview mirror, at least to some degree, and each made the transition to the cloud sound easy, when of course it's anything but. However, all of these experts have come out the other side and are consistently able to scale and run their environments in the cloud.

"When you are training people to work within the cloud, one of the things that you have to do is make sure that you spin processes down. It's a fundamental thing, but it's one of those things that if you have not been working in this space, you sort of set a process, and you let it go. [In the cloud, this approach brings] a big bill at the end of the month," said Jenkins. "You find out that a process has been running constantly in the background. It's something as simple as that, that you have to really get people to focus on and that makes a very big difference because it's not set and forget, like we do with a lot of traditional broadcasts." 

All of the panelists are looking at machine learning and automation to help with cloud monitoring. "Monitoring the cloud is completely different from how you're doing it in a data center," said Paranandi. "You're still monitoring for uptime and reliability, but some of that cost is for monitoring the cloud itself, so we can keep our cloud providers honest. That is a big part of the job." 
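
"Keeping them honest" implies measuring from your own side rather than trusting the provider's dashboard. A minimal probe sketch, with a placeholder endpoint; real monitoring stacks do this continuously from many locations.

```python
# Sketch: an independent availability/latency probe whose numbers can be
# reconciled against the provider's SLA reports. Endpoint is a placeholder.

import time
import urllib.request

def probe(url: str, timeout: float = 5.0):
    """Return (ok, seconds) for one request, measured from our side."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except OSError:
        ok = False
    return ok, time.monotonic() - start

ok, seconds = probe("https://origin.example.com/healthz")
print(f"ok={ok} latency={seconds * 1000:.0f}ms")
```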

Another important consideration is security. "You have to be aware of what is involved from an InfoSec perspective in the cloud, versus what was in the data center, which was a closed environment. In the cloud, you're in a shared tenant space, so having to secure the sources and everything was also a pretty big challenge for us," said Paranandi. "It's a different set of tools. It's a different set of processes and a different level of expertise on the cloud when it comes to InfoSec. So we had to almost, I wouldn't say rebuild, but retrain and build the security team for the cloud."

Unified Theory of Everything

"The advantage of working in the cloud is also being able to integrate our back-end business systems [main business management, financial, and program databases] as well and to be able to finally get it to the point where we're doing a whole lot less duplicate data entry," said Field. "We do a fair amount of work in the back end, so that we can have simple dashboards that are on people's desktops and can actually take something that was really sort of a very labor-intensive, physical transfer of content and just make it much more automated." 

APT's latest technology upgrade will also come with staffing requirements, according to Field. "For APT, there will be significant staff training for onboarding and operations, including engineering, traffic, programming, and management staffs." 

And one important lesson is that sometimes, old school is best. "We're actually doing cloud content over broadcast. Some of the efficiencies that you get from broadcasting content are just really hard to beat," said Field. "The content that we distribute eventually winds up on transmitters. It's a question of how it's getting there." APT provides content for public television stations, and while last-mile delivery is being done primarily over broadcast transmitters, that transmitter output is also being fed to a streaming service. 

"Broadcast, for now, remains a fixed linear schedule. ATSC 3.0 may change some of that," said Field. COVID's work-from-home lockdown requirement and the use of cloud services have meant an acceleration of the standard technology upgrade path. 

The cloud has redefined broadcasting, and now the broadcasting workflow has merged with the streaming workflow. There's still a balancing act between the two, but the merger has opened up new opportunities to push out content as efficiently and cost-effectively as possible. If you haven't yet begun the transition, the rewards may just make the switch worth it. 
