
A Broadcaster’s Cloud Migration Primer


This article offers a primer on how a traditional broadcaster could start to move services to a cloud provider. I spoke with an engineer from a government-owned European television station and subscription service who agreed to walk us through the process without direct attribution.

“We’ve created what you call the common cloud platform team that sets up the environment and related services to deploy a cloud instance,” says the engineer. This team makes sure there’s tooling to deploy new code and services. It is responsible for security and scaling, so not every single team has to do the same work to learn the most effective way to scale.

Working in a cloud environment requires a shift in how workflows are provisioned and used. Essentially, you’re setting up a CPU (and/or a GPU) and assigning work to it. A common mistake for someone transitioning to cloud services is forgetting to turn a service off after use. Service limits that cap spending can prevent a badly configured service from running up an unexpectedly large bill that arrives 30 days later.

“In cloud workflows, you have to be more explicit about permissions and firewall rules,” says the engineer. “It also requires you to understand your workloads, such as how many CPUs you are going to use to run something, how much storage you’ll need, and how many viewers you’re expecting to stream to.”
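Those sizing questions reduce to arithmetic once you pick numbers. A back-of-the-envelope sketch follows; every figure in it is an illustrative assumption, not a number from the interview.

```python
# All figures are illustrative assumptions.
viewers = 500_000               # expected concurrent viewers
avg_bitrate_mbps = 5            # per-viewer delivered bitrate
cdn_offload = 0.95              # fraction of traffic served from CDN cache
catalog_hours = 2_000           # hours of content to store
ladder_mbps = 15                # total bitrate across all renditions

# Origin egress the cloud account actually pays for, in Gbps:
# only cache misses reach the origin.
origin_egress_gbps = viewers * avg_bitrate_mbps * (1 - cdn_offload) / 1000

# Storage for the packaged catalog, in terabytes.
storage_tb = catalog_hours * 3600 * ladder_mbps / 8 / 1e6
```

With these assumptions, half a million viewers at 5 Mbps behind a 95%-efficient CDN still leave about 125 Gbps of origin egress to provision and pay for.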

A request to move a workflow to the cloud could stem from a number of requirements: to transition from CapEx to OpEx, to provide redundancy, to replace end-of-life on-prem applications, or to support a specific sporting event that’s expected to be very popular.

Cloud-Native Vendors

One of the inevitable questions in any cloud migration: build, buy, or both?

“There are a number of areas where, if you’re buying from external vendors, you have to do a lot of work yourself if you aren’t lucky enough that they’ve done the work for you. Some have absolutely, but it’s a minority at the moment,” says our engineer. “When we started, we looked at switching on vendors for some of our stream infrastructure that we wanted to run live. We were replacing all our live apps in the live stream and wanted a vendor that supported multi-period DASH. That turned out to be hard.”

The engineer outlined a couple of things to keep in mind when choosing vendors:

  • Make sure you find vendors that actually know the cloud and have written the application to be cloud-native (and we’re not even looking yet at multi-cloud).
  • Understand the performance of cloud virtual machines and cloud storage, and understand how that performance impacts your product.

Cost isn’t really a barrier, the engineer says; it’s more of a change to thinking in a consumption-based way: “The cloud is kind of nebulous to the C-suite. You’re not buying a physical thing, and some people tend to view that as a risk. The benefits of scale are what we’ve primarily focused on here. There are other benefits, like security and redundancy, that we can discuss in the future.”

How do you find the exact balance in which you provision enough services so you don’t come up short, but you also don’t spend too much money? Testing configurations should almost go without saying, but the engineer I spoke with notes that some organizations don’t recognize the need for testing and tend to proceed without it. “They expect things to work the same in the cloud as they did on-prem, and that’s not the case.”

Still, some deployments won’t really benefit from moving to the cloud. A workload with fairly steady demand and no significant peaks will most likely be more cost-effective on-prem.

At a recent Streaming Media Connect event, Andy Beach, CTO of media and entertainment at Microsoft, likened the difference between cloud and on-prem to “cattle and pets.” To paraphrase Beach’s analogy, cattle are part of the food chain, but you keep feeding pets. However, the challenge is that most people don’t figure out what their pets cost to keep.

If most typical broadcast media companies considering cloud migration were to figure out how much their on-prem systems cost and do a direct cost comparison, they would likely conclude that more services should run in the cloud. Odds are that those that do take the leap into cloud services won’t regret the decision from a financial perspective—at least until they forget to spin them down.

The engineer I spoke with discusses such a sporting event: “We wanted to scale to an audience we wouldn’t be able to reach with the on-premise infrastructure. Either we had to go and buy a lot more hardware, or we had to find another solution. That’s where the cloud came in. We’re likely to have somewhat similar peaks to other streaming services, but because we have live, it might not be the same sort of peak we see as somebody who has mainly VOD.”

A Few Workflows

“The first workloads we moved had the most peaks, which was the video platform,” the engineer recalls. “The video platform processes the metadata about what’s available. It contains information about what content is watched, what’s next in the series, and if a viewer wants to resume viewing. It also has all the ways of browsing content, like news, drama, and VOD.”

For the authentication and access management workflow, the organization went with a cloud-based SaaS provider. Next come the consumer commercial parts, like selecting the right streaming package. This means upgrading, downgrading, or any sort of partner integration in which you get entitlements from being a customer of a cable provider. The web front end is also cloud-based now.

“Most of our streaming originates in the cloud, with the exception of our live signals from on-premise because that’s where the TV is produced,” our source says.

Processing a piece of content scales very easily. As the streaming services have matured, they’ve licensed larger content catalogs. The engineer notes, “We’ve licensed big catalogs, run on the cloud. Some of the day-to-day encoding runs in the basement data center. It’s doing the same work essentially, but we haven’t updated the on-prem technology, so we’re looking to phase this out.

“One piece of content might take 45 minutes, but we can run hundreds of them at the same time. We also need to have storage allocated. If it was on-premise, we might be limited. In the cloud, we can do hundreds or thousands at a time if we wanted to.”
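The elasticity claim is easy to see with simple numbers. The figures below are illustrative, not the broadcaster's actual fleet:

```python
# Illustrative numbers: 300 titles, 45 minutes of transcode time each.
minutes_per_title = 45
titles = 300

# On-prem: a fixed pool of 10 encoders processes titles in batches.
on_prem_workers = 10
on_prem_wall_clock_hours = (titles / on_prem_workers) * minutes_per_title / 60

# Cloud: spin up one worker per title, then spin them all down.
cloud_wall_clock_hours = minutes_per_title / 60

# Total compute-minutes are identical in both cases; only the wall
# clock (and the idle hardware you'd otherwise own) differs.
```

Under these assumptions, the same catalog takes 22.5 hours on the fixed pool but 45 minutes in the cloud, and the cloud workers stop billing the moment they finish.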

Scaling Costs

The engineer’s organization has a diverse set of offerings that includes scheduled broadcasts, VOD, and pop-up channels. “Many organizations haven’t had to estimate costs in the way you would in the cloud,” the engineer says. “This requires an estimate of how much programming is going to be done over the next year and how many viewers are expected to watch it. That’s not something broadcasters are used to, because programming just fits into a live channel. Planning for the content that you’re going to run is pretty much spot on, but how are you going to predict whether it will be 500,000 users or 550,000 users?”

Scale is more challenging to predict with some events than with others. “Some things we can predict, like we know a sports event is going to be popular,” the engineer says. “Some predictions are hard. You end up trying to make sure that you can scale under most circumstances, but that means you have to over-provision capacity, and that adds more cost. Or you can use auto-scaling groups, which programmatically scale capacity without user involvement. I don’t think cost estimating is particularly hard, as long as you accept that you’re going to be off by 10 or 15%.” Getting more exact numbers “is almost impossible.”
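Accepting a 10% to 15% forecast error translates directly into how much headroom to provision. A sketch with assumed numbers (the per-node capacity is a hypothetical figure):

```python
import math

# Illustrative capacity plan for a live event.
forecast_viewers = 500_000
error_margin = 0.15             # the accepted forecast error
viewers_per_node = 20_000       # sessions one packager/origin node serves

# Provision for the top of the error band, not the point estimate.
peak_viewers = forecast_viewers * (1 + error_margin)
nodes_needed = math.ceil(peak_viewers / viewers_per_node)
```

The margin is the cost of certainty: those extra nodes are the over-provisioning the engineer describes, unless an auto-scaling group adds them only when demand materializes.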

Advance provisioning is key to managing scale, the engineer explains. “We’ll start provisioning ahead of time, and if it’s something hitting our resource allocation, we catch it before it becomes a problem. Even if we get those settings right, it might not be the same settings that work well for others.”

Other things to plan for include the following:

  • Data transfer costs out of Amazon Web Services
  • How effectively your CDN caches
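Those two items interact: the better the CDN caches, the less origin egress you pay for. A sketch with assumed figures (the $/GB price is illustrative, not a quoted rate):

```python
# Illustrative model: total delivery is fixed; the cache hit ratio
# decides how much of it ever touches (and is billed from) the origin.
delivered_tb = 500
egress_price_per_gb = 0.09      # assumed origin egress price, $/GB

origin_cost = {}
for hit_ratio in (0.80, 0.95, 0.99):
    origin_gb = delivered_tb * 1024 * (1 - hit_ratio)
    origin_cost[hit_ratio] = origin_gb * egress_price_per_gb

for ratio, cost in origin_cost.items():
    print(f"cache hit {ratio:.0%}: origin egress ~${cost:,.0f}")
```

Moving the hit ratio from 80% to 99% cuts the modeled origin egress bill by a factor of 20, which is why cache efficiency belongs on the planning list next to the transfer rate itself.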

“I think it’s probably easier for me to understand the cost of the cloud because everything has a price,” the engineer says. “I can model my workloads and how much data transfer and how many CPUs I’m going to use.”

Other things to focus on:

  • What kind of CPU and GPU is needed for what you’re running
  • How much storage you need and how easily you need to access it
  • What happens when your service is turned off

“I don’t see people doing the same cost estimation for on-premise,” the engineer says. “Very rarely do you have an internal price for what it costs to actually run workloads on-premise. What usually happens is that people plan and buy X number of servers, put them in a rack, and then we have a bill for electricity over time. But that doesn’t tell you the cost of your workloads. That tells you the cost of the servers. You’re not actually pricing your workloads because your workloads take up only 10% of what you’re using there.”
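The engineer's 10% figure makes the point concrete. A sketch with assumed numbers (both figures are hypothetical):

```python
# Illustrative: what a workload really costs on under-used hardware.
server_cost_per_year = 8_000    # assumed amortized hardware + power, per server
utilization = 0.10              # workloads use 10% of the capacity bought

# The invoice prices the server; dividing by utilization prices the work.
effective_cost_per_capacity_unit = server_cost_per_year / utilization
```

At 10% utilization, each unit of compute actually consumed costs ten times what the server invoice suggests, which is the comparison most on-prem shops never make.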

Nadine Krefetz has a consulting background providing project and program management for many of the areas she writes about. She also does competitive analysis and technical marketing focused on the streaming industry. Half of her brain is unstructured data, and the other half is structured data. She can be reached at nadinek@realitysoftware.com or on LinkedIn.
