
How to Meet the Challenges of Cloud-Based Media Creation


Can filmmaking really move to the cloud? It’s one thing to stream content out to consumers; it’s another thing entirely to take raw and high-resolution files into a cloud workflow to replace on-prem processes. MovieLabs lays out 10 principles (see Figure 1) in its Vision 2030 recommendations (see our November/December 2023 cover story, Cloud Atlas: MovieLabs’ 2030 Vision Roadmap for Industry-Wide M&E Interoperability).

This article takes a look at requirements for color correction, asset storage, editing, transport, and interoperability. Along the way, I’ll refer to these summarized MovieLabs principles:

  • Assets are ingested and stored in the cloud (public cloud, private cloud in data centers, or even on-prem) and do not need to be moved.
  • Applications go to assets (and not the other way around, which is more common today).
  • Workflows use common underlying data formats and metadata.
  • Media workflows are non-destructive and can be automated.
  • Security is based on a zero-trust model.

Figure 1: MovieLabs’ 10 principles of cloud migration, as outlined in its Vision 2030 initiative

Cloud Basics

Changing media creation workflows to meet MovieLabs’ 2030 Vision requires planning to include cloud-native applications that are designed and developed to take advantage of containerized architecture, microservices, and the inherent security and interoperability that each platform provides.

The arguments for cloud services are the same here as everywhere else; what sets movie production and post apart is the sheer size of the content. The common premises are these: Collaboration is easier with cloud applications; bursty, inconsistent workloads gain elastic capacity and cost savings; and archive storage comes in various tiers and price points. Steady-state work, by contrast, often looks more attractive on-prem.

In the MovieLabs classification, cloud can be any combination of public cloud, private cloud in data centers, and private cloud on-prem. “Files are big in media,” says Mike Szumlinski, CPO of Backlight Creative and former founding partner at iconik before its acquisition by Backlight. “It’s almost always faster to bring the human time to the media than it is to bring the media to the humans. Transferring a full day’s shoot is either another full day of uploading or an overnight FedEx of a drive and then copying all that data for another couple hours. That’s lost time.”

Just as a stunt double lets a scene be captured any number of times without the star getting hurt, proxies are the digital equivalent: they get complicated activities moving without waiting and without harm. A low-resolution proxy that is checked into the cloud saves on transfer time, and people can start making decisions about the footage before those really big files actually get to where they need to go.

“You don’t want to have to actually manipulate or move large files around if you don’t have to, and that’s one of the things that the 2030 Vision will help solve,” says Renard Jenkins, president of SMPTE and an industry advisor and analyst. “The first thing for me is, how do I get a low-res version of that that I can actually work with?”

Rough cuts and dailies breakdowns can all be done in lower resolutions that can be generated quickly on set or in a postproduction facility and then distributed very quickly to a wide range of people. This is one of MovieLabs’ key recommendations.

Is there a standard for proxies? In a word, no. When the remote editing application CuttingRoom (see Figure 2) was built, its creators focused on ensuring that their proxy was as responsive as possible both in speed of creation and frame accuracy. “An hour of video takes 7 seconds to create a proxy with us,” says Helge Høibraaten, CuttingRoom’s co-CEO and founder. “An hour of video with multiple layers of video, audio, and graphics takes about 5 or 6 minutes to render in full hi-res.”
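
To make the proxy idea concrete, here is a minimal sketch of proxy generation, scripted in Python around ffmpeg. ffmpeg and every parameter shown are illustrative assumptions; none of the vendors quoted here disclose their actual transcode settings.

```python
# Minimal sketch: generate a low-resolution editorial proxy with ffmpeg.
# The tool choice, codec, and all parameters are assumptions for illustration.
import subprocess
from pathlib import Path

def make_proxy(source: Path, out_dir: Path) -> Path:
    """Transcode a camera original into a small H.264 proxy for review."""
    out_dir.mkdir(parents=True, exist_ok=True)
    proxy = out_dir / f"{source.stem}_proxy.mp4"
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", str(source),
            "-vf", "scale=-2:720",   # downscale to 720p, keep aspect ratio
            "-c:v", "libx264",
            "-preset", "fast",
            "-crf", "28",            # aggressive compression: preserve the look, not full fidelity
            "-c:a", "aac", "-b:a", "128k",
            str(proxy),
        ],
        check=True,
    )
    return proxy

# Example: make_proxy(Path("A001_C002_0715.mov"), Path("proxies/"))
```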

Figure 2: Cloud-based video editor CuttingRoom

Color Correction by Proxy

Let’s start with cloud media creation’s hardest use case: color correction. “How can IMAX film cameras used to shoot movies like Oppenheimer use tools to go from film to digital dailies in a cloud environment and in an automated fashion?” asks Abdul Rehman, CPO of IMAX.

Rehman says that it’s critical to match the environment you are in physically when you are creating the content and to ensure that you have a practically similar look and feel for the content in a virtual environment. “A very common practice used is to create a proxy to reduce the data size and preserve the look of the content. Hopefully, the decisions that you’re making based on the proxy will translate the same way. People still have concerns if the proxy really represents the right colors and the details the way the real content is supposed to look,” he notes.

When you make decisions and apply changes on a shot-by-shot basis, the back-and-forth work is too taxing, because it means downloading and uploading terabytes of data. “This becomes much more challenging with virtual production and use of cloud, because you don’t have the flexibility like in on-prem storage,” says Rehman.

“Having a file of that size and trying to manipulate it through the cloud in a semblance of real time, so that you can see the changes that you’re making in order to be able to make future changes, is difficult,” says Jenkins. “It’s still a little bit clunky, and color correction, much like music, has a rhythm to what the artist is doing.”

There have been advancements in the last 2–3 years that have increased computational power. “Does it create a more efficient way to do color correction? I would say slightly,” notes Jenkins.

“Tools have to get better every step of the way for this to become a much more used approach,” Rehman agrees. So color correction still has a way to go.

Asset Management

The easiest use case—and the one most people start with—is asset storage. There are two main principles for asset storage being put forth by MovieLabs’ Vision 2030: All content goes to the cloud first, and then applications go to the content.

“This is absolutely fundamental to how this process is going to work,” says Philippe Brodeur, CEO and founder of Overcast (see Figure 3). “The time that is saved by putting everything into a single repository and then bringing the applications to that repository absolutely outweighs any sort of on-premise setup.”

The economic reality of Hollywood is that studios don’t really want to invest in capital, and so the conversation about CapEx to OpEx takes over, says Reza Rassool, founder and chair of Kwaai (and former CTO of RealNetworks). “I recall a CBS executive saying, ‘I don’t want to be in the spinning-disc business anymore. I don’t want to have a room with 19-inch racks full of spinning discs, and as soon as I’ve purchased the system, the asset value just depreciates.’ ”

Why then is going to the cloud a challenge? Is it content size, technical capability, security? “Unfamiliarity is the main issue,” says Brodeur. “We’re working with a lot of distributed production companies who work for all the major U.S. broadcasters and studios, and they store content on Dropbox, on Google Drive, on all these prosumer solutions that everyone knows that they’re not supposed to be using for [production media] storage, but they’re doing it anyway. Why? Because they’re familiar with it.”

The next comment from customers is usually about costs. “That whole argument that the cloud is more expensive than storing on premise is a myth,” Brodeur says. “The amount of content that you can put in the cloud now and run different applications over it makes it far more economical than trying to do it on premise.”

“The [number of] customers that actually understand the storage they’ve purchased is surprisingly low in the cloud space,” says Szumlinski. “Nobody really understands anything around the security protocols, the speeds, how they work. They just want the cheapest thing that’s the fastest to upload and download. We get a lot of customers coming to us from Dropbox or Google Drive saying, ‘It’s great, but it doesn’t scale.’ The way they work fundamentally under the hood is everybody has to have a copy of the media. What happens if you have 300 terabytes of storage and you need to get 4 terabytes to somebody that only has a 1-terabyte internal hard drive?”

Figure 3: Overcast’s SaaS Cloud Video Hub

Editing

CuttingRoom is a professional, cloud-based, SaaS video-editing solution. “If you still have all of your content on-prem,” says Høibraaten, “there are multiple ways of connecting CuttingRoom to your storage.” Høibraaten points to MinIO, a software application that can be installed in a public, private, or hybrid cloud or on-prem, as one viable solution for organizations preparing for cloud migration (see Figure 4).

“It turns your storage into S3-compatible storage, which means that any tool that can browse an S3 object store can now browse your store on-prem as well.

“And then we have other customers that are connecting AWS Outpost into their on-prem service, making a hybrid world between their AWS accounts and their on-prem service,” says Høibraaten. “How do you connect to the storage of content? It can’t be just in your own storage. It has to be able to connect elsewhere.”
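
As a rough illustration of what S3 compatibility buys you, the sketch below points the standard boto3 S3 client at a hypothetical on-prem MinIO endpoint. The endpoint, bucket, and credentials are placeholders, not details of any customer setup described here.

```python
# Minimal sketch: browse an on-prem MinIO bucket through the standard S3 API.
# Endpoint, bucket name, and credentials are placeholder assumptions.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.studio.internal:9000",  # hypothetical on-prem MinIO
    aws_access_key_id="PLACEHOLDER_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)

# Any S3-capable tool can now enumerate on-prem media the same way it would
# a bucket in a public cloud region.
resp = s3.list_objects_v2(Bucket="dailies", Prefix="2024-06-12/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```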

Editing, as a genre, is very cloud-ready. CuttingRoom outputs edit decision lists (EDLs), which colorists can then import into a color-correction application. “The big thing here is taking away all of this hassle from the creatives who have had to learn enormous amounts of logistics about how to move files around, transcode them for efficient editing, etc.,” according to Høibraaten.
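
To show what that handoff looks like in practice, here is a small sketch that pulls events out of a CMX3600-style EDL. The sample events and field layout are illustrative assumptions; the article does not specify which EDL dialect CuttingRoom emits.

```python
# Minimal sketch: parse events from a CMX3600-style EDL so a downstream
# color tool could relink shots. Format details are assumptions.
import re

SAMPLE_EDL = """\
TITLE: ROUGH_CUT_V1
001  A001C002 V     C        00:05:10:00 00:05:14:12 01:00:00:00 01:00:04:12
002  A001C007 V     C        00:12:01:05 00:12:08:00 01:00:04:12 01:00:11:07
"""

TC = r"\d{2}:\d{2}:\d{2}:\d{2}"  # SMPTE-style timecode, HH:MM:SS:FF
EVENT = re.compile(
    rf"^(\d{{3}})\s+(\S+)\s+(\w+)\s+(\w+)\s+({TC})\s+({TC})\s+({TC})\s+({TC})"
)

for line in SAMPLE_EDL.splitlines():
    m = EVENT.match(line)
    if m:
        num, reel, track, cut, src_in, src_out, rec_in, rec_out = m.groups()
        print(f"event {num}: reel {reel} {src_in}-{src_out} -> record {rec_in}")
```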

Figure 4: Public, private, and hybrid cloud options for MinIO

Compute Power

Many postproduction applications, especially in the realm of 2D and 3D effects, require massive compute power (a combination of system-level and dedicated graphics processing capability). It’s very difficult to have that compute power available on-prem for the same reason as the storage: It’s a capital asset that’s not being used all of the time. Cloud computing offers elastic compute capability, so it can grow and shrink with demand, accommodating the frequent spikes in processing that postproduction generates. Investing in that sort of compute capability in the cloud “doesn’t necessarily mean that you won’t have powerful local computers,” Rassool says, “but I think the bulk of the compute is going to be in the cloud.”
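
The CapEx-versus-OpEx tradeoff Rassool describes can be sketched with back-of-the-envelope arithmetic. Every number below is a made-up assumption for illustration; substitute real quotes from your vendors.

```python
# Toy cost model for bursty post-production compute. All figures are
# invented assumptions, not vendor pricing.
PEAK_NODES = 100           # GPU nodes needed during a render spike
AVG_UTILIZATION = 0.15     # spikes are rare; the fleet idles 85% of the time
ONPREM_NODE_COST = 30_000  # purchase + power + cooling per node over 3 years
CLOUD_NODE_HOUR = 3.00     # on-demand GPU instance, per hour
HOURS_3YRS = 3 * 365 * 24

onprem_total = PEAK_NODES * ONPREM_NODE_COST  # must buy for the peak
cloud_total = PEAK_NODES * AVG_UTILIZATION * HOURS_3YRS * CLOUD_NODE_HOUR

print(f"on-prem, sized for peak:    ${onprem_total:,.0f}")  # $3,000,000
print(f"elastic cloud, pay-per-use: ${cloud_total:,.0f}")   # ~$1,182,600
# At steady 24/7 utilization the comparison flips, which is why steady-state
# work often stays on-prem.
```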

Compute uses can be as complicated as color correction or special effect creation or as simple as democratizing who has access to content. “We have a client who gets charged anywhere between $10,000 and $25,000 to have an archive pulled by their ad agency,” says Brodeur. “It takes days, if not weeks, to get the content. You’ve got all of these people running around looking for an archive when, literally, you should just be able to search.” And anyone who has access rights should be able to perform these searches.

Collaboration

COVID pushed online collaboration into every industry, and moviemaking is no exception. Backlight Creative has several products geared toward collaborative production. cineSync is a shared high-resolution review and collaboration tool, and iconik provides asset management (see Figure 5). “The cloud is doing all the orchestration; the actual files themselves are local because they have to be for that level of resolution,” says Szumlinski. “You couldn’t do it in a browser because the formats might not be supported or the color resolution accuracy wouldn’t be the same.” Staff from anywhere can collaborate or access the asset management resources.

“Once choices are made, you can start to do your conforms in the place where the high-res media does need to live, which oftentimes is on-prem because, realistically for most customers, that’s the most cost-effective way to go,” says Szumlinski.

Figure 5: Backlight Creative’s cineSync 5.3 and iconik are integrated to enable postproduction collaboration and shared asset management

Network

For dailies, a well-designed network means individuals can look at footage in real time, make comments, and interact with people who may be nowhere near them. You’re going to need a fat pipe, and those pipes have to be provisioned in advance, says Jenkins.

Here’s a recent scenario from one New York City-based video editor: Video gets downloaded from the camera, then copied to shuttle drives. Those drives go out for prep (in this case, to Company 3) to create the dailies. The dailies are then uploaded via Aspera to a server in Los Angeles, and when the editor logs into their PC edit system the next morning, the files are available in the Los Angeles-based production cloud.
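
One hop of a relay like this can be automated with a simple watchfolder script. The sketch below pushes newly prepped files to a hypothetical S3 bucket; the pipeline described above actually used Aspera for the upload, so treat the transport and all names here as assumptions.

```python
# Minimal sketch: watch a prep-output folder and upload new dailies to a
# production bucket. Paths, bucket, and polling interval are assumptions.
import time
from pathlib import Path

import boto3

WATCH_DIR = Path("/mnt/dailies_out")  # hypothetical prep-output folder
BUCKET = "la-production-cloud"        # hypothetical destination bucket
s3 = boto3.client("s3")
seen: set[Path] = set()

while True:
    for f in WATCH_DIR.glob("*.mxf"):
        if f not in seen:
            s3.upload_file(str(f), BUCKET, f"dailies/{f.name}")
            print("uploaded", f.name)
            seen.add(f)
    time.sleep(30)  # crude polling; a real pipeline would use event notifications
```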

“You’re trusting that people’s internet is fast enough. When you’re editing, you’re jumping around a lot,” says freelance movie editor Max Blecker. “It’s important for it to be responsive and not have to wait every time you do something. Doing it purely in the cloud (as of today) is going to have lag unless you have really fast internet.”

GlobalM has traditionally been in the contribution and point-to-multipoint distribution business, sending out high-availability broadcasts (sporting events, news coverage, etc.) for distribution (see Figure 6). The level of strength and redundancy that GlobalM provides supports a production facility’s ability to use cloud services. The company positions itself as both a replacement for and complement to satellite and private fiber networks. “Our technology can either complement existing networks or replace them,” says Paul Calleja, CEO of GlobalM. “We can create redundancy switching that allows us to provide a higher service-level agreement than satellite or even some fiber networks using the cloud. We can have two paths like you would traditionally have on a fiber network that can fail over.”

This is the type of robustness you want powering the connection for running daily rushes in the cloud. “The real interesting part is being able to do UHD HDR 120-megabit signal to 50 rightsholders around the world without touching a satellite or a private fiber network,” says Calleja. “A production could be on location anywhere using a Starlink antenna, and you want to get these rushes back to the studio in Los Angeles. You could have a 4K HDR stream going from location to location, and it will feel like real time with the latency figures of a real-time phone call.” GlobalM also provides “return channel” capabilities that allow for interaction and collaboration, Calleja adds.
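
The dual-path failover Calleja describes can be sketched generically: try the primary route, and fall back to the secondary if it fails. The endpoints and the HTTP transport below are illustrative assumptions; GlobalM’s actual protocol stack is not described in this article.

```python
# Minimal sketch of dual-path delivery with failover. Real contribution
# feeds use dedicated streaming protocols; HTTP here is just for illustration.
import requests

PATHS = [
    "https://ingest-primary.example.net/push",    # hypothetical path A
    "https://ingest-secondary.example.net/push",  # hypothetical path B
]

def send_chunk(chunk: bytes) -> str:
    """Deliver one media chunk, returning the path that succeeded."""
    last_err: Exception | None = None
    for url in PATHS:
        try:
            r = requests.post(url, data=chunk, timeout=2)
            r.raise_for_status()
            return url                 # delivered; report which path won
        except requests.RequestException as err:
            last_err = err             # path down; try the next one
    raise RuntimeError(f"all paths failed: {last_err}")
```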

Architecting specifically for this type of distribution is a specialized skill. It means building the networks, monitoring them, maintaining uptime, and keeping them functioning at the optimal rate that premium productions demand.

“With the exception of banking or healthcare, production is probably the most demanding” task a network can support, says Jenkins. “When you talk about networks, it’s performance that you should be building toward,” he adds. “That allows you to be able to predict how long it’s going to take you to actually deal with all of this material.”

Figure 6: A schematic showing how GlobalM’s point-to-multipoint distribution works

Automation Capabilities

An API-first methodology means that pretty much everything you can do in a cloud production tool, you can also automate, which is another major MovieLabs recommendation. Once systems can communicate with one another, they should be able to do things programmatically.

In addition to using cloud-native applications and having assets in the cloud with well-designed networks, the next important part is ensuring that work can travel easily from one application to the next. This is done via APIs designed for communicating information or data in real time. This communication path is crucial for application handoff from one process to another. “The idea is that you’re looking at cloud workflows as a complete ecosystem and not simply one problem,” says Jenkins.
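
As a hedged illustration of that kind of handoff, the sketch below chains two hypothetical REST services so that a finished render automatically triggers a QC pass. Every endpoint, field, and payload is invented for the example; no real product’s API is being quoted.

```python
# Minimal sketch: programmatic handoff between two cloud tools via their APIs.
# Both services, their routes, and their JSON fields are hypothetical.
import time

import requests

RENDER_API = "https://render.example.com/v1"  # hypothetical service A
QC_API = "https://qc.example.com/v1"          # hypothetical service B

# Kick off a render job on service A.
job = requests.post(
    f"{RENDER_API}/jobs", json={"asset": "s3://dailies/a001.mxf"}
).json()

# Poll until service A reports the job is done...
status = {}
while status.get("status") != "done":
    time.sleep(10)
    status = requests.get(f"{RENDER_API}/jobs/{job['id']}").json()

# ...then hand the output straight to service B; no human in the loop.
requests.post(f"{QC_API}/checks", json={"asset": status["output_url"]})
```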

APIs can come in two flavors. They are either the main intelligence for a product or an add-on to it. “Why that matters,” Szumlinski says, “is because if it’s a feature of the product, it may not be full-featured. If it’s the fundamental tool that the product is built on, that means that there’s an inherent stability because your own product doesn’t work if the API doesn’t work. APIs are essential to being able to integrate with anything and everything.”

“We saw early on that we needed to create an architecture where integration points between us and different services are very thin,” says Høibraaten. “We have advanced integration toolkits that have thin specialized layers for the different services that we’re connected to.”

“To create workflows programmatically in a dynamic fashion puts an additional level of requirement in terms of readiness of these tools,” says Rehman. This includes making them more secure with less human involvement.

Interoperability

One of the key challenges of cloud migration in the media creation world, according to Rehman, is “data transfer and getting access to data. You have to make sure that the content is secure but also accessible in a very uniform fashion across different tools. That means interoperability becomes a bit more challenging.”

Rehman points to QC as an example. “A number of QC tools can only be deployed on Windows. Windows has issues with S3, so you need a tool that can help you access S3,” he says. “If one tool is on one cloud and the other tool is on the other cloud platform, right off the bat you have data transfer issues. The second thing,” he continues, “is that tools need to deploy and scale within a specific zone or a specific VPC [virtual private cloud]. Because this is highly valuable content, I would be surprised if somebody is doing it in public. The set of tools that you need to use then need to support a private cluster-based deployment.”

That’s where the interoperability challenges begin. “Ideally, for your tool and my tool to work together, if S3 is the way to go, which is the standard storage on AWS, then both of us should be able to read to S3,” Rehman explains. “If you’re reading from it, then I need write access. I think all of those things are not set up right now in a very seamless fashion.”
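
One pragmatic pattern for that kind of scoped, cross-tool S3 access is presigned URLs, sketched below with boto3: the tool that owns the bucket issues short-lived URLs so another tool can read or write a specific object without shared credentials. The bucket and key names are placeholders, and this is one possible approach, not necessarily what Rehman’s tools use.

```python
# Minimal sketch: scoped cross-tool access to S3 objects via presigned URLs.
# Bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")

# Grant another tool one hour of read access to a single master file.
read_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "studio-masters", "Key": "reel1/shot_042.exr"},
    ExpiresIn=3600,
)

# Or let it write its QC report back next to the asset.
write_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "studio-masters", "Key": "reel1/shot_042_qc.json"},
    ExpiresIn=3600,
)

print(read_url)
print(write_url)
```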

Effective cloud platforms are the glue that holds these workflows together when it comes to interoperability, Rehman says. “For example, if there’s one data transfer protocol that is being used to move from one VPC to another, you need to be able to read using that data protocol. If different applications are in that set, then for them to be interoperable, you need to be able to support that. It may not be that one is more secure than the other one, it’s just that you need to support it to be able to provide interoperability.”

Metadata

If an application supports an open taxonomy or has a good API, metadata will travel between applications. Making sure the metadata flows in and out of various storage and applications is key. “Trying to standardize metadata in different data centers is very, very difficult. But if you’re doing it through APIs, it makes a very different experience,” says Brodeur.

“We support several sidecar formats. So even if you move content to your cloud storage and you’re using tools that we don’t know anything about and we don’t integrate with, we have delivered metadata on the sidecars that most tools out there are able to read,” says Høibraaten. “It’s a very important thing that you can take your metadata with you.”
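
Here is a minimal sketch of the sidecar pattern Høibraaten describes: write the metadata as a file next to the media so any tool that lands on the same storage can pick it up. The JSON schema below is invented for illustration; real sidecars might be XMP, ALE, or a vendor-specific format.

```python
# Minimal sketch: write a JSON metadata sidecar next to a media file.
# The field names are illustrative, not a standard schema.
import json
from pathlib import Path

def write_sidecar(media: Path, metadata: dict) -> Path:
    sidecar = media.parent / (media.name + ".json")  # a001.mxf -> a001.mxf.json
    sidecar.write_text(json.dumps(metadata, indent=2))
    return sidecar

write_sidecar(
    Path("a001_c002.mxf"),
    {
        "scene": "12A",
        "take": 3,
        "camera": "A",
        "notes": "circled take; check flare at 00:00:41",
    },
)
```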

For asset management, Szumlinski says, “iconik does a great job of storing metadata around media.” On the task management side, “ftrack can use the same taxonomy between different applications.”

These are the kinds of principles MovieLabs had in mind for its Vision 2030 goals. These two products come from the same company and would be expected to work together this way; the next phase is to have many more (if not all) applications share data seamlessly.

Cloud Migration’s Many Moving Parts

There are many moving parts here: picking the right applications, having a robust network to connect to cloud services, supporting interoperability (and metadata exchange) between applications, ensuring all security requirements are met, and estimating load and budgeting for this new environment.

Having trained technical staff is also critical to cloud migration. You need operations people who are experts in setting up, monitoring, and managing these workflows. Not only does each application need to do its discrete job, but security also needs to be maintained, including an audit trail of who accessed what and when. Your operations staff members “have to be experts in cloud technologies, including transfer, security, and access scalability cost estimates,” says Rehman.

Most important of all, Rehman argues, is the fundamental process of successfully moving your media assets and operations into the cloud. “If that falls, then everything else is just fluff.”

Nadine Krefetz has a consulting background providing project and program management for many of the areas she writes about. She also does competitive analysis and technical marketing focused on the streaming industry. Half of her brain is unstructured data, and the other half is structured data. She can be reached at nadinek@realitysoftware.com or on LinkedIn.

