Best Practices for Premium Video Streaming, Part 1
Reliance on the internet for access to premium video content has made streaming services central to media and entertainment businesses worldwide. Thanks to a growth rate ten times that of traditional TV, OTT video already accounts for 15 percent of total industry revenues and is projected to make up one-third of the market by 2022, according to Digital TV Research.
Whatever the use case might be, online video service providers share the need to ensure maximum performance at the scale their business requires. But true differentiation in quality and consistency can only be achieved through adherence to best practices in all phases of distribution.
This is the first in a series of articles that will take a deep dive into these best practices and offer guidance for maximizing performance for online video streaming—from points where source video is captured all the way to end user devices. Let's explore phase one: managing the first mile.
Starting at the Front End of the Distribution Chain
Every step taken from post-production of an asset or from linear channel playout to points of ingestion onto origin servers must ensure that those servers will be able to propagate content to the next points in the chain—and at the quality levels producers expect. As OTT distribution of linear content becomes central to content providers' and their distributors' monetization goals, the stakes are higher than ever.
These new business models require that any linear stream be accessible on any display at the low-latency, high-quality levels of traditional TV, and that OTT content accessed for on-demand viewing start playing as quickly as traditional VOD content. The starting point is the "first mile," where the goal is to match the performance achieved by traditional modes of primary distribution over satellite and legacy fiber transport, with full backup redundancy and the ability to quickly identify and mitigate issues.
The live video distribution chain, showing multiple modes of delivery to consumers. (Image: Akamai)
Understanding End-to-End Context for Distribution
For TV networks, motion picture studios, and other providers that want to maximize first-mile performance, determining best practices begins with understanding requirements and key performance indicators (KPIs) for latency, quality, redundancy and other factors. This stage of distribution covers all paths for content destined for OTT delivery from post-production output, including IP-based conduits to OTT affiliates and content delivery networks (CDNs) for direct-to-consumer (DTC) operations, as well as legacy transport to multichannel video programming distributors.
For linear content, latency must be minimized to the point that there is virtually no difference in reception timing between a traditional TV channel and the internet. That means the roughly 30-second delay between broadcast and reception on internet-connected devices must be cut to about 10 seconds. While linear latency requirements do not apply to on-demand content, other requirements, including startup time, picture quality, rebuffering, and service availability targets, must be met regardless of use case.
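To make that target concrete, the sketch below breaks a hypothetical glass-to-glass latency budget into stages. The per-stage timings and the assumption of a player buffer holding three 2-second segments are illustrative only, not prescribed values.

```python
# Hypothetical glass-to-glass latency budget for a linear OTT stream.
# All stage values below are illustrative assumptions, not measured figures.
budget = {
    "capture_and_encode": 2.0,      # seconds spent encoding at playout
    "first_mile_transport": 1.0,    # contribution path to the origin
    "packaging_and_origin": 1.0,    # segmenting and publishing manifests
    "cdn_delivery": 1.0,            # edge propagation and request handling
    "player_buffer": 3 * 2.0,       # e.g. three 2-second segments buffered
}

total = sum(budget.values())
print(f"Estimated end-to-end latency: {total:.1f} s")
for stage, seconds in budget.items():
    print(f"  {stage}: {seconds:.1f} s")
```

In this illustration the player buffer dominates the budget, which is why segment duration and buffer depth are usually the first levers examined when chasing a roughly 10-second target.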
Post-Production Content Formatting
Preparing content for initial post-production playout is the first step in maintaining end-to-end fidelity in resolution, bit depth, color, and other source material parameters. But there are some nuances to note when preparing that content for OTT distribution.
Take interlaced content, for example. While many transcoders are capable of de-interlacing interlaced TV programming, de-interlacing is best performed at the source to maintain quality. Doing so also reduces the degradation introduced by downstream modifications that alter content from its original state, such as frame scaling.
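As a minimal sketch of de-interlacing at the source, the snippet below drives ffmpeg's yadif filter from Python. It assumes ffmpeg is installed and on the PATH; the file names are placeholders.

```python
import subprocess

def deinterlace(source: str, output: str) -> None:
    """De-interlace a source file with ffmpeg's yadif filter.

    Assumes ffmpeg is available on the PATH; file names are placeholders.
    """
    subprocess.run(
        [
            "ffmpeg",
            "-i", source,      # interlaced source from playout
            "-vf", "yadif",    # apply the yadif de-interlacing filter
            "-c:a", "copy",    # leave the audio untouched
            output,
        ],
        check=True,
    )

deinterlace("interlaced_master.mxf", "progressive_master.mp4")
```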
Encoding at Playout
There are two main approaches to converting the production egress signal into an ingest format for first-mile distribution: mezzanine encoding and direct-to-origin encoding.
With mezzanine encoding, a single representation of the content is sent over the first mile to the ingest network, where it is prepared for delivery. Content providers must be mindful of the landscape of end user devices (bit depths, frame rates, aspect ratios, and dynamic range settings) to meet requirements for delivering movies and TV programming. Bitrates for mezzanine-level output should target the highest quality level required to maximize viewer experience. The specific combination of bitrate, codec, and settings will be a function of the available first-mile bandwidth, the quality target, and the expected generational loss in the content preparation chain; a general rule of thumb for well-designed transcoders is to expect 25%-40% generational loss for like codecs. Additionally, using standardized codecs ensures compatibility with common transcoding services and allows tried-and-tested analysis metrics to be leveraged.
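The sketch below illustrates what that rule of thumb implies when a mezzanine asset passes through successive downstream transcodes. The function name and the two-generation scenario are hypothetical; the loss percentages simply mirror the 25%-40% range cited above.

```python
def quality_after_generations(generations: int, loss_per_generation: float) -> float:
    """Fraction of original quality retained after repeated transcodes.

    loss_per_generation is the assumed fractional loss per transcode
    (the rule of thumb above suggests 0.25-0.40 for like codecs).
    """
    retained = 1.0
    for _ in range(generations):
        retained *= (1.0 - loss_per_generation)
    return retained

# Two downstream transcode generations at the optimistic and pessimistic
# ends of the 25%-40% rule of thumb.
for loss in (0.25, 0.40):
    print(f"{loss:.0%} loss per generation, 2 generations: "
          f"{quality_after_generations(2, loss):.0%} of source quality retained")
```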
With direct-to-origin encoding, best practice requires that content be delivered to origin servers in the multiple bitrate profiles, or renditions, used for adaptive bitrate (ABR) streaming, so that quality can be rendered optimally in each device display category. This typically means at least seven bitrate profiles, although four or five may suffice in scenarios such as 3G and 2G mobile networks where access bandwidth is limited.
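A minimal sketch of such a ladder is shown below. The resolutions and bitrates are illustrative assumptions, not recommended values, and the trimming function simply demonstrates how a constrained access network ends up with fewer renditions.

```python
# Hypothetical ABR ladder with seven renditions; resolutions and bitrates
# are illustrative only, not recommended encoding settings.
FULL_LADDER = [
    ("1080p", 6_000_000),
    ("1080p", 4_500_000),
    ("720p",  3_000_000),
    ("720p",  2_000_000),
    ("540p",  1_200_000),
    ("432p",    700_000),
    ("360p",    400_000),
]

def ladder_for_network(max_bandwidth_bps: int) -> list[tuple[str, int]]:
    """Trim the ladder for constrained access networks (e.g. 2G/3G),
    where four or five renditions may suffice."""
    return [r for r in FULL_LADDER if r[1] <= max_bandwidth_bps]

print(ladder_for_network(2_000_000))  # constrained mobile profile -> 4 renditions
```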
Choosing the Right IP Transport
When selecting a mode of IP transport, minimizing latency and avoiding quality degradation should be the primary goals. Providers should first consider implementing the latest advancements in HTTP/2. Configuring content for HTTP delivery allows providers to push it to points of ingestion on origin servers in the same transmission control protocol (TCP)-based form used for ABR streaming, eliminating a processing step at the transcoders.
That said, relying on TCP over first-mile distribution is often an impediment to meeting the goals set for broadcast TV-caliber delivery of OTT content. TCP ensures that IP packets reach their destinations and are properly sequenced before clients render them, resulting in high reliability, but also in high latency caused by packet flow interruptions that increase at higher bitrates. The longer the distance video signals travel over the internet, the more interruptions and rebuffering events occur.
An alternative transport mode, the user datagram protocol (UDP), is connectionless and requires no handshake or acknowledgments between sender and receiver. UDP achieves lower latency and better bandwidth utilization than TCP, but packets that are blocked or dropped in the flow are simply lost.
Recent UDP-based transport protocols, including the IETF QUIC standard, lock in these latency and utilization improvements without sacrificing the reliability associated with TCP.
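The contrast between the two transport modes can be seen in a minimal sketch using Python's standard socket module: TCP requires a connection and acknowledged delivery, while UDP sends datagrams with no setup at all. The host, port, and payload are placeholders, not a real ingest endpoint.

```python
import socket

PAYLOAD = b"video-segment-bytes"                 # placeholder payload
HOST, PORT = "ingest.example.com", 9000          # placeholder ingest endpoint

# TCP: a handshake plus an ordered, acknowledged byte stream. Reliable,
# but retransmissions and head-of-line blocking add latency.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp_sock:
    tcp_sock.connect((HOST, PORT))
    tcp_sock.sendall(PAYLOAD)

# UDP: no connection setup and no acknowledgments. Lower latency, but
# lost or dropped datagrams are not retransmitted by the protocol itself.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp_sock:
    udp_sock.sendto(PAYLOAD, (HOST, PORT))
```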
Meeting Requirements for Quality Control
Maintaining quality control over first-mile distribution starts with multipath redundancy, which ensures content will be delivered without interruption to all ingestion points. For global live event streaming with audiences in the millions, such as the Olympics or the World Cup, providers should have at minimum two fully diverse paths for delivering content, and in practice three or more.
Consistent performance is also required on an ongoing basis for routine 24x7 linear feeds, where operating over a continuous time frame heightens the risk of interruption. A two-path redundancy model should be considered the baseline, and depending on the channel or content, operators may opt for the three-path redundancy models used for high-profile live event delivery.
Whatever the chosen redundancy, providers must maintain continuous performance monitoring and analysis across first-mile distribution points, and they must make sure that, together with their internet service providers (ISPs), they can address anything that might cause a poor user experience. The ability to monitor telemetry feeds independently of the ISPs' internal quality control mechanisms, gaining visibility into the provided transport, is also key. Aggregation and instant analysis of raw telemetry data can reduce time to mitigation by activating alternative paths before disruption occurs on the primary. Content providers can meet these operational challenges through the monitoring, analytics, and mitigation functions that are essential to maintaining control over end-to-end performance.
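As a rough sketch of that kind of telemetry-driven mitigation, the code below evaluates per-path health metrics and switches to a backup path when the primary degrades. The path names, metrics, and thresholds are hypothetical; real values depend on the service's KPIs and the telemetry the ISPs and monitoring tools actually expose.

```python
from dataclasses import dataclass

@dataclass
class PathHealth:
    name: str
    packet_loss: float   # fraction of packets lost over the last window
    latency_ms: float    # recent round-trip latency

# Illustrative thresholds; real values depend on the service's KPIs.
MAX_LOSS = 0.01
MAX_LATENCY_MS = 150.0

def healthy(path: PathHealth) -> bool:
    return path.packet_loss <= MAX_LOSS and path.latency_ms <= MAX_LATENCY_MS

def select_path(paths: list[PathHealth]) -> PathHealth:
    """Prefer the first healthy path; otherwise fall back to the least-lossy one."""
    for path in paths:
        if healthy(path):
            return path
    return min(paths, key=lambda p: p.packet_loss)

paths = [
    PathHealth("primary", packet_loss=0.03, latency_ms=120.0),    # degrading
    PathHealth("secondary", packet_loss=0.002, latency_ms=95.0),
    PathHealth("tertiary", packet_loss=0.001, latency_ms=140.0),
]
print(select_path(paths).name)  # -> "secondary"
```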
High-value video providers that adhere to these first-mile best practices can minimize any impediments on contribution paths that might undermine achievement of their goals—the most important of which is delivering a superior viewing experience.
[This is a vendor-contributed article from Akamai. Streaming Media accepts articles from vendors based solely on their value to our readers.]