Three CDN Strategies To Lower Live Streaming Latency
Live streaming has become an integral part of our daily lives, transforming the way we consume entertainment, interact with others, and experience events in real time. From social media platforms to online gaming, live sports, and e-learning, demand for seamless, high-quality live streaming continues to grow. As the industry evolves, Content Delivery Networks (CDNs) play a crucial role in ensuring that live streams are delivered efficiently, reliably, and with minimal latency. In this article, we will explore three CDN strategies that can lower latency and improve the overall live streaming experience.
During live streaming, video content is typically transmitted from the origin to the player in the form of encoded segments. These segments are small, manageable chunks of video data that are delivered sequentially to ensure smooth playback. Each segment typically contains 3-6 seconds of video, a length that balances the need for low latency with efficient buffering. To enhance performance and reduce latency, these segments are cached in CDNs. CDNs are strategically distributed networks of servers that store copies of the video segments closer to end-users. By caching content at multiple locations, CDNs minimize the distance data must travel, thereby reducing latency and improving the overall streaming experience for viewers.

However, this approach typically results in a minimum of 3 seconds of end-to-end latency (assuming 3-second segments), because the player must wait for an entire segment to be received before playback can begin. While it might seem beneficial to use smaller segments to reduce latency further, doing so can lead to increased overhead and inefficiencies. Smaller segments require more frequent requests and acknowledgments, which can strain network resources and potentially lead to buffering issues, especially under varying network conditions.
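The latency floor described above can be sketched in a few lines. This is a simplified model under the article's assumptions (the player must hold at least one complete segment before playback; the function name and parameters are illustrative, not from any real player API):

```python
def min_live_latency(segment_duration_s: float, buffered_segments: int = 1) -> float:
    """Lower bound on end-to-end latency for whole-segment delivery.

    The player cannot start until at least one full segment has arrived,
    so the floor is one segment duration; players that buffer extra
    segments for stability add a full duration per buffered segment.
    """
    return segment_duration_s * buffered_segments

# With 3-second segments and a single buffered segment, the floor is 3 s;
# a player holding two 6-second segments cannot get below 12 s.
print(min_live_latency(3.0))
print(min_live_latency(6.0, buffered_segments=2))
```

In practice, real-world latency is higher once encoding, network transfer, and player buffering policies are added on top of this floor.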
Chunked transfer encoding
Introduced as a streaming mechanism in HTTP/1.1, Chunked Transfer Encoding addresses the latency challenges of traditional segment-based streaming. This technique breaks video segments into smaller chunks that are sent to the player as they are encoded, rather than waiting for the entire segment to be completed. By streaming these chunks in real time, chunked transfer encoding reduces the delay between the origin and the player. This approach lowers latency by allowing the video player to start playback as soon as the first chunk of a segment is received. As a result, viewers experience a more immediate streaming experience, bringing live broadcasts closer to real-time. Importantly, even though segments are broken into smaller chunks, this method adds only minimal framing overhead: the chunks travel within a single HTTP response, so no additional requests, connections, or headers are needed.
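The wire format behind this is simple: per the HTTP/1.1 specification, each chunk is prefixed with its size in hexadecimal, and a zero-length chunk terminates the body. A minimal sketch of that framing (the function is illustrative; real servers emit chunks incrementally over a socket rather than building one buffer):

```python
def chunked_encode(chunks):
    """Frame an iterable of byte chunks using HTTP/1.1 chunked
    transfer coding: a hex size line before each chunk, CRLF after,
    and a zero-length chunk to mark the end of the body."""
    out = b""
    for chunk in chunks:
        out += f"{len(chunk):x}\r\n".encode("ascii") + chunk + b"\r\n"
    return out + b"0\r\n\r\n"

# A 3-byte chunk is framed as: size line "3", the payload, then the terminator.
print(chunked_encode([b"abc"]))
```

Because each chunk is self-delimiting, the receiver can hand data to the decoder the moment a chunk arrives, without knowing the final segment size in advance, which is exactly what lets playback begin mid-segment.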
CDN optimization #1: Content awareness
CDNs typically operate in a pass-through manner, meaning they are not inherently aware of the specifics of the content they serve. They primarily focus on efficiently caching and delivering content based on user requests, without delving into the content's structure or semantics. However, live streaming presents a unique opportunity due to its predictable pattern. For instance, in HTTP Live Streaming (HLS), manifests often contain sequentially named segments, such as segment1.ts, segment2.ts, and so on. This predictability can be leveraged by pull-model CDNs, which can proactively read the manifest and cache upcoming segments before clients request them. By doing so, CDNs can reduce latency and ensure that segments are readily available at the edge, minimizing the time viewers spend waiting for content to load.
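The prediction step can be sketched as follows. This is a hypothetical helper, not part of any CDN product: it assumes the simple sequential naming the article describes, and a production implementation would instead rely on manifest fields such as #EXT-X-MEDIA-SEQUENCE and handle non-numeric URIs:

```python
import re

def predict_next_segments(manifest_lines, count=2):
    """Given HLS media-playlist lines with sequentially numbered
    segment URIs (segment1.ts, segment2.ts, ...), guess the next
    URIs so a pull-model CDN can warm its cache before clients ask."""
    last = None
    for line in manifest_lines:
        # Match URIs shaped like <prefix><number>.ts; skip tag lines.
        m = re.fullmatch(r"(.*?)(\d+)(\.ts)", line.strip())
        if m:
            last = m
    if last is None:
        return []
    prefix, num, suffix = last.group(1), int(last.group(2)), last.group(3)
    return [f"{prefix}{num + i}{suffix}" for i in range(1, count + 1)]

# After seeing segment1.ts and segment2.ts, prefetch segment3.ts and segment4.ts.
print(predict_next_segments(["#EXTM3U", "#EXTINF:3.0,", "segment1.ts",
                             "#EXTINF:3.0,", "segment2.ts"]))
```

An edge node running this against each manifest refresh can issue origin fetches for the predicted URIs immediately, so the segments are already cached when the first viewer requests them.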
CDN optimization #2: Partial object caching
To effectively support chunked transfer encoding, CDNs need caching strategies that cache and serve individual chunks of an encoded segment rather than entire segments. This approach allows a CDN to begin caching a segment before all of its chunks are available, continuously updating the cached object as new chunks arrive. As a result, clients requesting the segment can receive its chunks in real time, ensuring a more immediate and seamless streaming experience. By dynamically managing and delivering these smaller data units, CDNs can reduce latency and buffering. Furthermore, edge caching strategies can be enhanced to store these chunks closer to end-users, minimizing the distance data must travel and further reducing latency.
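A minimal in-memory sketch of this idea is shown below. The class and method names are invented for illustration; a real edge cache would also handle eviction, byte ranges, and concurrent writers, which are omitted here:

```python
class PartialObjectCache:
    """Sketch of partial object caching: a segment entry is created
    when its first chunk arrives, appended to as the origin streams
    more chunks, and readable by clients at any point in between."""

    def __init__(self):
        self._chunks = {}     # segment name -> list of byte chunks
        self._complete = set()

    def append_chunk(self, name, chunk, last=False):
        """Record a chunk arriving from the origin; `last` marks
        the final chunk of the segment."""
        self._chunks.setdefault(name, []).append(chunk)
        if last:
            self._complete.add(name)

    def read(self, name, offset=0):
        """Return the chunks received so far, starting at chunk
        index `offset`, and whether the segment is complete."""
        return self._chunks.get(name, [])[offset:], name in self._complete
```

A client handler would call read() in a loop, forwarding newly arrived chunks and resuming from its last offset, so delivery proceeds while the origin is still producing the tail of the segment.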
CDN optimization #3: Asynchronous Network I/O
CDN servers are typically configured with a fixed number of network threads to handle requests from clients. In a traditional synchronous I/O model, a thread processes each incoming request sequentially. When a player requests a segment, a thread reads the entire segment from the origin and sends it back before moving on to the next request. This model is efficient when segments are readily available, as the thread can quickly complete the task and move on. However, in scenarios where segments are streamed over time, such as with chunked transfer encoding, threads remain occupied for much longer, which can lead to thread exhaustion. Asynchronous network I/O offers a solution to this challenge by releasing a thread while it waits for new chunks from the origin. Instead of blocking, the thread can handle requests from other clients, leading to more efficient resource utilization and lower infrastructure cost.
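The effect can be illustrated with Python's asyncio, standing in for the event-driven I/O a CDN server would use. In this sketch (all names are illustrative; fake_origin simulates an origin connection that emits chunks over time), awaiting the next chunk yields control to the event loop, so a single thread interleaves many slow streams instead of blocking on one:

```python
import asyncio

async def relay_segment(origin_chunks, send):
    """Forward chunks from an origin stream to a client as they
    arrive; each `await` releases the thread to serve other clients."""
    async for chunk in origin_chunks:
        await send(chunk)

async def fake_origin(chunks, delay=0.01):
    """Stand-in for an origin connection that produces chunks over time."""
    for chunk in chunks:
        await asyncio.sleep(delay)
        yield chunk

async def main():
    out1, out2 = [], []
    async def send1(c): out1.append(c)
    async def send2(c): out2.append(c)
    # Two slow streams served concurrently on one thread.
    await asyncio.gather(
        relay_segment(fake_origin([b"a", b"b"]), send1),
        relay_segment(fake_origin([b"c", b"d"]), send2),
    )
    return out1, out2

print(asyncio.run(main()))
```

With blocking I/O, the second client would wait until the first stream finished; here both streams progress chunk by chunk, which is why async servers sustain far more concurrent long-lived connections per thread.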
[Editor's note: This is a contributed article from Akhil Ramachandran. Streaming Media accepts vendor bylines based solely on their value to our readers.]