
The State of the Stack


An understanding of how ISPs, telcos, CDNs, and online video platforms fit into each layer of the online video stack is crucial to making smart purchasing decisions. What follows is an overview of what fits where and how the various players' roles are changing.

What Is the Online Video Stack?

First, let me explain how I am using the concept/term "stack" throughout this article. The Open Systems Interconnection (OSI) stack is a familiar model used worldwide by network engineers to define the roles of functions within the network. It groups those functions together and, in doing so, helps define the "interfaces" (protocols) between them that make vendor implementations interoperable. It's unlikely you are reading this article if any of that is news to you.
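For readers who want the layers spelled out, here is a rough sketch, in Python purely for convenience, of how I think of this article's players against the seven OSI layers. The groupings are my own shorthand for the discussion that follows, not part of the OSI standard itself.

    # Illustrative only: my own shorthand mapping of OSI layers to the
    # roles discussed in this article, not anything normative.
    OSI_VIDEO_ROLES = {
        1: ("Physical",     "fibre, copper, radio -- telco plant"),
        2: ("Data link",    "Ethernet and carrier L2 services bought by CDNs"),
        3: ("Network",      "IP routing and peering -- telcos and CDN backbones"),
        4: ("Transport",    "TCP/UDP sessions carrying the streams"),
        5: ("Session",      "stream and viewer session management"),
        6: ("Presentation", "codecs and container formats"),
        7: ("Application",  "HTTP origins, caches, players, online video platforms"),
    }

    for layer, (name, role) in sorted(OSI_VIDEO_ROLES.items()):
        print(f"L{layer} {name}: {role}")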

I will explain a little more about the CDN operator comparisons I have included and what I mean by them, but before going into that detail, we also need to fix a perspective on the CDN aspects we are focusing on here. It's important to note that not all CDNs handle video; many handle game services, software updates, webpage proxying, and application acceleration.

For the purposes of this article, I am looking at CDNs that provide internet video and audio delivery services among their offerings (both on-demand and live). I am interested in a CDN's ability to deliver live streaming in particular as a prequalifier, since on-demand services are not terribly complex and require little more than a basic hosting and distribution architecture. Sure, there are many ways to do on-demand hosting badly and many ways to do it well, but it's generally just about buying bandwidth, racks, and servers. To increase capacity, add more of each.

The ability to cache video-on-demand (VOD) content in real time, "live," as copies of it are transferred from source to edge and "through" the cache (so later copies can be delivered from the cache) also highlights how high-speed telecom networks are very much at the heart of well-run networks. A bad network in a VOD CDN will cause problems for the initial users who are playing the content from source. Subsequent users, watching from the cache, will probably suffer far less from the consequences of bad networking, since the cached content is "closer" to them. Given that this will be the case for most users, it's generally relatively easy to build a low-cost VOD CDN, even one using public, busy, cheap internet routing between the caches.
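To make the "through the cache" idea concrete, here is a minimal pull-through cache sketch in Python. The cache directory is hypothetical, and a real VOD edge would add eviction, range handling, TLS, and concurrency control; this just shows the hit/miss logic described above.

    # A minimal sketch of the pull-through caching pattern. The first
    # request for an object crosses the network to the origin and leaves
    # a copy behind; later requests are served from the local cache.
    import os
    import hashlib
    import urllib.request

    CACHE_DIR = "/tmp/vod-cache"   # hypothetical cache location

    def fetch(url: str) -> bytes:
        """Serve from cache if we hold the object; otherwise pull it
        'through' the cache so later viewers hit the local copy."""
        os.makedirs(CACHE_DIR, exist_ok=True)
        path = os.path.join(CACHE_DIR, hashlib.sha256(url.encode()).hexdigest())
        if os.path.exists(path):                   # cache hit: close to the viewer
            with open(path, "rb") as f:
                return f.read()
        with urllib.request.urlopen(url) as resp:  # cache miss: cross the network once
            data = resp.read()
        with open(path, "wb") as f:                # store for subsequent users
            f.write(data)
        return data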

These days, however, even the most congested routes are generally capable of offering a perfectly usable file distribution network, with capacity far beyond what the general public needs for videos distributed by even simple web services offering "progressive download." YouTube is a testament to this. While there is still some way to go from a broadcast perfectionist's perspective, the internet has evolved so far in the past decade that the value proposition of localizing servers, to ensure delivery and quality at lower cost and higher SLA than the alternative (which would appear to be simply putting lots of servers online at the source of the stream or the publisher's offices and serving from there), has become considerably less powerful over the past few years.
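As an illustration of just how simple those "simple web services" can be, the sketch below serves progressive download using only the Python standard library: the player begins rendering as the bytes arrive. The port and file name are arbitrary; note that Python's built-in handler ignores Range requests, so seeking within the file may not work, although playback from the start will.

    # A minimal progressive-download server. Place a file such as
    # sample.mp4 (hypothetical name) in the current directory and open
    # http://localhost:8000/sample.mp4 -- playback starts while the
    # transfer is still in flight.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()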

This is interesting when you think back 5 or 10 years to the number of CDNs that were almost "trading on" the number of edge cache locations they had.

Live streaming places a different and more sophisticated load on network designers and operators: sufficient capacity with flexibility in configuration, but at the same time not so overprovisioned/underutilized that it becomes uneconomical. Live streaming architecture is deeply involved with telecoms; it is not just store-and-forward internet content delivery between servers and, eventually, end users. Tight Layer 3 (IP) network operations are therefore at the core of live streaming offerings at scale. CDNs still buy from Layer 2 and Layer 3 telcos, and the telcos buy into peering facilities. The more they buy, the lower their direct cost base.
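To put rough numbers on "the more they buy, the lower their direct cost base," here is a toy calculation. The tier floors and per-Mbps prices are invented purely for illustration; real transit pricing is negotiated and varies widely.

    # Toy illustration of volume economics in IP transit.
    TRANSIT_TIERS = [          # (minimum committed Mbps, $ per Mbps per month)
        (0,       1.00),
        (10_000,  0.50),
        (100_000, 0.25),
    ]

    def monthly_cost(committed_mbps: int) -> float:
        """Price the whole commit at the deepest tier it qualifies for."""
        rate = [price for floor, price in TRANSIT_TIERS
                if committed_mbps >= floor][-1]
        return committed_mbps * rate

    print(monthly_cost(5_000))     # 5000.0  -- $1.00/Mbps, no volume discount
    print(monthly_cost(100_000))   # 25000.0 -- $0.25/Mbps at the top tier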

Today, good, low-cost IP connections are abundantly available, and the trend is toward longer backhaul of IP from users to fewer, better-connected, centralized data centres that manage the session and the application, keeping the majority of the functionality in one place. In its pure form, this has been nicknamed the "death star" architecture, although it is more properly/traditionally called a "hub and spoke" architecture. The downside is that a local problem at the hub takes out everything, whereas a distributed architecture carries fewer risks of centralized points of failure. The upside is that you can centralize and optimize all your maintenance and support issues. It's worth noting that, in all high-availability CDN service architectures, the hub itself is usually a geographically distributed range of services clustered to act like a single central hub, ensuring continuity in the event of even the most significant problems.
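Here is a minimal sketch of how such a distributed hub can present as one logical service from the client side: the client simply tries geographically separate hub replicas in order. The hostnames are hypothetical, and a real deployment would use anycast or DNS steering rather than a hard-coded list, but the failover principle is the same.

    import socket

    # Hypothetical hub replica hostnames, one per region.
    HUB_REPLICAS = [
        "hub-eu.example.net",
        "hub-us.example.net",
        "hub-apac.example.net",
    ]

    def connect_to_hub(port: int = 443, timeout: float = 2.0) -> socket.socket:
        """Return a TCP connection to the first reachable hub replica."""
        for host in HUB_REPLICAS:
            try:
                return socket.create_connection((host, port), timeout=timeout)
            except OSError:
                continue   # replica down or unreachable: fail over to the next
        raise ConnectionError("no hub replica reachable")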
