
Sweet Streams: Optimizing Video Delivery

Live Streaming

HLS is a video streaming protocol that sits on top of HTTP. Because HTTP can only deliver discrete resources, HLS has to take a nice, lovely, continuous video stream, slice it and dice it into chunks, and then tell the client, “This is the chunk you want. This is the chunk you want next. This is the chunk you want after that.” So it maintains an ordered scheme, and it does this with two pieces. The first is what they call an M3U8 file. The name comes from “MP3 URL, UTF-8 encoded,” which doesn’t mean much to anybody, but that’s basically the playlist: it tells the player the next chunk to fetch.
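A media playlist looks roughly like this (a minimal illustrative sketch; the segment filenames and durations are invented, but the tags are standard HLS):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
#EXT-X-ENDLIST
```

Each `#EXTINF` line gives a chunk’s duration, and the line after it is the URL of that chunk; the player simply walks the list in order.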

If your playlist is out of order, what happens to your video? It plays back out of order. If the playlist is somehow screwed up, or it’s stale and not up to date, the player has no way to know which chunk to fetch next. If you’re doing live streaming, that’s a really big deal.

But the nice thing about it is that you can define multiple streams within it. So you have a high-bandwidth stream, a medium-bandwidth stream, and a low-bandwidth stream. The client makes an educated guess about how much bandwidth it has available and chooses the next chunk in the sequence accordingly. If it thinks it has a lot of bandwidth, it fetches the high-bandwidth encoding; if it has little bandwidth, it fetches the low-bandwidth encoding, and it does so automatically for you.
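Those variants are declared in a master playlist, which might look something like this (an illustrative sketch; the bandwidth figures, resolutions, and paths are invented):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=1280x720
medium/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
```

The client compares its measured throughput against the `BANDWIDTH` attributes and switches between the variant playlists as conditions change.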

TS is just transport stream; that happens to be the file extension they chose for the actual chunks of video. So putting it together: you watch a video, the video gets broken up into chunks, and a playlist tells the player which chunk to play next and in what order. Each chunk can be downloaded discretely via a caching proxy server, so it doesn’t matter if the viewer is halfway around the world. Those caching servers can guarantee a high quality of experience because they are physically close, with low-latency, high-quality connections to the end user, even if the origin server is half a world away.

Hard-Learned Lessons

I want to conclude with two hard-learned lessons.

Caching policy is one of the hard problems in computer science. If you’re going to cache your HLS stream, what do you care about? First, you have to cache your .ts files--your actual chunks of video. Once a chunk is written, it never changes, so you can apply a very generous caching policy to it. Tell the caching server, “Please keep this around forever-and-a-day in case someone else wants it in the future. Make sure that it’s always present and ready to go.” The chunks themselves carry no ordering; getting the order right is the playlist’s job, so there’s no risk in caching the chunks indefinitely.
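In HTTP terms, that generous policy for chunks could look like this (an illustrative request/response sketch; the path is invented, but the headers are standard HTTP caching directives):

```
GET /stream/segment42.ts HTTP/1.1
Host: cdn.example.com

HTTP/1.1 200 OK
Content-Type: video/mp2t
Cache-Control: public, max-age=31536000, immutable
```

`max-age=31536000` (one year) plus `immutable` tells every cache along the path that this chunk will never change and can be served without revalidation.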

So you want to cache those as long as you reasonably can. The real challenge, then, is your manifest (M3U8) file. If you’re serving static, on-demand video, the playlist is never going to change. You’re not generating anything new, so you can cache it as long as you reasonably can, too. But if you’re doing live streaming, this is where it gets tricky, and why we say that caching policy is a hard problem.

For live streaming, we’re always generating a brand-new chunk, which means we always have to update that M3U8 file so the client gets whatever the latest chunk is. You have two options for this. The really clever one says, “Well, I want to reduce the load on my origin server, so I’m going to apply a caching policy whose lifetime is half the duration of an individual chunk.”

Say ten viewers come in within a given second, all requesting the latest M3U8 file, and my individual chunks are one second long, so I say the cached manifest is good for half a second. The first five viewers get the cached copy. The second five viewers force a fresh pull to origin and get the updated manifest.

This way, you reduce the number of connections and hits to your origin while still getting the benefit of caching, and you haven’t compromised correctness: your manifest files are still accurate and you’re always getting the latest one, so your video stream stays exactly up to date. The alternative is that you don’t cache the manifest file at all, and every single manifest request flows straight through to origin.
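The half-chunk-duration policy can be sketched in a few lines of Python. This is a hypothetical simulation, not any real CDN’s API; the names (`ManifestCache`, `CHUNK_DURATION`) are invented for illustration:

```python
from dataclasses import dataclass

CHUNK_DURATION = 1.0               # seconds per .ts chunk (assumed)
MANIFEST_TTL = CHUNK_DURATION / 2  # cache the manifest for half a chunk

@dataclass
class ManifestCache:
    cached_at: float = float("-inf")  # when the cached copy was stored
    origin_hits: int = 0              # requests that reached origin

    def get(self, now: float) -> str:
        """Serve the cached manifest if still fresh, else re-fetch from origin."""
        if now - self.cached_at >= MANIFEST_TTL:
            self.cached_at = now      # fresh pull to origin
            self.origin_hits += 1
        return f"manifest fetched at {self.cached_at}"

cache = ManifestCache()
# Ten viewers arrive spread over one second; only two requests reach origin.
for i in range(10):
    cache.get(now=i * 0.1)
print(cache.origin_hits)  # → 2
```

With a one-second chunk and a half-second TTL, ten requests in one second collapse into two origin pulls, which is the load reduction described above.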

This article is Sponsored Content

