
What Low Latency Is and How to Measure It


See complete videos and other highlights from Streaming Media West Connect on Streaming Media's YouTube channel.

Read the complete transcript of this video:

Robert Reinhardt: If you need under six seconds of latency, regular old HLS might get you there. I've done plenty of two-second chunk, three-segment manifest setups where you're getting around six seconds, though it might drift to 10. Depending on what kind of edge-origin architecture you set up, getting six might be difficult to pull off, but you can actually get much lower than the 30 seconds of latency that's the "standard" for HLS. I've done a lot of custom Wowza setups with two-second chunk sizes and three-segment manifests, and that shaves enough latency off. A lot of live broadcasts don't need super-low latency. If you need under three seconds, there's the new Low-Latency HLS spec that Apple has been working on since the end of last year; Roger Pantos talked about it in a Streaming Media keynote.
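As a rough illustration of why those numbers fall where they do, here is a minimal sketch (my own example, not from the talk): a player generally won't start until it has buffered a few full segments, so latency scales with segment duration times playlist depth, plus some overhead. The overhead value below is an assumed placeholder, not a measured figure.

```typescript
// Rough HLS latency estimate: segment duration x segments buffered,
// plus assumed encode/packaging/CDN/decode overhead (placeholder value).
function estimateHlsLatencySeconds(
  segmentDurationSeconds: number,
  segmentsBuffered: number,
  overheadSeconds = 1.5,
): number {
  return segmentDurationSeconds * segmentsBuffered + overheadSeconds;
}

// Two-second segments with a three-segment manifest land near the
// six-second range described above; six-second segments do not.
console.log(estimateHlsLatencySeconds(2, 3)); // 7.5
console.log(estimateHlsLatencySeconds(6, 3)); // 19.5
```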

You might be able to pull that off with Low-Latency HLS. If you need sub-one-second or sub-500-millisecond latency, then WebRTC is pretty much your only option, especially when you're going through the browser. So you've got to work with what you've got. And again, WebRTC has come a long way. It just doesn't move as quickly as some past runtime architectures like Flash did, because you've got to get all these stakeholders, all these behemoth browsers, moving forward together.
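For context on what the browser side of that looks like, here is a minimal WebRTC publish sketch. The signaling exchange is deliberately left as a placeholder, since every service wires that step up differently (WebSockets, WHIP, or a vendor API).

```typescript
// Minimal WebRTC publish sketch for the browser: capture camera/mic,
// attach the tracks to an RTCPeerConnection, and create an offer.
async function publishWithWebRTC(): Promise<RTCPeerConnection> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Add each captured track to the connection for publishing.
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Hypothetical step: send the offer SDP to your media server over your
  // own signaling channel and apply the answer with setRemoteDescription().

  return pc;
}
```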

So let's talk about latency real quick. How do you measure glass-to-glass latency? There are different ways to do it. Sometimes you can get timestamps embedded in your WebRTC outbound stream and your publish ingest, so you can compare them at the server and on the client and get some very precise, measurable WebRTC round-trip times. That could be an option.
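As a sketch of that stats-based approach (my own example, not a specific production setup), the browser's RTCPeerConnection exposes candidate-pair statistics that include a measured round-trip time. Note that this is network RTT only, not full glass-to-glass latency, which also includes capture, encode, jitter-buffer, and decode/render time.

```typescript
// Read the current ICE round-trip time from an active RTCPeerConnection.
// currentRoundTripTime is reported in seconds on the selected candidate pair.
async function getWebRtcRoundTripMs(
  pc: RTCPeerConnection,
): Promise<number | undefined> {
  const stats = await pc.getStats();
  let rttSeconds: number | undefined;

  stats.forEach((report) => {
    if (
      report.type === "candidate-pair" &&
      report.nominated &&
      typeof report.currentRoundTripTime === "number"
    ) {
      rttSeconds = report.currentRoundTripTime;
    }
  });

  return rttSeconds !== undefined ? rttSeconds * 1000 : undefined;
}
```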

What I usually do--and it's very quick and dirty--is just to use a burn-in timecode on a test stream. It's great for testing, but you're not going to be able to measure that in production because not everyone's going to be streaming with burned-in timecode. And I'll show you an example of that in just a moment.
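One browser-side way to fake a burned-in timecode for a test stream (an assumed setup for illustration, not necessarily how the production workflow does it) is to paint the current wall-clock time onto a canvas and publish the canvas as the video track; on playback you compare the time rendered in the video against the viewing machine's clock, assuming both clocks are NTP-synced.

```typescript
// Generate a test video track with the current time "burned in" by drawing
// it onto a canvas and capturing the canvas as a MediaStream.
function createTimecodeTestStream(fps = 30): MediaStream {
  const canvas = document.createElement("canvas");
  canvas.width = 1280;
  canvas.height = 720;
  const ctx = canvas.getContext("2d")!;

  function draw(): void {
    ctx.fillStyle = "black";
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = "white";
    ctx.font = "64px monospace";
    // Millisecond-resolution timestamp rendered into every frame.
    ctx.fillText(new Date().toISOString(), 40, canvas.height / 2);
    requestAnimationFrame(draw);
  }
  draw();

  return canvas.captureStream(fps);
}
```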

Counting tests are a very quick and easy way to see how good your latency is. What that basically means is, if I'm on a Zoom call or in any kind of videoconferencing app or proof of concept, I can try to do a rapid count. So I'll say one, whoever's joining the conference with me will say two, and we'll try to quickly follow with the next number as we count up. If there's a long delay between when I say a number and when I hear the next one come back from the other person, then I get a really good sense right away of how bad that latency is. If we can almost talk on top of each other, that's fantastic. Again, that's quick and dirty, so it's not as measurable as some of the other approaches I mentioned.

Related Articles

Limelight Brings Even Lower Latency to Realtime Streaming

Award-winning Limelight Realtime Streaming now offers increased global scalability and bi-directional data sharing, enabling new, innovative online video business models

Low Latency Video Streaming - How Low Can You Go?

Advances in chipsets allow for more processing at the edge, so we're going to see latency times drop even further in the near future.

Low-Latency Streaming Checklist

VideoRx CTO leads viewers through a checklist for prioritizing and achieving low-latency streaming in this clip from his presentation at Streaming Media West Connect 2020.
