The Need for Speed: Demand for Low-Latency Streaming Is High
Thomas Kramer, VP of product management at codec vendor MainConcept, feels that Apple’s move is complicating the ABR ecosystem, which has been slowly migrating toward CMAF. “Apple’s introduction of LL-HLS with the first sample code snippets showing the use of MPEG Transport Stream instead of fMP4 for CMAF and other proprietary extensions for low-latency indicates to me that Apple was not listening enough to the already ongoing adoption of CMAF in the marketplace in favor of their own ecosystem and playback devices.”
That’s a fair comment; Apple abandoned the transport stream format for HEVC, so why not do the same for low latency? Of course, as a developer, you get to choose your approach: You can use transport stream chunks for HLS and fMP4 for DASH clients to achieve maximum compatibility, or use a single fMP4 (CMAF) stream for maximum efficiency and potentially lose some HLS low-latency viewers.
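To make that packaging choice concrete, here is a minimal TypeScript sketch of a packager helper that writes an LL-HLS media playlist referencing fMP4 (CMAF) parts via the EXT-X-MAP and EXT-X-PART tags from Apple’s spec. The helper itself, the file names, and the durations are purely illustrative, not any vendor’s actual implementation.

```typescript
// Hypothetical packager helper: emit an LL-HLS media playlist that references
// fMP4 (CMAF) parts via EXT-X-MAP and EXT-X-PART rather than .ts chunks.
// Tag names come from Apple's LL-HLS spec; file names and durations are made up.
interface Part {
  uri: string;           // e.g., "seg267.part0.mp4" (hypothetical)
  duration: number;      // seconds
  independent?: boolean; // part starts with a keyframe
}

function buildLlHlsPlaylist(
  mediaSequence: number,
  completedSegmentUri: string,
  parts: Part[]
): string {
  const lines = [
    "#EXTM3U",
    "#EXT-X-VERSION:6",
    "#EXT-X-TARGETDURATION:4",
    "#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0",
    "#EXT-X-PART-INF:PART-TARGET=0.333",
    `#EXT-X-MEDIA-SEQUENCE:${mediaSequence}`,
    '#EXT-X-MAP:URI="init.mp4"', // fMP4 initialization segment (CMAF header)
    "#EXTINF:4.000,",
    completedSegmentUri,         // last fully completed 4-second segment
  ];
  // Parts of the segment that is still being encoded
  for (const p of parts) {
    const indep = p.independent ? ",INDEPENDENT=YES" : "";
    lines.push(`#EXT-X-PART:DURATION=${p.duration.toFixed(3)},URI="${p.uri}"${indep}`);
  }
  return lines.join("\n") + "\n";
}
```

A transport stream variant would simply reference .ts part URIs and omit the EXT-X-MAP initialization segment.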
The second key difference between Apple’s scheme and earlier approaches is that it uses HTTP/2 rather than HTTP/1.1 chunked transfer encoding. In his excellent blog post titled “The Community Gave Us Low-Latency Live Streaming. Then Apple Took It Away,” Mux’s Phil Cluff notes, “The choice of technologies (namely HTTP/2) Apple has selected is going to make it really hard for non-Apple devices to implement LL-HLS.”
As a counterpoint, Akamai’s Will Law says, “The HTTP/2 requirement does raise the bar for support, although mostly for CDNs and older clients. The majority of 2019+ client devices support a H2 stack and as always it will be the legacy devices which struggle with H2. Apple may make H2 optional for delivery in order to accommodate these older devices. I have a working MSE application that happily plays LL-HLS in Chrome/Firefox/Safari browser along with H2 push. From early testing I can say that H2 push certainly helps with stability and I would be hesitant to go to wide-scale LL-HLS distribution without it.”
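For a sense of what the HTTP/2-plus-blocking-reload workflow looks like from the player side, here is a hedged TypeScript sketch of a blocking playlist request using the _HLS_msn and _HLS_part query parameters defined in Apple’s spec. The URL, error handling, and parsing are placeholders rather than a production ABR client, and whether the response also arrives with H2-pushed media parts is up to the origin and CDN.

```typescript
// Minimal sketch of an LL-HLS blocking playlist reload: the origin holds the
// request open until media sequence `msn`, partial segment `part` is available.
// Assumes an HTTP/2-capable origin; the URL and error handling are illustrative.
async function fetchBlockingPlaylist(
  baseUrl: string, // e.g., "https://example.com/live/stream.m3u8" (placeholder)
  msn: number,     // next media sequence number the player expects
  part: number     // next partial-segment index within that segment
): Promise<string> {
  const url = `${baseUrl}?_HLS_msn=${msn}&_HLS_part=${part}`;
  const response = await fetch(url); // over H2, this blocks until the part exists
  if (!response.ok) {
    throw new Error(`Playlist request failed: ${response.status}`);
  }
  return response.text(); // updated playlist, ready to scan for new EXT-X-PART URIs
}
```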
I circled back with Cluff, who noted that Apple’s approach complicated bandwidth estimation for non-Apple developers, but added, “Apple [has] been more helpful than I’ve ever seen on this project. I, and many others in the industry, have met with them several times, and they’ve already made changes to improve things, and there’s hopefully more coming.” In this regard, Apple released reference LL-HLS software for CMAF output on Sept. 11, 2019. So that’s encouraging.
On the encoding and packaging side, Apple’s new spec is already getting a lot of traction. Regarding encoding, Nikos Kyriopoulos, Media Excel’s VP of product and business development, noted that his company’s Hero encoder currently supports Ultra Low Latency DASH (based on CMAF) for ingest into Akamai and other delivery systems and that the company is working with partners to also support end-to-end Low-Latency HLS. According to Barry Owen, Wowza’s VP of solutions engineering, his company is prioritizing Apple’s low-latency schema first and then DASH “because we see a much larger percentage of HLS than DASH, whether it’s standard streaming or low latency streaming.”
On the user side, I spoke to several large OTT services about their low-latency plans. The consistent message was that they weren’t going to support multiple low-latency technologies and that they would delay their implementation plans until the competing technologies sort themselves out.
In this case, Apple seems to be working with the community to simplify the implementation of LL-HLS. Still, every new and different approach adds confusion to the market, which will take months to sort out.
Most of What You ‘Know’ About WebRTC Is Wrong
WebRTC started life as the technology underlying the initial, somewhat feeble, iterations of Google Hangouts, which left many with bad memories of very low-quality video. As a result, the common fictions about WebRTC are that it doesn’t scale, that ABR isn’t available, and that video is limited to low resolution and low quality.
The fact that WebRTC needs a persistent connection between the server and player does complicate scalability, but it’s a challenge that large-scale streaming shops have solved within their own delivery infrastructure. That’s why, to reach tens of thousands of viewers, you’ll need to use their CDN infrastructure or deploy their technology on your own CDN. According to Alexandre Gouaillard from CoSMo Software, the WebRTC-based Millicast system has served as many as 50,000 concurrent users for a single Sotheby’s auction. Oliver Lietz of Nanocosmos claimed more than 100,000 viewers via WebSockets in one of the company’s recent events. So consider scalability a potential cost issue, not a capability issue.
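To illustrate why that persistent connection changes the scaling math, here is a minimal browser-side TypeScript sketch of a receive-only WebRTC viewer. The signaling endpoint is hypothetical; Millicast, Nanocosmos, Phenix, and other vendors each wrap this exchange in their own APIs.

```typescript
// Minimal receive-only WebRTC viewer sketch. Each viewer keeps a stateful
// peer connection open to the delivery infrastructure for the whole broadcast,
// which is why scale depends on server footprint rather than HTTP caching.
async function startViewer(videoEl: HTMLVideoElement): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection();
  pc.addTransceiver("video", { direction: "recvonly" });
  pc.addTransceiver("audio", { direction: "recvonly" });
  pc.ontrack = (event) => {
    videoEl.srcObject = event.streams[0]; // attach the incoming live stream
  };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // Hypothetical signaling endpoint; every WebRTC service defines its own.
  const res = await fetch("https://example.com/signal", {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: offer.sdp,
  });
  const answerSdp = await res.text();
  await pc.setRemoteDescription({ type: "answer", sdp: answerSdp });
  return pc;
}
```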
Ditto for ABR and stream quality. The WebRTC spec can carry multiple parallel encodings of the same content, although not all WebRTC implementations support this. Some browsers limit resolution or stream bandwidth, but these aren’t absolute limitations of WebRTC.
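The standards-level hook for this is simulcast: a publisher offers several encodings of one track, and the media server forwards whichever rendition suits each viewer’s connection. A hedged publisher-side sketch using the standard sendEncodings option follows; the rendition names and bitrates are illustrative only.

```typescript
// Publisher-side simulcast sketch: three renditions of one camera track.
// The media server decides which rendition each viewer receives, which is how
// WebRTC services approximate ABR. Bitrates and rid names are illustrative.
async function publishSimulcast(pc: RTCPeerConnection): Promise<void> {
  const media = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const [videoTrack] = media.getVideoTracks();
  pc.addTransceiver(videoTrack, {
    direction: "sendonly",
    sendEncodings: [
      { rid: "low",  scaleResolutionDownBy: 4, maxBitrate: 300_000 },
      { rid: "mid",  scaleResolutionDownBy: 2, maxBitrate: 1_200_000 },
      { rid: "high", maxBitrate: 3_500_000 },
    ],
  });
  media.getAudioTracks().forEach((t) => pc.addTrack(t, media));
}
```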
Not all WebRTC-based services may offer all of these features, so it’s definitely something to check with each candidate service, but scalability, ABR, and quality aren’t absolute bars to selecting WebRTC. Whether they come at a cost equivalent to HAS systems is another matter altogether.
There Are Some Interesting New Approaches
Beyond the three categories of products noted, there are some pockets of innovation from various companies in the streaming ecosystem. One of the more interesting is THEO Technologies’ High Efficiency Streaming Protocol (HESP), a new HAS protocol. According to THEO Technologies’ website, it delivers sub-second latency, around a 100-millisecond time to first frame, and viewer bandwidth optimizations of up to 20%. It is also compatible with HTTP CDNs.
While proprietary technologies certainly have their challenges, HESP appears to provide the latency of WebRTC with the scalability of HAS solutions. If you’re looking for sub-second latency, HESP is worth checking out.
You’re Choosing a Service Provider, Not an Acronym
According to Nanocosmos’s Lietz, his customers don’t ask which technology to use; they’re seeking a well-featured and affordable end-to-end platform with the lowest possible latency. So I’ll close with some questions to ask to sort this out. Thanks to Bill Wishon, chief product officer at Phenix, for identifying most of the points on this list, which obviously don’t all apply to all use cases.
Questions to Ask About Low-Latency Technologies:
- Is the technology adaptive? If so, how many streams, and are there any relevant bitrate or resolution limitations?
- What are the quality limitations, if any? Baseline profile only? No B-frames or look-ahead buffer?
- Is there a download required? (Most systems don’t require this, but it’s worth asking.)
- Can the system synchronize all viewers to the same point in the stream? (Streams can drift across locations and devices, and without this capability, users on a fast connection may have an advantage for auctions or gambling.)
- Can it get through firewalls? (HTTP-based systems are firewall-friendly; others employ User Datagram Protocol, which may not be.) If User Datagram Protocol, is there a fallback mechanism for delivering to blocked viewers?
- What content protection is available?
- Can the system scale to meet your target viewer numbers? Is the CDN infrastructure private, and if so, can it deliver to all relevant viewers in all relevant markets?
- Can you use your own player, or do you have to use the system’s player? If your own, what changes are required, and how much will that cost?
- What’s needed for mobile playback? Will it play in the browser, or is an app required?
- What additional platforms are supported (set-top boxes, dongles, OTT devices, smart TVs)?
- What’s the latency achievable at a scale relevant to your broadcast?
- What’s the overall cost for the event?
- Can the content be reused for VOD, or will re-encoding be required?
- What are the redundancy options?
- Are captions available?
- What about advertising insertion?
- What about DVR?
- Which encoders can the system use?
[This article appears in the October 2019 issue of Streaming Media Magazine as "The Need for Speed."]