Why Hardware Acceleration is the Future of Cloud-Based Video Streaming

There is a well-documented explosion of online video traffic that has been under way for some time. According to Cisco's Visual Networking Index report last June, online video could make up 82% of all internet traffic by 2020, up from 59% in 2014. Video is already far and away the largest consumer of network bandwidth. And of course, bandwidth is costly for both wired and wireless networks: Verizon is looking at capital expenditures of over $17 billion this year, much of which can be traced back to building out capacity for greater throughput.

Advanced Compression Driven by the Growth of Wireless Users, Streaming Content, and 4K

Broadband connectivity (wired and wireless) has played a large role in enabling this new "Internet of Video Things" (IoVT) world. So has the evolution of bigger, better, and cheaper smartphone and tablet displays and sensor technologies, which together have created a perfect environment to wirelessly consume, generate, and share HD or even 4K video content. As billions of users have started to consume and generate massive volumes of video content, they have put significant pressure on networks and their operators. At the same time, video services are highly valued by consumers and present new revenue opportunities for those operators. A prime example is the announcement this month from AT&T that its first 5G service will be a video streaming service.

To slow the need to build ever-higher-capacity networks, video compression has become a critical tool for managing skyrocketing bandwidth requirements.

Each new codec standard for video compression has typically delivered about a 50% bandwidth savings over its predecessor, which has generally been sufficient to deal with the "next wave" of services, such as the migration from standard-definition to high-definition video. But this time around a number of things are fundamentally different. There is a far larger number of mobile devices consuming video. Many of these endpoint devices are upgraded at a much faster pace, so the resolutions that many people hold in the palms of their hands today are comparable to what we have in our living rooms. The HEVC (H.265) codec has offered some of the relief the industry needed, but it is already clear that something more will soon be required.
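To make that roughly-50%-per-generation rule of thumb concrete, here is a minimal back-of-envelope sketch in Python. The H.264 baseline bitrates and the exact per-generation factors are illustrative assumptions, not measurements; only the halving-per-generation pattern comes from the discussion above.

```python
# Illustrative sketch: assume each codec generation cuts bitrate roughly in half
# at comparable quality. All absolute numbers are rounded assumptions.

BASE_BITRATES_MBPS = {   # rough H.264 bitrates per resolution (assumed)
    "1080p HD": 8.0,
    "2160p 4K": 32.0,
}

GENERATION_FACTOR = {    # fraction of the H.264 bitrate retained (assumed)
    "H.264": 1.0,
    "HEVC (H.265)": 0.5,   # ~50% of H.264
    "AV1 (target)": 0.25,  # ~50% of HEVC, per the Alliance's stated goal
}

for resolution, h264_mbps in BASE_BITRATES_MBPS.items():
    for codec, factor in GENERATION_FACTOR.items():
        print(f"{resolution:9s} {codec:13s} ~{h264_mbps * factor:5.1f} Mbps")
```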

Enter the Alliance for Open Media

Ten years ago, when video was consumed in the traditional fashion over a single network on a single device with few companies involved, the licensing model was relatively clean and mostly worked. With the new consumption model (any device, any time, any network), the number of endpoints has exploded, making the traditional licensing model unrealistic. A new codec business model is clearly needed to help promote and seed these new consumption devices and use cases. Google pioneered the open-source, royalty-free model with its VPx series of codecs, and the Alliance for Open Media (AOM) and its AV1 codec build on that foundation to address the coming need, with an expected 50% compression-efficiency improvement over today's leading codec implementations.

The AV1 codec design goals are as follows:

  • Open and highly interoperable
  • Optimized for Internet delivery
  • Scalable to any modern device at any bandwidth
  • Low computational footprint to decode
  • Encoding optimized for hardware acceleration
  • Enables consistent, highest quality, real-time streaming
  • For both commercial and non-commercial (user-generated) content

Just as there is no free lunch, compression breakthroughs always come with a price: the increased computing required to encode (and decode) video for efficient, high-quality streaming. With each step forward in video services, there has been a corresponding rise in compute need on the encode side, mainly in the data center. That increase was on the order of 8x in the move from SD encoded with H.264 to HD encoded with HEVC, with a further jump for UltraHD (HEVC). With the upcoming AV1, compute requirements will continue to climb.
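As a rough illustration of how that encode burden compounds, the sketch below normalizes everything to SD/H.264. Only the ~8x SD-to-HD figure comes from the paragraph above; the UHD and AV1 multipliers are placeholder assumptions used solely to show the shape of the curve.

```python
# Relative encode compute, normalized to SD/H.264 = 1x.
# The 8x SD->HD step is from the article; the other multipliers are assumptions.

relative_compute = {
    "SD, H.264": 1,
    "HD, HEVC": 8,           # ~8x, per the article
    "UHD, HEVC": 8 * 4,      # assume ~4x more pixels per frame at UHD
    "UHD, AV1": 8 * 4 * 5,   # assume AV1 search costs ~5x HEVC (placeholder)
}

for config, cost in relative_compute.items():
    print(f"{config:10s} ~{cost}x the SD/H.264 encode compute")
```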

Hardware Acceleration for Cloud-Based Streaming

Dedicating CPUs to server-based encoding made sense back when most video was SD and encoding was a smaller share of the overall workload. In the near future, when video makes up 80% of network traffic and codec complexity is 1,000x higher, a new class of specialized compute accelerator will be needed to encode and process video before it is streamed. Field-programmable gate arrays (FPGAs) are inherently well suited to video acceleration because of the flexibility they provide, which is a main reason hardware acceleration companies like Xilinx were invited to join the Alliance for Open Media.

FPGAs are expected to accelerate AV1 encoding by at least 10x compared to software encoders running on a CPU. Their programmable, reconfigurable nature allows optimization across a wide range of encoding profiles as well as for non-video workloads. And as any codec evolves and improves over time, FPGAs ensure that cloud data centers always have a state-of-the-art video acceleration solution.
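What a 10x speedup can mean for a streaming fleet is easiest to see with a hypothetical channel-density estimate; the per-server stream count and fleet size below are assumptions chosen purely for illustration.

```python
# Hypothetical channel-density estimate. Only the 10x speedup comes from the
# article; the stream count and fleet size are illustrative assumptions.

cpu_streams_per_server = 2    # assumed real-time AV1 streams on CPU alone
fpga_speedup = 10             # "at least 10x", per the article
servers_needed_cpu = 1000     # assumed fleet size for a streaming service

accelerated_streams = cpu_streams_per_server * fpga_speedup
servers_needed_fpga = servers_needed_cpu / fpga_speedup

print(f"Streams per accelerated server: ~{accelerated_streams}")
print(f"Servers needed with FPGAs:      ~{servers_needed_fpga:.0f} (vs {servers_needed_cpu})")
```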

For more information on the Alliance for Open Media, go to www.aomedia.org   

For a quick technical background on video transcoding in the cloud with FPGAs, check out this video.

[This is a vendor-contributed article. Streaming Media accepts contributed articles from vendors based solely on their value to our audience.]
