
Video: 360 Immersive Live Streaming Workflow: Part 2, Processing and Distribution

Learn more at Streaming Media East!

Read the complete transcript of this clip:

Shawn Przybilla: So how do we get it to our processing infrastructure? There have been a couple of great talks, and there's an upcoming one, I think tomorrow, on SRT about this problem, so I won't go into it too much here. But the gist is there are a lot of protocols, and it depends on your infrastructure.

I think from an educational perspective, RTMP is a great way to get started, and that's what our reference architecture uses. It has some nice properties in the sense that it's integrated with a lot of free and open source software. But it has a downside in that Adobe has deprecated it and isn't going to continue supporting it.
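To make that ingest step concrete, here is a minimal sketch of pushing a stitched equirectangular feed to a cloud RTMP ingest point by spawning ffmpeg from Node. The ingest URL, stream key, input source, and bitrates are placeholders I've assumed for illustration, not the reference architecture's exact commands.

```typescript
// Minimal RTMP contribution sketch: push a stitched equirectangular feed
// to a cloud ingest endpoint by spawning ffmpeg from Node.
// The URL, stream key, and input source below are hypothetical placeholders.
import { spawn } from "child_process";

const INGEST_URL = "rtmp://ingest.example.com/live"; // hypothetical endpoint
const STREAM_KEY = "my-360-stream";                   // hypothetical key
const INPUT = "stitched-equirect.mp4";                // stitched 360 source (file or device)

const ffmpeg = spawn("ffmpeg", [
  "-re", "-i", INPUT,                                  // read the source in real time
  "-c:v", "libx264",                                   // H.264 suits the RTMP/FLV container
  "-preset", "veryfast",
  "-b:v", "12M", "-maxrate", "12M", "-bufsize", "24M", // 360 video needs a generous bitrate
  "-g", "60",                                          // keyframe interval for downstream segmenting
  "-c:a", "aac", "-b:a", "128k",
  "-f", "flv",                                         // RTMP carries FLV
  `${INGEST_URL}/${STREAM_KEY}`,
]);

ffmpeg.stderr.on("data", (d) => process.stderr.write(d));
ffmpeg.on("close", (code) => console.log(`ffmpeg exited with code ${code}`));
```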

So once we get it to the cloud, there are a few options for processing. We can transcode the live stream with AWS Elemental Live. For video on demand, we can use Elastic Transcoder. There are also plenty of options in the AWS Marketplace for encoding and distribution, products like Wowza and Bitmovin. Or we can build it ourselves on traditional EC2 infrastructure. So there's a whole plethora of options here.
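As an illustration of the video-on-demand path mentioned above, here is a hedged sketch of what an Elastic Transcoder HLS job can look like with the AWS SDK for JavaScript. The pipeline ID, object keys, and preset IDs are placeholders (the preset IDs are Elastic Transcoder system HLS presets, quoted from memory), not values from the reference architecture.

```typescript
// Rough sketch of a video-on-demand HLS job against Elastic Transcoder
// using the AWS SDK for JavaScript (v2). Pipeline ID, keys, and preset IDs
// are placeholders to show the API shape, not reference-architecture values.
import AWS from "aws-sdk";

const transcoder = new AWS.ElasticTranscoder({ region: "us-east-1" });

async function createHlsJob(): Promise<void> {
  const job = await transcoder
    .createJob({
      PipelineId: "1234567890123-abcdef",       // hypothetical pipeline
      Input: { Key: "raw/360-recording.mp4" },  // source object in the pipeline's input bucket
      OutputKeyPrefix: "hls/360-recording/",
      Outputs: [
        { Key: "2m", PresetId: "1351620000001-200010", SegmentDuration: "6" }, // system HLS 2M preset (verify ID)
        { Key: "1m", PresetId: "1351620000001-200030", SegmentDuration: "6" }, // system HLS 1M preset (verify ID)
      ],
      Playlists: [{ Name: "master", Format: "HLSv3", OutputKeys: ["2m", "1m"] }],
    })
    .promise();

  console.log("Created job", job.Job?.Id);
}

createHlsJob().catch(console.error);
```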

And then for delivery, there are projection schemes like pyramid mapping. But when you're delivering 4K and the user is only seeing 720p, the resolution you have to ship is much larger than for traditional flat, fixed-frame video. So you need a lot more bits to deliver a compelling experience to the end user, and that has massive ramifications for the bandwidth costs of delivering, say, OTT streams of 360-degree video to end users.
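To put rough numbers on that gap, here is a back-of-the-envelope calculation (my own, not from the talk): with an equirectangular layout and a roughly 90-by-90 degree viewport, the viewer sees only about an eighth of the frame at any moment, so delivering 720p-class quality inside the viewport implies shipping a full frame in the 4K range. It ignores projection distortion and codec behavior, so treat it purely as an order-of-magnitude sketch.

```typescript
// Back-of-the-envelope: how much of an equirectangular frame the viewer
// actually sees, and what full-frame resolution that implies if the
// viewport itself should look like 720p. Rough numbers only; this ignores
// projection distortion, tile overlap, and codec efficiency.
const FULL_SPHERE = { hDeg: 360, vDeg: 180 };
const VIEWPORT = { hDeg: 90, vDeg: 90 };       // typical HMD field of view (assumption)
const TARGET_VIEWPORT_PIXELS = 1280 * 720;     // "720p in the viewport"

const visibleFraction =
  (VIEWPORT.hDeg / FULL_SPHERE.hDeg) * (VIEWPORT.vDeg / FULL_SPHERE.vDeg); // = 0.125

const fullFramePixels = TARGET_VIEWPORT_PIXELS / visibleFraction;          // ~7.4 megapixels

console.log(`Viewer sees ~${(visibleFraction * 100).toFixed(0)}% of the frame`);
console.log(`Full frame needs ~${(fullFramePixels / 1e6).toFixed(1)} MP, i.e. roughly 4K or more`);
```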

And so there's been some work done to explore, for instance, 180-degree streaming to cut down on a lot of the costs of delivering immersive streaming solutions. And as I mentioned, view-dependent adaptive streaming is a term that's popping up as well: basically, being aware of where the user is looking, raising the bit rate there and lowering it where they're not actually perceiving much quality benefit.
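As an illustration of that view-dependent idea (my own toy sketch, not a description of any particular product or of the reference architecture), imagine a tiled stream where each tile's rendition is chosen by how far it sits from the center of the viewport:

```typescript
// Toy illustration of view-dependent (viewport-adaptive) streaming:
// given where the user is looking, pick a higher rendition for tiles
// near the center of gaze and a lower one for everything else.
// This sketches the idea only; it is not a real player implementation.
interface Tile {
  id: string;
  centerYawDeg: number;   // horizontal center of the tile on the sphere
  centerPitchDeg: number; // vertical center of the tile
}

type Rendition = "high" | "medium" | "low";

// Smallest absolute difference between two yaw angles, accounting for wrap-around.
function angularDistance(aDeg: number, bDeg: number): number {
  const d = Math.abs(aDeg - bDeg) % 360;
  return d > 180 ? 360 - d : d;
}

function pickRendition(tile: Tile, gazeYawDeg: number, gazePitchDeg: number): Rendition {
  const yawDist = angularDistance(tile.centerYawDeg, gazeYawDeg);
  const pitchDist = Math.abs(tile.centerPitchDeg - gazePitchDeg);
  const dist = Math.max(yawDist, pitchDist);

  if (dist <= 45) return "high";   // inside (or near) the current viewport
  if (dist <= 90) return "medium"; // likely next place the head turns
  return "low";                    // behind the viewer
}

// Example: viewer looking straight ahead (yaw 0, pitch 0).
const tiles: Tile[] = [
  { id: "front", centerYawDeg: 0, centerPitchDeg: 0 },
  { id: "right", centerYawDeg: 90, centerPitchDeg: 0 },
  { id: "back", centerYawDeg: 180, centerPitchDeg: 0 },
];
for (const t of tiles) {
  console.log(t.id, "->", pickRendition(t, 0, 0));
}
```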

There are a bunch of different ways you can render 360-degree video. There are a lot of open source players and open source libraries out there, and there's also a bunch of commercial players that support it. In the reference architecture I've chosen to use Mozilla A-Frame and hls.js, simply because I have aspirations to move this to more of a VR experience over time.

For instance, Mozilla A-Frame is not just a video player. It's a WebVR-based declarative framework for defining three-dimensional objects in the web browser. But one of the things that's really cool about it is that you can simply render a videosphere and then project HLS streams into that videosphere really easily.
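Here is a minimal sketch of that pattern, assuming the A-Frame script is already loaded on the page and using hls.js to feed an HLS stream into a videosphere. The stream URL is a placeholder, and this is one straightforward way to wire it up rather than the reference architecture's exact code.

```typescript
// Minimal sketch: project an HLS stream onto an A-Frame videosphere.
// Assumes the A-Frame <script> is already included on the page;
// the stream URL is a hypothetical placeholder.
import Hls from "hls.js";

const STREAM_URL = "https://cdn.example.com/live/360/master.m3u8"; // hypothetical

// Declare the scene: a <video> asset plus a videosphere that uses it as its texture.
document.body.insertAdjacentHTML(
  "beforeend",
  `<a-scene>
     <a-assets>
       <video id="stream360" crossorigin="anonymous" playsinline muted></video>
     </a-assets>
     <a-videosphere src="#stream360"></a-videosphere>
   </a-scene>`
);

const video = document.getElementById("stream360") as HTMLVideoElement;

if (Hls.isSupported()) {
  // hls.js handles adaptive HLS playback and attaches it to the video element.
  const hls = new Hls();
  hls.loadSource(STREAM_URL);
  hls.attachMedia(video);
  hls.on(Hls.Events.MANIFEST_PARSED, () => void video.play());
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  // Safari plays HLS natively, so just point the video element at the playlist.
  video.src = STREAM_URL;
  void video.play();
}
```

From there, A-Frame's built-in enter-VR behavior is what gives you the click-to-launch headset experience described next.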

The other thing that it comes with basically out of the box is head-mounted display support. So you can use it in a browser, which is how the workshop in the reference architecture works, but you could also simply click to launch it in a head-mounted display. So I'd like to see more immersive video experiences taking advantage of controller support, native head-mounted display support, and things like that, and I think A-Frame is a pretty cool tool to enable that.

Related Articles

How to Improve Live Video Workflows Through Optimized Root Cause Analysis

Zixi has improved live video workflows through its specialized Software-Defined Video Platform, which uses dynamic machine learning and an automated analytics approach to Root Cause Analysis to help teams collaborate and solve problems faster.

Video: 360 Immersive Live Streaming Workflow: Part 1, Capture and Stitching

How does 360 immersive differ from other live streaming workflows? AWS Specialist Solutions Architect Shawn Przybilla takes viewers step by step through the process, beginning with capture and stitching in Part 1 of this two-part series.
