Video: How to Reduce Latency for Mobile VR Streaming
Learn more about VR streaming at Streaming Media West.
Read the complete transcript of this clip:
Satender Saroha: In building the mobile VR experience, the biggest challenge is achieving true motion-to-photon latency on mobile. The latency between the physical movement of a user's head and the time it takes for the updated photons from the head-mounted display to reach the eyes is called motion-to-photon latency. Ideally, this should be less than 30 milliseconds. Tethered devices like the Oculus Rift and Vive already deliver less than 35 milliseconds, but mobile devices, especially the RED VR, have a latency of approximately 90 milliseconds.
How do we solve this? What is the real challenge there? Traditionally, rendering on mobile devices is double-buffered, which essentially means two buffers are stored in GPU memory. There are two steps, rendering and scanning out, and what you see on the display is what is being scanned out. The buffer currently being scanned out is called the “front buffer,” and the one being rendered to is the “back buffer.”
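For illustration, here is a minimal sketch of that double-buffered loop using EGL/OpenGL ES. Context and surface setup are omitted, and draw_scene() is a hypothetical placeholder for application rendering code, not part of either API:

```c
#include <EGL/egl.h>
#include <GLES2/gl2.h>

/* Hypothetical application hook: renders the scene using the head
 * pose sampled at the moment it is called. */
extern void draw_scene(void);

void render_loop(EGLDisplay display, EGLSurface surface)
{
    for (;;) {
        /* 1. Draw the whole frame into the back buffer. The head pose
         *    is sampled here, at the start of the frame.             */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene();

        /* 2. Swap: the finished back buffer becomes the front buffer,
         *    but it is only scanned out at the NEXT vsync, so the pose
         *    used above is already one to two frame periods old by the
         *    time its photons leave the display.                     */
        eglSwapBuffers(display, surface);
    }
}
```

On a 60 Hz panel each frame period is about 16.7 milliseconds, so queueing one or two full frames behind the swap already accounts for a meaningful slice of the roughly 90 milliseconds quoted above, before sensor sampling, rendering time, and display persistence are counted.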
The GPU never renders to the same buffer that is being scanned out. This has the advantage of preventing artifacts, but in VR, because we are trying to update the display as soon as the user moves his head, the side effect is increased latency. An alternative approach, what we call single-buffer rendering, is to render directly to the front buffer, but to time things so carefully that each line of the image is rendered just shortly before it is scanned out to the display. Here we use a technique called scan-line racing. Essentially, scan-line racing is the process of knowing where the scan line is on the screen and rendering just ahead of it, without overshooting, to avoid artifacts.
That solves the problem of motion-to-photon latency by using single-buffer rendering on top of scan-line racing. And we have used this in VRPAPK for the VR experience.
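To make the timing concrete, here is a rough sketch of a scan-line-racing loop in the spirit of what Saroha describes. This is not any particular SDK's API: DISPLAY_HEIGHT, current_scanline(), current_head_pose(), and render_slice_to_front_buffer() are all hypothetical stand-ins for platform-specific facilities.

```c
#define DISPLAY_HEIGHT      2560  /* assumed panel height, in lines          */
#define SLICES_PER_FRAME    4     /* render each frame as horizontal strips  */
#define LINES_PER_SLICE     (DISPLAY_HEIGHT / SLICES_PER_FRAME)
#define SAFETY_MARGIN_LINES 64    /* begin this many lines ahead of the beam */

typedef struct { float yaw, pitch, roll; } pose_t;

/* Hypothetical platform hooks. */
extern int    current_scanline(void);   /* raster line scan-out is on now */
extern pose_t current_head_pose(void);  /* freshest IMU/tracker sample    */
extern void   render_slice_to_front_buffer(int first_line, int line_count,
                                            pose_t pose);

/* Render one frame directly into the front buffer, slice by slice,
 * staying just ahead of scan-out. Vblank wrap-around handling is
 * omitted for simplicity. */
void race_one_frame(void)
{
    for (int slice = 0; slice < SLICES_PER_FRAME; slice++) {
        int first_line = slice * LINES_PER_SLICE;

        /* Busy-wait until scan-out is about to enter this slice; a real
         * implementation would sleep until an estimated deadline.      */
        while (current_scanline() < first_line - SAFETY_MARGIN_LINES)
            ;

        /* Render the slice with the freshest head pose, finishing before
         * the beam reaches its first line. Losing this race would show
         * tearing, which is the artifact the safety margin guards
         * against.                                                     */
        render_slice_to_front_buffer(first_line, LINES_PER_SLICE,
                                     current_head_pose());
    }
}
```

The design tension is that each slice must reliably finish before scan-out reaches it, yet start as late as possible so the pose it uses is fresh; the slice count and safety margin would be tuned per device to balance those two goals.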