Video: The Case for Specialized Hardware Acceleration in Cloud Services
Learn more about cloud encoding and delivery at the next Content Delivery Summit.
Watch Brent Yates's presentation, Changing the Economics of Streaming Delivery: Full-Stack Cloud Services Using Specialized Hardware Acceleration, on the Streaming Media Conference Video Portal.
Read the complete transcript from this video:
Brent Yates: We can't really talk about hardware without talking about software first, and Neil did a great job of building on this: The way software is being developed is changing. Everyone's going to the cloud, but the cloud is not a physical place. It's just a way of doing software and building these microservices, of designing for the problem, not for the hardware you're running it on.
Software designers want to be able to build a piece of software, define what the requirements are, and then have it run on whichever server it happens to land on, without having to know what OS is on it, how much RAM it has, how many drives it has, or what processor is in it. Those are secondary concerns. What they care about is solving the business problem. It seems straightforward that that's the way it should be, but it took us a long time to get to the perfect storm of having all the features needed to build that kind of system.
Now with these automated orchestration layers, and languages, and tools, you can do that: you can focus on solving the problems and let the automatic rules push your software out to the edge. And in this case, out to an edge that is a service provided as part of a CDN. It gives you a couple of good features. It's easier to manage, because you're focusing on the problem, not the hardware. You can have heterogeneous server deployments, where different servers have different hardware.
But the programmer doesn't have to know that. The software automatically figures out where to run it in the most efficient way. And it allows you as a business to focus on the problem and quickly adapt to changing concerns, rather than having to focus on hardware deployments.
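To make that placement idea concrete, here is a minimal Python sketch of declarative deployment: the service declares what it needs, and a toy scheduler decides where it lands. The `ServiceSpec`, `Node`, and `place` names are hypothetical illustrations, not the API of any real orchestrator; real schedulers such as Kubernetes use far richer scoring, but the developer-facing contract is the same.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class ServiceSpec:
    """What the developer declares: requirements, not hardware."""
    name: str
    cpu_cores: float
    ram_gb: int


@dataclass
class Node:
    """An edge or data-center server as the orchestrator sees it."""
    name: str
    free_cores: float
    free_ram_gb: int


def place(service: ServiceSpec, nodes: list[Node]) -> Node | None:
    """Pick the first node that satisfies the declared requirements."""
    for node in nodes:
        if node.free_cores >= service.cpu_cores and node.free_ram_gb >= service.ram_gb:
            return node
    return None


if __name__ == "__main__":
    # Hypothetical microservice and fleet, purely for illustration.
    transcoder = ServiceSpec("edge-transcoder", cpu_cores=2.0, ram_gb=4)
    fleet = [
        Node("edge-pop-1", free_cores=1.0, free_ram_gb=8),
        Node("edge-pop-2", free_cores=8.0, free_ram_gb=16),
    ]
    chosen = place(transcoder, fleet)
    print(f"{transcoder.name} scheduled on {chosen.name if chosen else 'nothing'}")
```

The point of the sketch is only that the service never names a specific machine; the scheduler matches declared requirements against whatever heterogeneous hardware is available.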
Just to reiterate, the cloud can be a public cloud like Neil said, or a hybrid, or, in this case, a unique way of expanding the cloud out to edge nodes. And it all takes advantage of these high CPU core counts by breaking the software up from the big monolithic applications. I'm pointing to the second slide Neil showed, where he had the big monolithic application broken up into microservices. Those lightweight micro-applications make it easier to adapt to your business logic, and it also means there are a lot more of them.
There's an overhead, there's a friction with having thousands of microservices on your server. But that friction is outweighed by the efficiencies you gain in breaking your application up that way.
So why do you want to add specialized hardware to this type of deployment? Why not just throw it out there on regular Intel hardware with 72 cores and let it run? The reason is that CPU performance is no longer increasing along with Moore's Law. Moore's Law is dead. We're only getting about 3.5% improvement year over year in CPU performance. So you throw these things out there in your containers, you're adding more services, more problems, more chatty apps, but the only way you get faster is to wait 10 years for CPUs to change, or to throw more servers at the problem.
Throwing more servers at the problem is not only capital-intensive; it's also wasteful. It's not green. It's a power problem. Microsoft can afford to spend $9 billion on data centers, and Google can spend $10 billion. But those are big capital deployments. It would be much better if we could make efficiency changes and change the way we do business instead of just throwing servers at it.
Specialized hardware can do that.
We look at it as a capacity gap problem. On the bottom, the blue line is CPU performance; the red line is demand. Pretty much everyone we've talked to, and we've been going around the room today, is talking about a 30-35% year-over-year increase in how many streams they have to deliver, how many apps they have to run, and how much data they're putting out. And the only way to close that gap is to fill it with servers.
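As a rough back-of-the-envelope illustration of that gap, using only the figures quoted above (about 3.5% per year for CPU performance versus 30-35% per year for demand), the short Python sketch below compounds both rates over five years; the exact numbers will vary by workload.

```python
# Compound the two growth rates quoted in the talk and compare them.
cpu_growth = 1.035     # ~3.5% per-year CPU performance improvement
demand_growth = 1.32   # midpoint of the 30-35% per-year demand growth

for year in range(1, 6):
    cpu = cpu_growth ** year
    demand = demand_growth ** year
    # The gap is the shortfall that has to be covered by adding servers
    # (or by specialized hardware acceleration).
    print(f"year {year}: CPU x{cpu:.2f}, demand x{demand:.2f}, gap x{demand / cpu:.2f}")
```

Under these assumed rates, after five years demand has grown roughly three to four times faster than CPU performance, which is the shortfall Yates argues must be covered either by racks of additional servers or by specialized hardware.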