
The State of Edge Compute and Delivery 2024


From aggregation to smaller, regional data centers, edge continues to influence streaming approaches. Looking back over articles from the past few Streaming Media Sourcebooks, as well as about a dozen surveys we’ve done in conjunction with StreamingMedia.com on a variety of streaming topics, the term “edge compute” surfaces frequently in conversations and responses.

In the 2022 Sourcebook, I mentioned that edge compute was the buzzword topic. While it’s been outpaced somewhat by AI in the past year (which is neither artificial nor intelligent, despite the rising sophistication of machine learning algorithms), interest in edge compute remains high.

In this year’s State of Edge Compute and Delivery, I’m going to explore the three edges that seem to matter most for streaming: aggregation, on-prem, and regional.

Aggregation

To illustrate the edge computing concept of aggregation, consider the so-called Internet of Things label that we’re bombarded with in our private home lives. The term’s a bit of a misnomer, as most of these “things” require a connection to a central server that’s proprietary to a given thing and its manufacturer. Think of refrigerators talking to their manufacturer’s platform, while the connected robot vacuum on the floor next to the fridge communicates with its manufacturer’s completely different platform. Both use the same transit path out of the home (typically Wi-Fi connections via a fiber or cable modem), but the things can’t communicate with one another unless all of them are manufactured and serviced by the same company, in what most now call “ecosystems” instead of the “walled garden” terminology of the past.

The lack of edge communication is often the same within your mobile device ecosystem. Apple products connect easily with other Apple products, thanks to the tie-in of iCloud aggregation between devices, but they often can’t talk directly to one another while sitting in the same room, requiring costly and slow uploading and downloading of content between the two devices. The same is true for Android products, and it’s often worse there, since the ecosystem limitations narrow down to manufacturer-specific devices.

Edge computing offers a potential way to cluster access to those discrete services, in the form of distinct microservices at the edge for each ecosystem. At the same time, it lowers overall latency, by maintaining discrete authentications for each “thing” against a physically closer set of servers, and it reduces the risk of a centralized data breach impacting the entire customer set.
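To make this concrete, here’s a minimal sketch, in TypeScript on Node.js, of what a per-ecosystem authentication microservice at an edge location might look like. The route shape, ecosystem names, and token checks are hypothetical placeholders, not any vendor’s actual API.

```typescript
// Hypothetical edge aggregation point: one isolated authentication
// verifier per device ecosystem, deployed at a regional PoP so that
// credentials never transit a central data center.
import { createServer } from "node:http";

// Each ecosystem (fridge vendor, vacuum vendor, ...) gets its own
// verifier; a breach of one does not expose the others.
const verifiers: Record<string, (token: string) => boolean> = {
  "fridge-vendor": (token) => token.startsWith("fv_"), // placeholder check
  "vacuum-vendor": (token) => token.startsWith("vv_"), // placeholder check
};

const server = createServer((req, res) => {
  // Expected path: /auth/<ecosystem>, token in the Authorization header.
  const ecosystem = (req.url ?? "").split("/")[2] ?? "";
  const token = req.headers.authorization ?? "";
  const verify = verifiers[ecosystem];
  if (!verify) {
    res.writeHead(404);
    res.end("unknown ecosystem");
    return;
  }
  res.writeHead(verify(token) ? 200 : 401);
  res.end();
});

server.listen(8080); // one instance per edge location
```

Because each verifier is a separate function (and, in practice, a separate service), a compromise of one ecosystem’s credentials stays contained to that ecosystem.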

One subset of this aggregation, which is playing out in the European Union with regard to app stores, is the increasing number of biometric authentications required to access disparate devices. Given that a number of countries have unique privacy and content-sharing stipulations, both for biometric and other user-specific data, and that these stipulations apply to any content that sits within the country’s geographic jurisdictional boundaries, regardless of where an end user resides, there’s an even more compelling reason to consider edge compute for authentication purposes.

The secure access service edge (SASE) premise has a direct impact on streaming authentication, potentially eliminating the authentication Achilles’ heel that has plagued high-profile pay-per-view live streams, even when the stream itself worked fine. Now this approach appears to be gaining traction.

While a 2021 projection by Gartner showed the SASE market growing from 2021 to 2025 at a compound annual growth rate (CAGR) of 36%, to almost $15 billion by 2025, that was a scaled-back projection from a year earlier, when Gartner had predicted a CAGR of 42%, reaching almost $11 billion by 2024. Other recent predictions have also lowered expectations, with one recent report projecting a SASE CAGR of about 28% from 2022 through 2027, but ending with a value of only $5.3 billion.
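For readers who want to sanity-check those figures, CAGR simply compounds annually. Using the scaled-back Gartner projection as an example:

\[
V_{2025} = V_{2021}\,(1 + 0.36)^{4} \approx 3.42\,V_{2021}
\]

which implies a 2021 market size of roughly $15 billion ÷ 3.42, or about $4.4 billion. The same arithmetic applied to the 28% projection puts the 2022 base for the $5.3 billion endpoint at around $1.5 billion.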

Regardless of the growth rate, two benefits come into play when SASE solutions are deployed. First, there’s the breadth of potential applications. “Edge computing continues to play a key role in an increasing number of biometric applications involving various modalities,” according to Chris Burt, managing editor of Biometric Update.

Second, edge computing has the power to lower overall power consumption in certain parts of the streaming media workflow. “Advanced processors running on low power and algorithms with small processing footprints have unlocked a range of possible use cases and features for biometrics that were previously impossible,” says Burt.

Content Privacy Benefits

As the promise of edge computing grows, authentication engines will potentially allow more constrained storage of sensitive content at the edge rather than at a massive, generic cloud computing data center. Security remains a concern for cloud-based delivery of sensitive or restricted content. In addition, as mentioned earlier, a number of countries have unique privacy and content-sharing stipulations that apply to any content that sits within their geographic jurisdictional boundaries.

That’s where edge authentication might just shine, and I expect we’ll see a ramping up of per-asset authentication solutions to address both enterprise content (with its still historically high percentage of employees working outside the traditional enterprise office space) and social user-generated media content.

In addition, edge-based aggregation can take on additional forms, such as more finely tuned recommendation engines for a specific geographic region. Analysts are constantly trying to pinpoint consumer behavior trends in smaller markets, as tastes vary widely even across small segments of a larger city (think Chinatown versus Haight-Ashbury versus SoMa in San Francisco).

Drawing on too small a sample, however, raises privacy concerns. Therefore, a model going forward would be to generate recommendations based on like-kind pockets across a variety of cities (think Chinatown in San Francisco as well as Flushing, New York), where the anonymized consumer behavior from each edge location is compared to edge data from other locations to further optimize recommendations, as well as caching decisions as a cost-savings measure, based on similar communities rather than delivering all the content from a central location.
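As a sketch of how that like-kind matching might work, the TypeScript below compares anonymized, aggregate viewing profiles from two edge locations and treats sufficiently similar pockets as one cohort for recommendation and cache-priming purposes. The location names, genre-share numbers, and similarity threshold are illustrative assumptions only.

```typescript
// Each edge location reports only an anonymized aggregate: its share of
// watch time per genre. No individual viewing history leaves the edge.
type Profile = { location: string; genreShares: number[] };

// Cosine similarity between two aggregate profiles.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const sfChinatown: Profile = { location: "SF-Chinatown", genreShares: [0.45, 0.30, 0.15, 0.10] };
const nycFlushing: Profile = { location: "NYC-Flushing", genreShares: [0.42, 0.33, 0.13, 0.12] };

// Above an (arbitrary) threshold, treat the two pockets as one cohort
// for recommendations and for pre-caching the same popular titles.
if (cosine(sfChinatown.genreShares, nycFlushing.genreShares) > 0.95) {
  console.log("like-kind pockets: share recommendation and cache hints");
}
```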

On-Prem

The trend toward more granular data also has a corollary for streaming: The closer the server, the lower the latency. This has given rise to the concept of micro data centers for educational and enterprise use cases.

“Micro data center deployments are expected to be commonly deployed to power smart buildings or campuses for institutions or enterprises,” says Burt. “The actual server racks could be found contained inside a closet, the room that has traditionally held on-premise servers, or even somewhere in plain sight, as they are neither noisy nor visually obvious.”

One of the research charities I’m involved with, Help Me Reconnect, has been exploring the concept of using lateral filing cabinets (the typical tan or white two-to-five-drawer filing cabinet that’s been a staple of legal and medical offices for decades) as a way to embed micro data centers into existing office environments. To Burt’s point, these cabinets simply blend into the background and provide a way to increase edge compute capabilities without requiring additional floor space. At Help Me Reconnect, we’ve merged donated lateral cabinets and donated desktop mainboards to create an environmentally friendly reuse of old technology, meeting applications as varied as electronic medical records and educational campus media servers at almost inaudible noise levels.

From an enterprise standpoint, edge compute will enhance, rather than replace, enterprise content delivery networks (eCDNs), due to advancements in caching optimization. In fact, caching via an eCDN will become more important for bandwidth reduction within corporate facilities, as more and more on-demand content is stored outside the traditional corporate firewall. As edge caching is further optimized in an on-prem model, additional resources could be focused on ramping up the use of multicast networks within small segments of the corporate network via software-defined networking (SDN), virtual local area networks (VLANs), or other similar approaches.
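As a rough illustration of the multicast piece, the Node.js sketch below sends a media chunk once to an administratively scoped multicast group and lets every receiver on the same VLAN segment pick it up, so one copy of each packet serves all local viewers. The group address and port are arbitrary examples; a real deployment would coordinate them through SDN or network policy.

```typescript
// One-to-many delivery inside a single VLAN segment via IP multicast.
import dgram from "node:dgram";

const GROUP = "239.255.0.1"; // administratively scoped multicast range
const PORT = 5004;

// Receiver: each viewer on the segment joins the group, so the network
// replicates packets instead of the server sending one unicast copy each.
const rx = dgram.createSocket({ type: "udp4", reuseAddr: true });
rx.bind(PORT, () => rx.addMembership(GROUP));
rx.on("message", (chunk) => {
  // Hand the media chunk to a local demuxer/player here.
});

// Sender: the on-prem cache emits each media segment to the group once.
const tx = dgram.createSocket("udp4");
tx.send(Buffer.from("media segment bytes"), PORT, GROUP);
```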

Lowering Latency to Near-Zero Thresholds

Along those same lines, the rise of videoconferencing means that there will be greater adoption of WebRTC and low-latency video delivery within the corporate network. To best accomplish scalability for low-latency, bidirectional video calls that traverse the corporate firewall (in which an equal number of participants in a videoconference could be in a single corporate location, communicating with colleagues who are working from home), enterprise video teams may end up rebroadcasting the WebRTC-based session to a number of internal corporate participants as a viewer-only stream.
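A minimal browser-side sketch of that rebroadcast pattern follows: tracks arriving from the interactive WebRTC call are re-attached to one-way peer connections for view-only participants. The signaling channel is assumed and not shown, and at any real scale this fan-out would live in an SFU or media server rather than in a browser tab.

```typescript
// Fan out tracks from the interactive call to view-only participants.
const callPc = new RTCPeerConnection();    // the bidirectional session
const viewerPcs: RTCPeerConnection[] = []; // one per view-only participant

callPc.ontrack = ({ track, streams }) => {
  // Re-publish each incoming call track on every viewer connection;
  // addTrack fires negotiationneeded, which drives a fresh offer below.
  for (const viewer of viewerPcs) {
    viewer.addTrack(track, streams[0]);
  }
};

function addViewer(): RTCPeerConnection {
  const viewer = new RTCPeerConnection();
  viewer.onnegotiationneeded = async () => {
    const offer = await viewer.createOffer();
    await viewer.setLocalDescription(offer);
    // Deliver the offer to the viewer via your signaling channel (not shown).
  };
  viewerPcs.push(viewer);
  return viewer;
}
```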

Since most of the live content for a videoconferencing call originates outside of the corporate network, there’s less reliance on the virtual private network (VPN) tunneling that most workers are familiar with when remotely logging on to the corporate network for email, internal communications presentations, or shared folders. However, for on-demand content, an eCDN still makes sense. After all, if there’s a training video to watch and the user is logged in remotely over the VPN, video caching means that VPN users retrieve the requested video from the cache, over the VPN, rather than streaming the content from the internet through the VPN. Because a cache hit removes the internet-to-corporate leg of the journey, caching essentially halves the data the corporate edge must carry for each remotely streamed video.

Small, Regional Data Centers

Pulling back slightly from on-prem edge scenarios, is there common ground between fully local and fully centralized content aggregation and delivery? To answer this question, first let’s look at the centralized model. The edge compute that most services pitch sits, without irony, in a large, central data center. That’s typically true for one of three reasons:

  • A cloud service provider’s ability to spin up more resources for its overall customer base, as well as load-balancing for peak traffic models
  • A desire for cloud-based services to aggregate user data at a central location (I’ve covered the drawbacks of this earlier in the article.)
  • A lack of smaller, regional data centers, or at least a lack of space in existing, smaller data centers

While the first two are convenience plays on the part of the cloud service provider or its customers, the last one is a tougher challenge to solve. In many ways, while the streaming industry acknowledges that latencies could be lowered by moving toward edge computing, along with gaining benefits like aggregation and authentication, few of the established players in the market have a compelling reason to move from a centralized data center approach to a more regional one.

Fortunately, two trends are at work to change that. The first, as pointed out by State of Streaming survey respondents, is the continuing need to drive down latencies, especially as live-streaming trends displace the classic bias toward on-demand delivery.

Physical distance is a primary factor in latency, so it almost goes without saying that servers located closer to the end user inherently lower transit time. The shorter the transit distance, the lower the likelihood of network congestion along the path, which further strengthens the low-latency premise of edge compute and correlates directly with smaller, regional data centers.
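Some back-of-the-envelope math makes the point. Signals in fiber propagate at roughly two-thirds the speed of light, so one-way propagation delay is approximately

\[
t_{\text{prop}} \approx \frac{d}{v}, \qquad v \approx 2\times10^{5}\ \text{km/s}
\]

A 3,000 km path therefore costs about 15 ms each way (30 ms round trip) before any queuing or congestion is considered, while a regional edge 100 km away costs about 0.5 ms (1 ms round trip).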

The second trend is synchronization. For all of the talk of latency in live streaming, a primary shortfall of live-streaming events on best-effort networks (i.e., the classic internet) is a lack of synchronization of content between two devices in the same physical location. A number of companies have developed concepts to address latency, but a much smaller group of solutions addresses synchronization.

Interestingly, it’s likely that synchronization is most critical within a small geographic footprint. While there are scenarios in which your friend in another city is watching the same game as you, receiving it a few seconds ahead of you and communicating as much via text or a messaging app, the majority of complaints we’ve heard across more than a dozen surveys in the past 2 years center on synchronization within hearing distance. It’s as basic as hearing your neighbor yell in delight when their team scores, while you’re still waiting to see the same score on your home device. This is a scenario that’s easily solved via edge compute streaming synchronization.
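One simple way to implement that synchronization at the edge is to have every nearby device seek to the playback position implied by a shared reference clock. The sketch below assumes a hypothetical edge time endpoint and stream-start timestamp; neither is a real API.

```typescript
// Align local playback to a shared edge clock so neighbors see the
// same moment of the game at the same time.
const STREAM_START_MS = 1_700_000_000_000; // hypothetical epoch ms of kickoff

async function syncPlayback(video: HTMLVideoElement): Promise<void> {
  // Fetch the edge's notion of "now" to cancel out local clock drift.
  const res = await fetch("https://edge.example.com/time"); // hypothetical endpoint
  const { epochMs } = await res.json();
  const target = (epochMs - STREAM_START_MS) / 1000; // seconds into the event
  // Only seek when noticeably off, to avoid constant visible jumps.
  if (Math.abs(video.currentTime - target) > 0.5) {
    video.currentTime = target;
  }
}
```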

On the work front, there’s an even more compelling reason for edge synchronization: bring your own device (BYOD) meets multiscreen streaming needs. The BYOD approach of a decade ago is far different from today’s mobile computing scenario, as the major mobile operating systems now fully embrace sandboxing of each individual application. In the corporate world, this is critical, as the ability to sandbox a corporate application means that IT can rest easier when employees want to access sensitive corporate data from their own devices.

But also consider synchronization in the mix. If you’re involved in any work-related corporate meetings, online seminars or trade shows, or even just a videoconferencing call that you’ve joined from home, there’s often a limited amount of screen real estate on which to do the three things any good multitasker would do during the workday: join the meeting, follow the meeting, and prepare for the next meeting. In the old days of corporate teleconferences, you could multitask fairly easily. These days, with videoconferencing, you either need to fully engage your screen to be a part of the meeting, turn off your camera and minimize the meeting to get work done, or use two devices to be part of the meeting and also get your own work done.

The third approach has led to a rise in meeting participants wanting to use a second device (such as a smartphone or tablet) to join the meeting, while leaving their primary work computer free to be used for actual work. And that has led to a need to absolutely, positively make sure the content is synchronized between the two devices.

A final note on smaller, regional data centers: Starting in 2022, we’ve asked respondents to the biannual State of Streaming survey about their future plans. More than half indicate that those plans include rolling out future business in smaller-footprint data centers. That percentage has grown with each subsequent survey, and this is where I see the practical uptake in edge computing occurring over the next 5 years.

Conclusion

There’s one other area to consider when it comes to edge compute: the continued use of private clouds. Private clouds, which often offer more scalability than on-prem infrastructure because they can be scaled up or down as needed without additional hardware, seem to offer a potential sweet spot in which edge computing can thrive. This cloud-first approach could just as easily be instantiated in localized markets using a combination of smaller, regional data centers, with on-prem equipment as the fallback mode.

Alternatively, the private cloud could be used in an on-prem-first approach, reserving the cloud portion for scaling up when demand requires additional throughput and storage at a lower capital expense.
