H.267: A Codec for (One Possible) Future
H.267 should be finalized between July and October 2028. If history holds, this means H.267 won’t see meaningful deployment until 2034–2036, long after I hang up my keyboard. Here’s a brief description of what the standard is designed to deliver and my free and totally unsolicited advice to the committees and technology developers that will create it.
Main Performance Goals
Before diving into my unsolicited advice, here’s what H.267 is designed to deliver. According to the JVET's July 14, 2024 document, Proposed timeline and requirements for the next-generation video coding standard, H.267 aims to achieve at least a 40% bitrate reduction compared to VVC (Main 10) for 4K and higher resolutions while maintaining similar subjective quality.
The Enhanced Compression Model (ECM) v13 has already demonstrated over 25% bitrate savings in random access configurations, with up to 40% gains for screen content. Subjective evaluations confirm these gains, highlighting strong performance in both expert and naïve viewer assessments.
Figure 1. H.267's compression performance vs. VVC in terms of luma PSNR in the RA configuration
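Claims like "40% bitrate reduction at similar quality" are conventionally measured with the Bjøntegaard delta-rate (BD-rate) metric: fit rate-distortion curves for both codecs and compute the average bitrate difference at equal PSNR. Here is a minimal sketch of that computation, using hypothetical rate/PSNR points (the numbers below are illustrative, not ECM or VVC measurements):

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta-rate: average % bitrate difference of the test
    codec vs. the anchor at equal PSNR (negative = bitrate savings)."""
    lr_a, lr_t = np.log10(rate_anchor), np.log10(rate_test)
    # Fit cubic polynomials of log-rate as a function of PSNR.
    p_a = np.polyfit(psnr_anchor, lr_a, 3)
    p_t = np.polyfit(psnr_test, lr_t, 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (10 ** avg_log_diff - 1) * 100

# Hypothetical 4-point RD curves (kbps, dB): the test codec reaches the
# same PSNR at 40% less bitrate than the anchor at every point.
anchor_r, anchor_p = [1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5]
test_r,   test_p   = [600, 1200, 2400, 4800],  [34.0, 36.5, 39.0, 41.5]
print(f"BD-rate: {bd_rate(anchor_r, anchor_p, test_r, test_p):.1f}%")
# prints: BD-rate: -40.0%
```

JVET's reported gains are BD-rate figures of exactly this kind, computed per sequence and averaged over the test set, with the subjective studies serving as a sanity check on the objective numbers.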
Here are a few other key points about H.267:
- The codec is designed for diverse applications, including mobile streaming, live broadcasting, immersive VR/AR, cloud gaming, and AI-generated content.
- It targets efficient real-time decoding and scalable encoder complexity, supporting resolutions up to 8Kx4K and frame rates up to 240 fps.
- It emphasizes flexible support for stereoscopic 3D, multi-view content, wide color gamut, and high dynamic range.
What Comes Next? A Bitrate Reduction Doesn’t Equal Relevance
If H.267 reaches finalization in 2028, history tells us it won’t be relevant until at least 2034. That timeline alone raises a critical question: Will it even matter by then? The codec’s projected 40% bitrate reduction over VVC sounds impressive on paper, but efficiency alone doesn’t guarantee adoption. We’ve seen this with VVC, which remains largely sidelined due to slow hardware integration, licensing headaches, and power-hungry decoding—issues H.267 is on track to repeat unless it fundamentally rethinks its design priorities.
H.267’s Real Risk: Becoming Obsolete Before It Even Arrives
While H.267 aims to deliver impressive bitrate reductions and efficiency gains, there’s a risk no one wants to say out loud: it could be completely obviated by an entirely different class of codecs before it even sees widespread adoption. The danger isn’t just that H.267 will be slow to deploy—it’s that by the time it's hardware-ready, the world may have moved on.
Companies like Deep Render are already developing AI-native codecs that abandon the legacy of block-based architectures entirely. These solutions may not deliver dramatically better compression ratios than H.267 on paper, but they can be instantly deployed via NPUs already embedded in billions of devices. No need for dedicated hardware decoders, no agonizing wait for chipset refresh cycles. As NPUs become ubiquitous, the frictionless deployment of AI-driven codecs could leapfrog H.267’s theoretical gains with real-world efficiency, scalability, and time-to-market advantages.
So, the real risk isn’t that H.267 will fail technically. It’s that it will succeed technically but become irrelevant practically. When it clears the gauntlet of standardization, licensing, and hardware integration, we might live in a world where codecs are no longer “standards” but dynamic, AI-optimized algorithms updated as easily as a software patch. The question isn’t whether H.267 will work—it’s whether we’ll still need it.
Make Environmental Impact a Clearly Defined Priority
Philippe Wetzel's excellent essay, Challenges and Objectives for a New Video Compression Standard (H.267), highlights the growing challenges of achieving environmental sustainability in video compression. As video now accounts for over 80% of internet energy consumption, the rapid proliferation of encoders—driven by smartphones, social media, IoT, and machine-to-machine (M2M) applications—has shifted the industry’s focus from decoder-centric efficiency to balancing both encoder and decoder energy demands. The complexity of successive standards has led to diminishing returns, where marginal gains in compression efficiency require exponentially more processing power, especially problematic for real-time, low-latency applications like videoconferencing, cloud gaming, and autonomous systems.
Potential solutions lie in optimizing algorithms for NPUs and leveraging the energy efficiency of software-based implementations over dedicated hardware. The JVET document underscores these concerns, emphasizing the need for codec complexity that allows feasible real-time decoding while minimizing power consumption, though it stops short of setting explicit environmental targets. Together, these documents suggest that without deliberate design choices, future codecs risk exacerbating energy demands despite technological advancements.
The Hidden IP Risk: Why Legacy Architectures Come with Legal Baggage
Beyond technical stagnation, there’s another risk that H.267 faces if it clings to traditional block-based architectures: intellectual property (IP) entanglement. Every generation of video codecs—from MPEG-1 to VVC—has layered new patents on top of old ones, creating a tangled web of overlapping claims, fragmented licensing pools, and royalty obligations. This legacy of legal complexity has been a significant barrier to adoption, starting with HEVC and, unfortunately, extending through to VVC.
Now imagine an alternative: an AI-native codec designed from scratch, free from the architectural DNA of traditional codecs. By moving beyond motion vectors, block partitioning, and entropy coding—the pillars of MPEG’s IP ecosystem—AI-driven codecs could offer a clean slate from an IP perspective.
This isn’t just about legal simplicity. A codec with a cleaner IP landscape is easier to license, faster to adopt, and less vulnerable to litigation down the road. It’s also more attractive to large tech companies that are weary of navigating the legal minefield surrounding traditional codecs.
If H.267 stays locked into traditional architectures, it inherits not only technical limitations but also legal and business liabilities. Meanwhile, AI-native codecs could leapfrog ahead—not just with better compression or faster deployment, but with an IP model that’s frictionless compared to the legacy codec quagmire. In this context, abandoning the past isn’t just a technical choice—it’s a strategic imperative.
Is It Time to Abandon Block-Based Architectures Entirely?
All this being the case, here’s the uncomfortable question H.267’s architects should be asking: Are we innovating, or are we just optimizing a 40-year-old idea? Every codec from MPEG-1 to VVC has been an evolution of block-based compression. Sure, we’ve added fancy tools—transform skips, affine motion compensation, advanced intra prediction—but the fundamental approach hasn’t changed. The result? A complexity wall where every additional 1% in compression efficiency demands disproportionate increases in power, silicon real estate, and development time.
So, why not break free? Instead of iterating on block-based coding, what if we leaned fully into AI-driven architectures? Neural network-based codecs (NNVC) have already shown promising early results in experimental settings, offering the potential for radically different approaches to motion prediction, transform coding, and even entropy modeling. Imagine a codec where video is compressed not by guessing pixel redundancies but by understanding content structure through machine learning models optimized for NPUs.
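To make the contrast concrete, here is a toy sketch of the core idea behind learned transform coding. PCA stands in for what a real NNVC system would do with a deep nonlinear encoder/decoder pair, but the principle is the same: replace a fixed, hand-designed transform (like the DCT) with a basis learned from data, then quantize a compact latent. Everything here is illustrative; no real codec works on 8x8 patches with a linear basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 1000 flattened 8x8 "patches" with injected
# correlations, standing in for natural-image statistics.
train = rng.normal(size=(1000, 64))
train += (train @ rng.normal(size=(64, 64))) * 0.1

# "Train" the transform: learn a basis from the data via PCA instead of
# using a fixed block transform.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:8]  # keep only the 8 most significant of 64 components

def encode(patch, step=0.5):
    # Project onto the learned basis, then coarsely quantize the latent.
    return np.round((patch - mean) @ basis.T / step).astype(int)

def decode(coeffs, step=0.5):
    # Dequantize and reconstruct from the learned basis.
    return (coeffs * step) @ basis + mean

patch = train[0]
rec = decode(encode(patch))
print("coefficients sent:", encode(patch).size, "of", patch.size)
```

A production NNVC replaces the linear projection with learned nonlinear analysis/synthesis networks and the rounding step with a learned entropy model, but the deployment story is the point: all of the above is plain tensor math, exactly the workload NPUs are built to run.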
Yes, the technology isn’t production-ready today. But if a block-based H.267 won't be relevant until 2034–2036 anyway, why are we locking ourselves into an architecture already past its prime? If an AI-based H.267 can play on NPUs out of the box, it can avoid the decoder deployment delay that has paralyzed VVC deployments, and play on billions of devices at its launch.
Advice to the Committees (From Someone Who Won’t Be Around to Say ‘I Told You So’)
Since I’ll be long retired by the time H.267 becomes relevant, consider this my parting advice: don’t design a codec for the problems we solved a decade ago. Design it for the platforms, processors, and environmental realities of the next decade—or don’t be surprised when it’s sidelined by simpler, more adaptable alternatives. Here’s what to focus on:
- Mandate NPU and general-purpose hardware compatibility. If H.267 relies on dedicated hardware decoders, it will be DOA. Design with NPUs and flexible accelerators in mind—make them the primary target, not an afterthought.
- Make environmental sustainability an explicit requirement. Video accounts for over 80% of internet energy consumption. Stop treating energy efficiency as a secondary benefit. Build it into the standard’s core requirements for both encoders and decoders.
- Complexity is not a badge of honor. If real-time encoders can’t achieve H.267’s theoretical gains without enterprise-grade silicon, you’ve built a science project, not a standard. Feasibility matters as much as performance.
- Solve the licensing problem before it becomes one. Don’t repeat HEVC’s mistakes. If H.267’s licensing is fragmented, expensive, or opaque, the industry will default to existing codecs or AI-based alternatives—not because they’re better, but because they’re simpler to adopt.
- Ask the hard question: do we even need this? If an AI-driven codec can deliver comparable efficiency gains and be deployed instantly via NPUs, what’s the justification for H.267’s complexity, licensing, and hardware demands? If you can’t answer that clearly, it’s time to rethink the entire approach.
The greatest threat to H.267 isn’t technical failure. It’s irrelevance. Don’t spend the next decade perfecting a standard for a future that will have already moved on.
(Note: I am consulting with Deep Render but, as of this writing, haven’t tested its codec. Betting on NPUs isn’t a bet on Deep Render; it’s a bet on AI-based codecs in general.)