When it comes to engaging customers, there’s an old adage in retail: it’s about location, location, location. This adage also holds true in the quest to offer quality video programming to consumers anywhere, anytime, on any device. Video providers are also retailers – it’s just that the venue is different.
When it comes to providing a quality video experience, we all know the problem. Quality is one thing when the consumer device sits in a fixed location on a fixed-line broadband connection that delivers a relatively constant service. It’s another thing when bandwidth fluctuates, when the user’s location makes connectivity marginal, or when the provider is serving over an older mobile access technology such as 3G or 4G.
Mitigating data caps, revisited
On a more practical level, many providers assess extra charges once a consumer’s usage passes a certain consumption threshold. We covered some of that ground in a past article (Mitigating data caps).
A variety of techniques has emerged to help level the video access playing field. A common one is to reduce the number of bits it takes to deliver a video service, so that a consumer who is not in an ideal location can still receive at least a reasonable experience.
An important one is Content Aware Encoding (CAE).
There’s no single way to do it
Content Aware Encoding – sometimes also called context aware encoding – is a process that examines video content during or after encoding, so that the highest quality version of the video content can be delivered to the consumer at the lowest practical bit rate.
Rather than there being a single approach to CAE, there are at least four:
- Advanced video processing algorithms that reside in a video encoder or transcoder,
- Encoder-side or in-network video monitoring and evaluation using advanced algorithms,
- Client-side QoE monitoring and analysis, with a feedback loop to the encoder, and
- External processes that assist the player in presenting the “best” bit-rate version to the consumer.
All of these approaches evaluate video content using algorithms that approximate the characteristics of the human visual system (HVS) and make recommendations that have the effect of preserving or improving video quality while reducing bit-rate.
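The common decision at the heart of these approaches can be sketched in a few lines. The function below is a hypothetical illustration, not any vendor’s implementation: given candidate encodes of a title (or scene) with a perceptual quality score for each – a stand-in for an HVS-approximating metric such as SSIM – it keeps the cheapest rendition that still meets a quality floor, rather than a fixed bitrate ladder.

```python
# Hypothetical sketch of the core CAE decision. The bitrates and quality
# scores are illustrative assumptions, not measured vendor figures.

def prune_ladder(candidates, quality_floor):
    """candidates: list of (bitrate_kbps, quality) tuples, quality in [0, 1].
    Returns the lowest-bitrate rendition that still meets the quality floor."""
    viable = [c for c in candidates if c[1] >= quality_floor]
    if not viable:
        # Nothing meets the floor: fall back to the best available quality.
        return max(candidates, key=lambda c: c[1])
    return min(viable, key=lambda c: c[0])

# Easy-to-encode content (e.g. a static talking head) meets the floor at a
# much lower bitrate than complex content such as sports.
talking_head = [(1200, 0.97), (2500, 0.98), (4500, 0.99)]
sports = [(1200, 0.88), (2500, 0.94), (4500, 0.97)]

print(prune_ladder(talking_head, 0.95))  # -> (1200, 0.97)
print(prune_ladder(sports, 0.95))        # -> (4500, 0.97)
```

The savings come from the asymmetry this exposes: simple content gets far more bits from a fixed ladder than the human visual system can appreciate.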
Is there a ‘best’ way?
Determining the best approach depends on the video provider’s goals. If the goal is to produce video with the smallest volume of bits and the lowest impact on infrastructure, then the encoder-centric approaches taken by Harmonic, Beamr and V-Nova, which adjust video content production within the encoder itself, are worth a close look. They don’t require external monitoring, and the video output is packaging-agnostic.
Among these, there’s a difference. The V-Nova solution produces a metadata stream that is evaluated by a client-side component to enhance the video presented to the consumer. If the client component is not present, only the base stream is decoded. The Beamr CABR and Harmonic EyeQ solutions do not have this metadata component. Instead, the video reduction is done within the encoder, so a client-side process (and the associated player integration and testing cost) is not necessary. Also, Beamr CABR supports all resolutions at high frame rates, up to 4Kp30 for live video and 4Kp120 for VOD.
There also are approaches that leverage approximations of the HVS without encoder-resident components. MediaMelon implements a cloud-resident component that analyzes encoder output and produces a metadata stream that guides the player in selecting the optimal version of the stream. It also gathers data from the client-device that can be used for Quality of Experience analytics.
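On the player side, out-of-band metadata of this kind can guide rendition selection. The sketch below is hypothetical (the function and data shapes are not MediaMelon’s API): per-segment quality hints let the player pick the lowest rendition whose predicted quality is acceptable, within its current throughput budget.

```python
# Hypothetical sketch of metadata-guided rendition selection in a player.
# hints[segment][bitrate] holds a predicted quality score produced out-of-band.

def pick_rendition(renditions, hints, segment, throughput_kbps, floor=0.9):
    """renditions: available bitrates (kbps). Returns the bitrate to fetch."""
    affordable = [r for r in renditions if r <= throughput_kbps]
    good = [r for r in affordable if hints[segment].get(r, 0) >= floor]
    if good:
        return min(good)  # cheapest rendition that still looks good
    # Quality floor unreachable: take the best we can afford.
    return max(affordable or [min(renditions)])

hints = {0: {1200: 0.95, 2500: 0.97},   # easy segment
         1: {1200: 0.80, 2500: 0.92}}   # complex segment

print(pick_rendition([1200, 2500], hints, 0, 3000))  # easy: low bitrate is enough
print(pick_rendition([1200, 2500], hints, 1, 3000))  # complex: step up
```

A throughput-only ABR player would fetch the same bitrate for both segments; the quality hints let it spend bits only where the content needs them.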
Using an end-to-end approach, SSIMWAVE can evaluate video quality at the encoder, at the transcoder, in the delivery network and at the consumer end point. Over a period of years, the inventors of SSIM have advanced the algorithm and use it to identify impairments in video quality, relaying feedback that enables video providers to make adjustments.
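For intuition about what such a metric measures, here is a minimal, single-window SSIM computation. Production SSIM uses local sliding windows over luma; this simplified global version only illustrates the structure of the formula.

```python
# Simplified, global SSIM between two frames (illustration only; real SSIM
# is computed over local windows and averaged).
import numpy as np

def ssim_global(x, y, data_range=255.0):
    c1 = (0.01 * data_range) ** 2  # stabilising constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
frame = rng.uniform(0, 255, (64, 64))
identical = ssim_global(frame, frame)  # 1.0 for identical frames
noisy = ssim_global(frame, frame + rng.normal(0, 10, frame.shape))
print(round(identical, 3), round(noisy, 3))
```

A score of 1.0 means the frames are structurally identical; compression artefacts and noise pull the score down, which is what lets an encoder or monitor flag impairments.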
Another consideration is cost, including that of the license for HEVC encoding. Many video providers are sensitive to this, and all of these solutions can produce enhanced results while delivering them as H.264.
The results so far
Of course, the best way to gauge “best” is to look at the results. All of these approaches boast significant bit-rate savings while delivering optimised video streams. Bandwidth savings claims by the vendors cited in this article range between 20% and 50%, with 20% being characterised as conservative.
This not only postpones that moment when consumers hit their service data caps, but also benefits video providers by reducing the cost of delivery – especially if they use cloud-based delivery where the cloud provider is likely to charge by the volume of video delivered.
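A back-of-the-envelope calculation shows what those savings mean in delivered volume. The stream bitrate, viewing hours and the savings applied are illustrative assumptions based on the 20%–50% range cited above, not figures from any vendor.

```python
# Rough delivery-volume arithmetic for the savings range cited in the article.

def monthly_delivery_gb(bitrate_mbps, hours):
    """Volume delivered for one viewer: Mbps -> GB over the viewing hours."""
    return bitrate_mbps / 8 * 3600 * hours / 1000

baseline = monthly_delivery_gb(5.0, 40)  # assume a 5 Mbps stream, 40 h/month
for saving in (0.20, 0.50):
    reduced = baseline * (1 - saving)
    print(f"{int(saving * 100)}% saving: {baseline:.0f} GB -> {reduced:.0f} GB")
```

Multiplied across a subscriber base, and priced at per-GB cloud egress rates, even the conservative 20% figure compounds into a substantial delivery-cost reduction.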
Why CAE is important
Content Aware Encoding has become a proven way to reduce costs while improving the overall quality of the video that reaches the consumer, at whatever location, location, location the consumer happens to be. The good news is that there are multiple approaches to choose from, and because none of them are exactly alike, video providers can make the choice that is optimal for their own situations.