
AI and the dawn of the not-so-dumb pipe


By David Price and Steve Hawley

We concluded our previous article, ‘Thoughts on mitigating data caps’, with a challenge: how can Pay TV operators retain video subscribers when broadband is the only strong card in their hand? How can they prevent a future that paints them as just a dumb pipe? In two words: Artificial Intelligence. We look at AI applied to three areas: making the delivery process more intelligent, improving the user experience, and minimizing non-legitimate consumption.

AI and Video Distribution

The first attempt to make streaming delivery efficient was to cache content locally: the first person to request a video triggers a download to the nearest cache, and the second person to request the same content is served from that nearby copy rather than having to go to the original source. If no further requests arrive within a predefined time, the content is purged to make room for the next item. It is simply a first-in, first-out process.
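That early edge-caching behavior can be sketched in a few lines. Everything here is illustrative (a real CDN cache also tracks object sizes, TTLs, and byte-range requests); this shows only the first-in, first-out idea:

```python
from collections import OrderedDict

class FifoEdgeCache:
    """Minimal sketch of early edge caching with first-in, first-out eviction.
    All names are hypothetical, not any real CDN's API."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order doubles as eviction order

    def get(self, content_id, fetch_from_origin):
        if content_id in self.store:
            return self.store[content_id]          # served from the edge copy
        video = fetch_from_origin(content_id)      # first request goes to origin
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)         # purge the oldest item (FIFO)
        self.store[content_id] = video
        return video

cache = FifoEdgeCache(capacity=2)
origin_hits = []
fetch = lambda cid: origin_hits.append(cid) or f"bytes<{cid}>"
cache.get("a", fetch); cache.get("a", fetch)   # second request is a cache hit
cache.get("b", fetch); cache.get("c", fetch)   # "a" is purged to make room
```

Note that the second request for "a" never reaches the origin; only the three distinct first requests do.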

The next step was to add some smarts: as soon as a content item proved popular, it was pushed to every cache in anticipation of demand. One well-known example at the time was SkyCache (Cidera), whose claim to fame was the Monica Lewinsky transcript, which was sent simultaneously to thousands of caches within 500 milliseconds of release.

Soon came the problem of multiple video formats, needed to cater to all the different screens being used to consume content. As early as 2005, a Discovery executive told David that the company had to create 42 different versions of every piece of content ingested into its system.

On June 29, 2007, Apple’s iPhone was released, and it quickly became a major destination for content streams. To handle the load, adaptive streaming was created, adding yet another multiplier to the number of versions that had to be produced for the same streams. A whole slew of competing adaptive schemes then vied for widespread adoption.

DASH to the rescue

Adaptive streaming became a standard in the form of MPEG-DASH, which launched in 2011 and was quickly and widely adopted as the primary adaptive streaming format. (Note: David was involved in the founding of the DASH Industry Forum, which published implementation guidelines and various interoperability points.) This somewhat alleviated the plethora of formats that delivery providers had to work with.

In addition, a major global CDN provider responded to the need for all these versions by adding more intelligence to the distribution network: it stored a primary mezzanine-format version of each asset, plus the first 12 seconds in all the commonly streamed formats, to serve the majority of consumption devices. As soon as demand was recognized, idle servers around the world were spun up to transcode the content beyond the first 12 seconds into the requested format, and the resulting stream was spliced in real time to provide a full, continuous stream.
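The prefix-plus-transcode splice can be sketched as a simple generator. Here `prefix_cache` and `transcode` are hypothetical stand-ins for the CDN’s pre-encoded 12-second prefixes and its spun-up on-demand transcoders:

```python
def serve_stream(asset_id, fmt, prefix_cache, transcode):
    """Sketch of the prefix-plus-transcode approach: serve the pre-encoded
    12-second prefix immediately, then splice in chunks transcoded on demand
    from the mezzanine copy. All names here are illustrative."""
    # The first 12 seconds are already stored in every common format,
    # so playback starts with no transcoding delay...
    yield prefix_cache[(asset_id, fmt)]
    # ...while idle servers transcode the remainder (from second 12 onward),
    # spliced into the same continuous stream.
    for chunk in transcode(asset_id, fmt, start_seconds=12):
        yield chunk

# Toy usage with fake stand-ins for the cache and transcoder.
prefix_cache = {("movie1", "hls"): "prefix[0-12s]"}

def fake_transcode(asset_id, fmt, start_seconds):
    yield f"{asset_id}.{fmt}[{start_seconds}s-end]"

chunks = list(serve_stream("movie1", "hls", prefix_cache, fake_transcode))
```

The design point is latency hiding: the viewer’s player is already consuming the cached prefix during the seconds it takes to spin up a transcoder.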

All of the above was the beginning of adding intelligence to the network, but it remains a reactive approach. Going forward, the key will be to use AI to proactively – predictively – replicate and cache content as close as possible to the point of consumption.

Going from reactive to intelligent

We all know of the issues faced by Sling TV, the pioneer in large-scale subscription-based streaming. In its early days, Sling TV became associated with the distribution network’s inability to handle peak loads for popular content (especially the NBA finals). DIRECTV NOW and most other popular streaming services initially faced similar problems.

The popularity of a live streaming event can be very hard to predict with any regional precision. All it can take is a local weather event or a local college game to cause wild swings in the viewership of a live stream. SVOD, on the other hand, is relatively easy to predict, in that a title’s popularity can be estimated from prior release windows.

So how will content delivery evolve? It clearly stands to benefit greatly from AI making predictive caching more accurate. Tools that predict the emotional reaction of potential viewers to content are emerging and being trialed in various parts of the world. Understanding and predicting viewing behavior will be key to predictive caching and heuristic capacity adjustment. This intelligence also enhances the value of an advertising insertion opportunity, which brings, in itself, a huge financial dividend.
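As a toy illustration of predictive caching, consider scoring each title by its prior-window popularity weighted by a regional-affinity signal, then pre-placing the top scorers at a regional edge. Both the function names and the weighting are hypothetical; a production system would use a trained model over far richer signals (weather, local events, social chatter):

```python
def predict_regional_demand(title, region, history):
    """Hypothetical demand score: prior-release-window popularity,
    weighted by how strongly this region has favored the title before."""
    prior = history.get((title, "prior_window"), 0.0)
    affinity = history.get((title, region), 1.0)  # default: no regional skew
    return prior * affinity

def plan_precaching(catalog, region, history, slots):
    """Fill the region's limited edge-cache slots with the titles
    predicted to be in highest demand there."""
    ranked = sorted(catalog,
                    key=lambda t: predict_regional_demand(t, region, history),
                    reverse=True)
    return ranked[:slots]

# Illustrative history: title_a is popular and over-indexes in Denver.
history = {("title_a", "prior_window"): 0.9, ("title_a", "denver"): 1.2,
           ("title_b", "prior_window"): 0.5,
           ("title_c", "prior_window"): 0.8, ("title_c", "denver"): 0.5}
plan = plan_precaching(["title_a", "title_b", "title_c"], "denver", history, 2)
```

Swapping the hand-written score for a learned predictor is where the AI comes in; the placement logic around it stays the same.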

Now, on to the consumption side of the equation, where AI is being employed in multiple different ways.

AI and consumer usability

One AI focus is to help video providers optimize the user experience. We’ve all used video apps and websites that are difficult to navigate. One example is the circular reference that starts with a search for a video, finds the video, and then directs the user back to the guide rather than simply playing the video. Another is a task that takes four steps when there is a way to do it in two.

Many platform providers are beginning to employ AI to discover usability pain points and provide guidance on making these apps easier to use. At the same time, we should acknowledge that optimization may be secondary to generating ad impressions. But how many ad impressions will an end user tolerate when they get in the way of what the user came for in the first place?

Finally, AI has a place in emulating the Human Visual System (HVS). We all know that PSNR, SSIM and the like are of little value in estimating Picture Quality (PQ). Now AI is being used to judge PQ and, from that, to adjust the encoding process to deliver the best possible PQ within a given bandwidth. This enhances the user experience and lowers delivery congestion (and costs).
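To make the limitation concrete: PSNR is trivial to compute, but it weights every pixel error equally, so two distortions with very different perceptual impact can score identically. A minimal sketch over flat pixel arrays (the sample values are illustrative):

```python
import math

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.
    Purely a function of mean squared error - blind to where errors sit."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

ref      = [100, 100, 100, 100]
dithered = [102, 102, 102, 102]   # small error spread evenly (barely visible)
blocky   = [104, 100, 100, 100]   # the same total error concentrated in one
                                  # pixel (a visible artifact)
# Both distortions have MSE 4, so PSNR rates them identically,
# even though the HVS judges them very differently.
```

This is exactly the gap that learned, HVS-aware quality models aim to close.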

Stopping video pirates in their tracks

Another AI focus is anti-piracy, where AI can be used to discover and remedy non-legitimate video redistribution. Anti-piracy has become a major focus for vendors already involved in multiscreen video security, with the hope and expectation that their emerging anti-piracy solutions will help offset declines in their traditional CAS and DRM revenue streams.

One way to do this is to employ stream-specific watermarking of live events at the client device – as the MovieLabs Enhanced Content Protection specification directs – and then use automated monitoring to detect streams appearing outside their expected consumer endpoints and to identify the associated users. Another is to monitor service log-in attempts for out-of-profile user behavior. For example, if the same device makes a thousand authentication attempts in the course of five minutes, or if an end-user account suddenly has hundreds of devices associated with it, this is likely a pirate redistributing the stream.
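The out-of-profile checks described above lend themselves to simple rule-based flagging even before any machine learning is involved. The thresholds and event format below are hypothetical, chosen only to mirror the two examples in the text:

```python
from collections import defaultdict

# Illustrative thresholds matching the patterns described above.
MAX_AUTHS_PER_5_MIN = 1000   # authentication flood from one device
MAX_DEVICES_PER_ACCOUNT = 100  # device explosion on one account

def flag_anomalies(auth_events):
    """Each event is (account_id, device_id, timestamp_seconds).
    Returns the accounts whose behavior matches either redistribution
    pattern: an auth flood or a sudden swarm of associated devices."""
    attempts = defaultdict(list)   # device -> auth timestamps
    devices = defaultdict(set)     # account -> associated devices
    flagged = set()
    for account, device, ts in auth_events:
        attempts[device].append(ts)
        devices[account].add(device)
        recent = [t for t in attempts[device] if ts - t <= 300]  # 5-min window
        if (len(recent) >= MAX_AUTHS_PER_5_MIN
                or len(devices[account]) >= MAX_DEVICES_PER_ACCOUNT):
            flagged.add(account)
    return flagged

flood = [("acct1", "dev1", i * 0.2) for i in range(1000)]   # 1000 auths in 200s
swarm = [("acct2", f"dev{i}", 0.0) for i in range(100)]     # 100 devices at once
normal = [("acct3", "devX", 0.0)]
suspects = flag_anomalies(flood + swarm + normal)
```

In practice the fixed thresholds would be replaced by per-account behavioral baselines learned by the AI, but the flag-and-respond pipeline around them is the same.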

Here, an automated process identifies the offending device or user account and notifies the device owner or account holder that this anomalous usage has been detected, identified as likely piracy, and must stop. More punitive actions include revoking keys to the device – effectively shutting it down – or discontinuing distribution of that particular unicast stream altogether, potentially with notification to law enforcement.

Interestingly, positive actions can be taken as well. If a sufficient number of streams is detected in a market region that a video provider is not currently serving, the distributor can obtain rights for that region and then increase marketing activities to invite consumers there to subscribe legitimately. Not only does this plug the leak; it can also improve service and advertising revenue for both the content provider and the distributor.

Like service delivery, anti-piracy involves an ecosystem of individual technology elements that work in concert.  Watermarking, monitoring and analytics alone don’t solve the problem, and professional services are required. AI will make it easier to discover and propose remedies for individual piracy situations.

What next for AI?

There are many more use cases for Artificial Intelligence beyond video delivery, usability, and piracy, including advertising and intelligent encoding. We’ll start down those roads in future articles. Stay tuned.

This is an ongoing series of articles by the authors that examines technologies and issues in the video industry.

David Price and Steve Hawley are both judges for the international categories of the Connies awards, which this year recognize ‘Excellence in Analytics & AI for TV & Video’. You can see more details about our judges and the categories here. The first deadline for entries is February 8.
