Cisco has been demonstrating a proof-of-concept for how the management and operation of set-top boxes could be made easier by moving some television functionality to the cloud and by using the same HTML5 code to render the same UI across hybrid STBs and connected CE devices. The SOLAR proof-of-concept shows how programme guide information can be delivered from AWS servers over broadband and how the cloud servers can be provisioned to cope with peaks in demand.
During a demonstration last month, Cisco showed how HTML5 application code for the User Interface can be written once and then deployed to multiple devices, including Smart TVs, tablets and set-top boxes, in a write once, deploy everywhere scenario. In the demonstration the company used a reference STB platform (based on the Broadcom BCM7425 gateway SoC), a Samsung Smart TV and an Apple iPad, and all of them ran exactly the same User Interface code in an HTML5 browser on the client device. All the metadata and code needed to recreate the UI on the receiving devices was hosted and served from AWS (Amazon Web Services) servers in Ireland.
“We wanted to prove that you can build a responsive UI that you can write once and use anywhere,” says Matthew Spencer, Technical Lead at Cisco Service Provider Video Group. He points out that there was no prior relationship with Samsung to ensure the smooth display of the UI on its Smart TVs. “And the UI on the iPad shows all the wonderful things that you expect from that device but it uses the same HTML application that runs on the STB.”
Currently the UI is not optimized according to the size of the screen but that is the next step. Cisco is working to ensure the UI scales itself, dynamically changing to match the screen size (and therefore likely viewing distance) but without any changes to the HTML5 code.
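The kind of dynamic scaling described above can be sketched in a few lines of client-side JavaScript. This is purely illustrative: the tier names, breakpoints and the idea of tagging the document body are assumptions, not Cisco's implementation.

```javascript
// Hypothetical sketch: derive a layout tier from the rendering surface width,
// so one HTML5 codebase adapts to TV, tablet and phone without code changes.
// Tier names and breakpoints are illustrative only.
function layoutTier(widthPx) {
  if (widthPx >= 1920) return "ten-foot";   // living-room TV: larger targets, longer viewing distance
  if (widthPx >= 1024) return "lean-back";  // tablet landscape
  return "handheld";                        // phone / tablet portrait
}

// In a browser the app would react to resize events, e.g.:
// window.addEventListener("resize", () => {
//   document.body.dataset.tier = layoutTier(window.innerWidth);
// });
```

A real deployment would more likely lean on CSS media queries for most of this, with JavaScript only for behaviour that CSS cannot express.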
Cisco has been more bullish about the role of the cloud in Pay TV than most vendors but the company still believes in client-side rendering of the User Interface, as opposed to rendering the UI in the network and sending it to the STB as the graphical equivalent of a video channel.
Spencer argues that local rendering is efficient because a lot of the visual UI content remains the same even when the customer is requesting changes. So after the content of the UI has been requested the first time, it can be cached locally, minimizing the data that is requested next time. “It’s about network bandwidth optimization,” he says.
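The caching behaviour Spencer describes can be sketched as a simple client-side loader: a fragment of the UI is fetched over broadband the first time it is needed and served from a local cache on every subsequent request. The fetcher here is a stand-in for a real HTTP call, and the function names are hypothetical.

```javascript
// Illustrative sketch of local UI caching: repeat requests for the same
// UI fragment cost no network bandwidth after the first fetch.
function makeUiLoader(fetchFragment) {
  const cache = new Map();
  let networkHits = 0;
  return {
    load(fragmentId) {
      if (!cache.has(fragmentId)) {
        networkHits += 1;                           // only uncached fragments touch the network
        cache.set(fragmentId, fetchFragment(fragmentId));
      }
      return cache.get(fragmentId);                 // cached copy thereafter
    },
    networkHits: () => networkHits,
  };
}
```

In practice this is the same optimization that standard HTTP caching headers give an HTML5 app for free, which is part of the appeal of building the UI on web technology.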
As the rendering is local, the responsiveness of the UI depends in part on the ability of the data servers to deliver the code and metadata quickly (over broadband) and to cope with whatever concurrent demands are placed upon them. So the second part of SOLAR is about using the computing power of the cloud to predict peak data demands, like when viewers stampede towards the programme guide or ‘now-and-next’ parts of the UI at the top of the hour.
This is especially important for the living room TV, where linear still dominates. A key point about SOLAR is that the EPG data is delivered from the cloud (e.g. AWS) and over broadband and that includes to the STB/media gateway. So this concept requires a hybrid receive device with IP input for the data, although the video can be delivered as traditional cable, satellite and IPTV streams.
To manage server capacity, the SOLAR proof-of-concept uses anonymized viewing and EPG usage data combined with predictive analytics. Cisco uses an ‘elastic scalable compute platform’ to analyze viewing trends in real-time, although because of the time needed to compute what everyone is doing, the results are 1-2 minutes behind live. But these results are used to predict what happens next.
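Because the analytics output lags live viewing by a minute or two, the platform has to project the last observed samples forward to estimate demand now and in the near future. A linear trend is assumed here purely for illustration; Cisco has not disclosed its actual forecasting model.

```javascript
// Hypothetical sketch of the prediction step: extrapolate the most recent
// trend in the (lagged) analytics samples forward by a number of steps.
// samples: array of demand readings, one per analytics interval.
function extrapolate(samples, stepsAhead) {
  const n = samples.length;
  const slope = samples[n - 1] - samples[n - 2]; // per-interval trend
  return samples[n - 1] + slope * stepsAhead;
}
```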
Spencer says it takes 60-90 seconds to provision new server capacity on AWS (or ‘spin up a new instance’) so the idea is to have that extra capacity ready and waiting, without over-provisioning, so that this cloud-delivered UI data gives the same consumer experience, in terms of responsiveness, as traditional methods. “We don’t want reactive provisioning – we want to be able to guess the load on the servers,” Spencer explains. He says TV usage is actually quite predictable on a day-to-day basis.
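The provisioning logic implied here can be sketched very simply: with a ~90-second spin-up time, the number of warm instances is sized to the forecast load 90 seconds ahead, plus a headroom margin, rather than to the current load. The per-instance capacity and the headroom factor below are illustrative assumptions.

```javascript
// Hypothetical sketch of look-ahead (non-reactive) provisioning:
// size the server pool to predicted demand, with a safety margin.
function instancesNeeded(forecastReqPerSec, reqPerSecPerInstance, headroom = 1.2) {
  return Math.ceil((forecastReqPerSec * headroom) / reqPerSecPerInstance);
}

// e.g. a predicted top-of-the-hour EPG stampede of 5,000 req/s, with each
// instance handling 400 req/s, calls for 15 instances to be warm in advance.
```

The trade-off Spencer describes lives in the headroom factor: too low and the UI slows during a stampede, too high and you are paying for idle cloud capacity.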
In the absence of live viewing inputs for the demonstration, Cisco has used representative audience samples and amplified their behaviour so it is realistic for 100,000 people across different channels. The company input those statistics into its scalable compute platform. SOLAR is therefore a demonstration of how the cloud can be used for the data crunching that makes the cloud UI delivery more efficient, as well as the delivery of the UI elements themselves.
Cisco is convinced that the cloud has distinct, albeit currently limited, roles to play in the delivery of TV services and has spent much of this year outlining its vision of a hybrid cloud-device world. SOLAR is a good example of this hybrid philosophy, with metadata stored in the cloud but the UI rendered on the device. We wrote a report last summer about the role of the cloud for Pay TV and you can read that here.
That report also outlined the benefits of A/B testing when making more use of the cloud. Spencer notes the benefits of using the same HTML5 code for the UI across all screens. “You can make a change in the server and the clients react to that change,” he points out.
Because it becomes much easier to experiment, a service provider could make a minor change, like making the pricing of VOD movies more prominent, and then test that on a sub-set of the device population to see the response. “You then use analytics to see whether there was an impact in take-up.”
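Testing a change on a sub-set of the device population usually means deterministic bucketing: hashing a stable device identifier so the same box always lands in the same group. The hash function and the 10% split below are assumptions for illustration, not anything Cisco has described.

```javascript
// Illustrative sketch of A/B bucketing: a deterministic hash of the device ID
// assigns a stable fraction of the population to the variant UI
// (e.g. more prominent VOD movie pricing); analytics then compare take-up.
function inVariant(deviceId, variantPercent = 10) {
  let hash = 0;
  for (const ch of deviceId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 100 < variantPercent;
}
```

Because the bucketing is deterministic, a device sees a consistent UI across sessions, which matters when the experiment runs for days.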
The LUNAR project: What you can do with improved metadata
Separately from this cloud proof-of-concept, Cisco is working on ways to extract more metadata from television programmes, and on ensuring the new approaches are automated. Besides using on-screen graphics to flag up key words or events, the company is looking at the role of face detection and even experimenting with face recognition.
Face detection could be useful in setting the context. If there are two faces it could suggest, perhaps backed by other information, that an interview is being shown. When it comes to graphics, it could mean detecting the word ‘Mexico’ on a news bulletin but equally it could mean recognizing a red symbol as the sign in a sports programme that a player has just been sent off (with a red card).
The enriched metadata work goes by the internal name LUNAR at Cisco. Part of the development work is to see what you can do with the information you extract. The Mexico graphic on TV could prompt more information about the country, in the simplest use case. But far more exciting is what Spencer refers to as a ‘third eye’ application that can let us know, as viewers, when things are happening on channels we are not currently watching.
So an app could use the rich metadata extraction to tell us when a shopping channel has started to talk about a new product, or when there is a red card in the football. Spencer speculates how the ‘third eye’ could then be integrated with other TV functions. So depending on user preferences, a red card could prompt the TV to change channels or use network PVR to rewind the action to play you the incident, or you could get an alert on your smartphone.
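Spencer's examples amount to a rule engine: extracted metadata events from unwatched channels are matched against user preferences and mapped to an action. The event shapes, preference fields and action names below are hypothetical, sketched only to make the idea concrete.

```javascript
// Hypothetical sketch of a 'third eye' rule engine: metadata events from
// channels the viewer is not watching trigger actions based on preferences.
function thirdEyeAction(event, prefs) {
  if (event.type === "red-card" && prefs.followFootball) {
    // could equally return a network-PVR rewind action, per user preference
    return { action: "switch-channel", channel: event.channel };
  }
  if (event.type === "product-mention" && prefs.watchlist.includes(event.product)) {
    return { action: "notify-phone", message: event.product + " on " + event.channel };
  }
  return null; // no preference matched: stay silent
}
```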