One notable, and possibly overwhelming, trend has emerged from IBC 2014: virtualization. For those of us primarily interested in the headend-to-end-device part of the delivery chain, this story has been building throughout 2014, with announcements from encoding and transcoding vendors starting in earnest this spring. But at IBC it suddenly seems there is nothing that cannot and will not be virtualized: capture, packaging, content security, origin serving, ad insertion, captioning, even subtitling and playout are on the list, with multiplexing to follow before too long.
Talking to vendors, it is clear that the completely virtualized headend is within touching distance, although the migration to such a headend, which would reside almost entirely in a data centre, is going to take time. The impression we have been given is that this transition will be measured in years, not decades, and that migration has already begun. The big drivers for virtualization are simpler operations for delivering content to all platforms and screens, increased agility and, to a lesser extent, reduced operating costs.
There is an extraordinary degree of consensus that this is the future of the television industry. Encoding and transcoding companies that grew up championing software-based video processing on generic hardware, and those that built their businesses on dedicated hardware appliances designed to optimize the performance of compression software, are all pointing in the same direction: towards software abstracted onto ever-more-powerful generic hardware in data centre environments, exploiting the economies of scale and the continuing increases in compute power that those servers offer. In the case of encoding and transcoding (and, we would assume, all other headend functions) one of the big challenges will be managing hybrid environments, where some processing runs on dedicated appliances in the traditional headend, some is virtualized on-premise (including in a private cloud) and some is in the public cloud. For some time to come, hybrid is inevitable.
The conversation is already turning to how you manage the abstracted compute resources: how you get 'visibility' into multiple clouds, potentially spanning several cloud service providers or including a private cloud within the operator's own facilities, so that you know what resources are available and can allocate tasks to them, spinning up 'instances' of a function and spinning them down again when they are no longer needed. Cloud integration and management is going to be a crucial part of what a vendor delivers to a platform operator as part of its solution.
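To make the orchestration idea concrete, here is a minimal, purely illustrative sketch of that kind of multi-cloud visibility and placement logic. Every class, method and pool name here is a hypothetical invention for the example, not the API of any real orchestration product or cloud provider.

```python
# Hypothetical sketch: tracking capacity across several clouds and
# spinning headend-function 'instances' up and down. All names are
# invented for illustration.
import uuid


class CloudPool:
    """Tracks free capacity in one cloud (public or private)."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # free processing 'slots'
        self.instances = {}        # instance id -> function name

    def spin_up(self, function):
        """Allocate one slot to a headend function, e.g. 'transcode'."""
        if self.capacity == 0:
            return None
        self.capacity -= 1
        instance_id = str(uuid.uuid4())
        self.instances[instance_id] = function
        return instance_id

    def spin_down(self, instance_id):
        """Release the slot once the task is finished."""
        if instance_id in self.instances:
            del self.instances[instance_id]
            self.capacity += 1


class Orchestrator:
    """Gives 'visibility' across pools and places tasks on them."""

    def __init__(self, pools):
        self.pools = pools

    def place(self, function):
        # Simple policy: prefer the pool with the most free capacity.
        pool = max(self.pools, key=lambda p: p.capacity)
        if pool.capacity == 0:
            return None, None
        return pool.name, pool.spin_up(function)


pools = [CloudPool("private-cloud", 2), CloudPool("public-cloud", 8)]
orch = Orchestrator(pools)
where, inst = orch.place("transcode")
print(where)  # public-cloud (it currently has the most free slots)
```

A real orchestrator would of course add scheduling policy, cost awareness and failure handling; the point is simply that placement needs a live view of capacity in every cloud before an instance can be spun up.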
Virtualization, and the orchestration of different virtualized resources that effectively takes you into 'cloud' territory, is another example of convergence, with the television world adopting concepts already proven in IT. Like the move towards IP, there is an inevitability about it. The speed at which 'virtualization' has become a central message for so many headend vendors suggests this is not just big but also imminent. Some platform operators are already using virtualized processing, as we have reported throughout this year.
Look out for our post-IBC coverage, starting later this week, when we will be analyzing what the major headend vendors have been saying on this subject.