Telecom and TV operators need to deliver new features faster and more cheaply to increase the pace of innovation. Virtualization of both software and hardware devices reduces the complexity of embedded software and increases flexibility by simplifying feature evolution.
Continuous innovation has become a fact of life for operators, and new development lifecycles are needed in which virtualization can play a key role. A secure and standardized software environment is also required for third parties to bring innovation to the table, ensuring that operators always work with best-of-breed vendors. This is an essential ingredient of software-driven innovation and a way to future-proof any investment, as virtualized software resources can be swapped out much more easily in the future if needed. Virtualization provides fine-grained versioning through techniques such as software containers, so that different versions can be tested and changed almost on-the-fly.
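As a minimal sketch of this kind of fine-grained versioning, the snippet below (all names are hypothetical, not SoftAtHome code) routes requests between two co-deployed versions of the same feature, the way a container platform might shift traffic during a canary test:

```python
import random

# Two versions of the same feature, deployed side by side
# (in practice, each would run in its own container).
def render_epg_v1(user: str) -> str:
    return f"EPG v1 for {user}"

def render_epg_v2(user: str) -> str:
    return f"EPG v2 for {user}"

def route(user: str, canary_share: float = 0.1) -> str:
    """Send a fraction of traffic to the new version; the split
    can be changed on-the-fly without touching the CPE image."""
    impl = render_epg_v2 if random.random() < canary_share else render_epg_v1
    return impl(user)

# Moving canary_share from 0.0 to 1.0 completes the rollout;
# setting it back to 0.0 is an instant rollback.
```

The design choice here is that both versions stay deployed at once, so switching between them is a configuration change rather than a new deployment.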
Virtualization also enables the “DevOps” approach, which aims to enhance agility by linking the processes of software delivery (Dev) and infrastructure change (Ops). It represents a significant culture change within organizations, as Dev and Ops teams need to work hand-in-hand on a project basis. DevOps can be linked to the microservices concept that operators are already putting in place for their next-generation services. Microservices are highly decoupled and each focuses on a single feature, with communication between microservices happening through highly standardized APIs. Together, DevOps and microservices offer a new way of building “continuously deployed” systems.
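To make the microservice idea concrete, here is a minimal sketch, using only the Python standard library, of a single-feature service exposed over a standardized API (HTTP plus JSON). The service name and payload are invented for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A microservice owns exactly one feature and exposes it
# through a standardized API (here: HTTP + JSON).
class RecommendationService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/recommendations":
            body = json.dumps({"items": ["channel-12", "vod-417"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        # Keep the demo quiet; real services would log properly.
        pass

def serve(port: int = 8080):
    """Run the microservice until interrupted."""
    HTTPServer(("127.0.0.1", port), RecommendationService).serve_forever()
```

Because the contract is only the URL and the JSON schema, the implementation behind it can be rewritten or redeployed independently of every other service.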
Operators often feel compelled to maintain CPE that Chief Financial Officers (CFOs) deem too costly, and moving all services to the Cloud can seem like the only way to lower these costs. But virtualization techniques also bring the scalability and flexibility to deploy services on any of the home CPE, the “Fog” or the Cloud. A Fog computing setup renders services from the edge of the network rather than from centralized resources in a data centre. Proximity to end-users enables local resource pooling and reduces latency, resulting in a superior user experience. Many industry leaders see the Cloud edge (i.e. a street cabinet) as the optimal place to implement Fog. At SoftAtHome we advocate the home as an alternative place to implement many virtualized services, for the home network or between homes in the same neighbourhood. Indeed, deeper distribution reduces risk in case of outage: if one element fails, more operator services can still be delivered, at least partially.
With our virtualization approach, we “cloud-enable” CPE so operators no longer face a binary decision between keeping things in the home or in the Cloud. The architectural and operational flexibility that “Cloud-enablement” of customer-facing features provides is another way for operators to become agile. Virtualization enables operators to bring features to market even if their installed base of boxes varies widely in resources. For example, fully rendering a virtualized Graphical User Interface (GUI) in the Cloud is unlikely to add features compared to a high-end STB, but it can vastly reduce cost, as advanced user interfaces can then be displayed on old or cheap STBs. The most evolved virtualized architectures will, in extreme cases, be able to re-deploy processes in real time even across CPE, almost like a load balancer.
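The load-balancer analogy can be sketched as a simple placement decision: pick where a virtualized service should run (home CPE, Fog node or Cloud) from resource headroom and latency to the user. This is an illustrative toy, with invented host names and numbers, not an actual orchestration algorithm:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_ram_mb: int
    latency_ms: int  # round-trip latency to the end-user

def place(service_ram_mb: int, hosts: list) -> Host:
    """Pick the lowest-latency host with enough free RAM,
    the way a load balancer picks a backend."""
    candidates = [h for h in hosts if h.free_ram_mb >= service_ram_mb]
    if not candidates:
        raise RuntimeError("no host can run this service")
    return min(candidates, key=lambda h: h.latency_ms)

# A toy fleet: lightweight services land on the home CPE,
# heavier ones fall back to the Fog cabinet or the Cloud.
fleet = [
    Host("home-stb", free_ram_mb=128, latency_ms=1),
    Host("fog-cabinet", free_ram_mb=2048, latency_ms=8),
    Host("cloud-dc", free_ram_mb=65536, latency_ms=40),
]
```

Re-running `place` whenever a host's load changes is what would let such an architecture re-deploy processes across CPE in (near) real time.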
Maintaining velocity over time while keeping tight control over costs also requires vendor independence and optimal ecosystem management. Virtualization brings flexibility here too, as resources can be swapped around, and reactivity, because such swapping can happen quickly. The velocity brought by virtualization shortens Time to Market (TTM) for CPE features, as new ones can be delivered as software patches far more quickly than through full CPE firmware updates.
Today, one aim of virtualization is to let many users and services share low-cost commodity hardware and to re-use software components in as many places as possible. In our world of distributed processing across diverse and complex network infrastructures, virtualization also makes it possible to monitor and control the whole ecosystem remotely.
You can take a deeper dive into much of this in the White Paper that has just been published here.