The way audiences engage with video content has changed radically over recent years, moving from passive to more choice-based consumption models. Consumers are creating their own personalised viewing experiences, drawing from ever more diverse sources and platforms. Content providers have responded by delivering content across multiple platforms, from linear broadcast to online and mobile. However, the device or platform has become a means to an end; what consumers really care about is accessing the story they want to engage with on the screen of choice.
While audiences no longer take a linear approach to engaging with content, the majority of broadcasters and media organisations still think of and produce content in a linear, programme-centric fashion that requires a high degree of human intervention. Story-centric media production aims to shift the focus back to the creative decisions that make great TV and video, while simplifying production for multiple devices and formats.
In order to meet the huge consumer appetite for content, large broadcasters can easily produce over 50,000 video files an hour, with the final edited content distributed across a hugely diverse range of OTT and mobile platforms. At the same time, downward pressure on production budgets adds another dimension to an already complex operation.
In addition to more content across more platforms, consumers increasingly expect specific versions of the content that can be accessed via different devices – all on production budgets that are already stretched thin. Story-centric production has been seen as a potential solution to this conundrum, but it has so far failed to deliver on its promise. The biggest obstacle to progress is poor metadata. Without rich metadata, the assets stored in centralised MAM systems are difficult to access and utilise. In the current production workflow model, the only metadata usually attached to a file at ingest is its title, which makes searching through assets a time-consuming and often fruitless task.
This is changing. AI-based tools now enable enriched metadata to be generated automatically, and allow the creation of new workflows in which that metadata is handled correctly by MAM solutions. We believe that this story-centric approach is a key enabler of the ‘smart studio’, changing the way that video is produced, distributed and consumed and delivering a huge increase in efficiency. Highly granular metadata, combined with highly accurate voice and object recognition, enables video clips to be indexed, located and then shared with single-frame precision. This automation makes it possible to deliver content to different audiences, as well as to sell assets or share them with business partners and customers.
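To make the idea concrete, frame-precise search over AI-generated metadata can be sketched as below. The `Segment` and `Asset` structures, the `search` function, the field layout and the fixed frame rate are all illustrative assumptions for this sketch, not part of any specific MAM or TVU API:

```python
from dataclasses import dataclass, field

FPS = 25  # assumed frame rate, so timecodes map to exact frames

@dataclass
class Segment:
    start_frame: int
    end_frame: int
    transcript: str                                    # from speech recognition
    objects: list[str] = field(default_factory=list)   # from object detection

@dataclass
class Asset:
    asset_id: str
    title: str
    segments: list[Segment] = field(default_factory=list)

def search(assets: list[Asset], term: str) -> list[tuple[str, int, int]]:
    """Return (asset_id, start_frame, end_frame) for every segment whose
    transcript or detected objects mention the search term."""
    term = term.lower()
    hits = []
    for asset in assets:
        for seg in asset.segments:
            if term in seg.transcript.lower() or term in (o.lower() for o in seg.objects):
                hits.append((asset.asset_id, seg.start_frame, seg.end_frame))
    return hits

assets = [
    Asset("A1", "Interview", [
        Segment(0, 249, "welcome to the studio", ["presenter", "desk"]),
        Segment(250, 499, "the launch was delayed", ["rocket"]),
    ]),
]
print(search(assets, "rocket"))  # → [('A1', 250, 499)]
```

Because each hit carries an exact frame range rather than just a file name, a matching clip can be pulled into an edit, shared or sold with single-frame precision.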
The ‘smart studio’ will enable media organisations to maximise the value of each individual piece of content, using built-in intelligence to get the right content to the right audience or user. The combination of AI, a real-time search engine and a scriptable production engine makes it possible to create a true story-centric workflow, in which thumbnails of assets can be accessed as the script is being written, vastly speeding up the creative process, with re-editing handled in the scripting application.
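The script-time thumbnail access described above can be sketched minimally as follows. The `ASSET_INDEX` contents, the keyword sets and the `suggest_thumbnails` function are hypothetical; in practice the keywords would come from the AI-generated metadata index rather than being hand-entered:

```python
# Hypothetical asset index; in a real system the keywords would be
# drawn from AI-generated metadata, not entered by hand.
ASSET_INDEX = [
    {"id": "A1", "thumbnail": "thumbs/a1.jpg", "keywords": {"election", "parliament"}},
    {"id": "A2", "thumbnail": "thumbs/a2.jpg", "keywords": {"weather", "flood"}},
]

def suggest_thumbnails(script_line: str) -> list[str]:
    """Return thumbnails of assets whose keywords appear in the script
    line, so they can be surfaced in the scripting tool as the writer
    types."""
    words = {w.strip(".,").lower() for w in script_line.split()}
    return [a["thumbnail"] for a in ASSET_INDEX if a["keywords"] & words]

print(suggest_thumbnails("Coverage of the flood continues"))  # → ['thumbs/a2.jpg']
```

The design point is that matching happens against metadata, not filenames, so relevant footage appears while the story is still being written rather than being hunted for afterwards.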
The integration of TVU MediaMind with Media Asset Management systems puts metadata at the heart of video workflows. By creating one centralised search engine for all raw materials, live or recorded, that feed all distribution channels, the barriers to efficiency erected by multiple content production workflows are removed.
In today’s approach, producers hand-craft the multiple versions required for a TV show. In the story-centric ‘smart studio’, AI engines will deliver a step-change in efficiency by making existing videos instantly searchable down to the exact frame, and even available during the scripting process. AI tools will also automate the production of multiple versions and optimise content delivery to each target audience segment.
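Automated versioning of the kind described above can be sketched as a planning step that maps one finished cut onto several delivery targets. The `PLATFORM_PROFILES` table and the `plan_versions` function are illustrative assumptions; real platform specifications vary and would drive the actual transcode:

```python
# Hypothetical delivery profiles; real platform specs differ.
PLATFORM_PROFILES = {
    "broadcast":       {"resolution": (1920, 1080), "max_duration_s": None},
    "social_vertical": {"resolution": (1080, 1920), "max_duration_s": 60},
    "mobile_preview":  {"resolution": (640, 360),   "max_duration_s": 30},
}

def plan_versions(edit_duration_s: float, platforms: list[str]) -> dict:
    """For each target platform, record the output resolution and
    whether the cut must be trimmed to fit the platform's limit."""
    plan = {}
    for name in platforms:
        profile = PLATFORM_PROFILES[name]
        limit = profile["max_duration_s"]
        plan[name] = {
            "resolution": profile["resolution"],
            "trim_to_s": limit if limit and edit_duration_s > limit else None,
        }
    return plan

plan = plan_versions(90, ["broadcast", "social_vertical", "mobile_preview"])
```

A 90-second broadcast cut passes through untrimmed, while the social and mobile versions are flagged for shortening, the kind of routine decision that today is made by hand for every show.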
Much of the technology needed for the transition to the ‘smart studio’ is already available today and, when combined with a cloud-based model complete with voice and object recognition, has the potential to revolutionise the way video is produced, distributed and consumed.