
GRAMMYs using Watson AI to generate metadata and improve asset management – potential for video is clear


The Recording Academy and CBS will use IBM’s Watson AI platform to help streamline media asset and production operations at the GRAMMY Awards this Sunday (January 28). Watson will categorise video from the red carpet, tagging celebrities and topics featured during the pre-show, using a process IBM calls ‘AI Indexed Video’.

There are also non-video uses that demonstrate the potential for AI across the media asset and workflow market. With ‘Photo Workflow Enhancements’, Watson will enrich raw images with metadata including name, position, facial emotion and – for the benefit of fashion editors and lovers – colour dominance. With ‘Lyrical Analysis’, audio cues will be analysed to generate insights into the emotional tone (e.g. joy, sadness, anger, distrust and fear) of the lyrics of awards-nominated songs. This data will feed a Lyrical Analysis Dashboard where fans can visualise insights across songs, categories and years.
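The article does not say how Watson computes these fields, but the ‘colour dominance’ idea is simple to illustrate: quantise an image down to a small palette and report the colour covering the most pixels. The sketch below is a minimal Python/Pillow stand-in, not the production workflow; the filename and metadata record layout are placeholders.

```python
from collections import Counter
from PIL import Image

def dominant_colour(path, palette_size=16):
    """Return the most common colour in an image as an (R, G, B) tuple.

    A rough stand-in for a 'colour dominance' metadata field: shrink the
    image, quantise it to a small palette, and report the palette entry
    that covers the most pixels.
    """
    img = Image.open(path).convert("RGB")
    img = img.resize((64, 64))                      # shrink for speed
    quantised = img.quantize(colors=palette_size)   # reduce to a small palette
    counts = Counter(quantised.getdata())           # pixel count per palette index
    index, _ = counts.most_common(1)[0]
    palette = quantised.getpalette()                # flat [r, g, b, r, g, b, ...]
    return tuple(palette[index * 3:index * 3 + 3])

# Hypothetical enrichment of a single red-carpet photo with a metadata record.
metadata = {
    "file": "red_carpet_001.jpg",                   # placeholder filename
    "colour_dominance": dominant_colour("red_carpet_001.jpg"),
}
print(metadata)
```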

The Recording Academy is integrating IBM’s Watson Media capabilities into the awards show’s digital workflow to process over 5.5 hours of live red carpet coverage and more than 100,000 images. IBM is the Official AI Partner of the GRAMMY Awards.

Watson and other AI/machine learning systems have the potential to revolutionise the creation of metadata, both automating it and generating much more detailed information. Face and scene recognition are two examples of how this can be applied to video.
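As a simplified illustration of what automated video metadata can look like, the sketch below samples frames from a clip and emits time-coded face-detection tags. It uses OpenCV’s stock Haar cascade rather than anything Watson-specific, and the video filename is a placeholder; a real system would add identity, scene and topic recognition on top of this kind of loop.

```python
import cv2

# Detector shipped with OpenCV; a stand-in for far richer face/scene recognition.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def index_video(path, sample_every_n_frames=30):
    """Yield (timestamp_seconds, face_count) for sampled frames of a video."""
    capture = cv2.VideoCapture(path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0
    frame_no = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_no % sample_every_n_frames == 0:
            grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
            yield frame_no / fps, len(faces)
        frame_no += 1
    capture.release()

# Print time-coded tags wherever at least one face is detected.
for timestamp, faces in index_video("red_carpet_preshow.mp4"):  # placeholder file
    if faces:
        print(f"{timestamp:8.2f}s  faces detected: {faces}")
```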

