YouTube is applying the power of artificial intelligence to help identify and remove extremist and dangerous content from the platform. The company introduced machine learning in June to flag “violent extremism content” and then escalate it for human review, and recently reported that the system is getting faster: “Over 83% of the videos we removed for violent extremism in the last month were taken down before receiving a single human flag, up 8 percentage points since August.”
YouTube teams have manually reviewed over a million videos to improve this flagging technology, providing large volumes of training examples for the machine-learning systems.
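The workflow described above is classic supervised learning: each human review decision becomes a labelled example that a classifier learns from. YouTube has not published details of its models, so the sketch below is purely illustrative, using a toy naive Bayes text classifier over bag-of-words features; the example data and labels (“flag” / “ok”) are invented for demonstration.

```python
from collections import Counter, defaultdict
import math

def train(examples):
    """Build a naive Bayes model from (text, label) pairs.

    Each human review decision acts as one labelled training
    example; the model learns which words co-occur with each label.
    """
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def classify(model, text):
    """Return the most likely label for a new, unseen text."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Log prior plus log likelihoods with add-one smoothing.
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented labelled data standing in for human review decisions.
reviews = [
    ("calls for violence against a group", "flag"),
    ("extremist propaganda recruiting video", "flag"),
    ("cooking tutorial for beginners", "ok"),
    ("travel vlog exploring the city", "ok"),
]
model = train(reviews)
```

In a production system the same principle holds at vastly larger scale: more reviewed examples generally mean better automated flagging, which is why the million-plus manual reviews matter.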
Google has also added 35 more NGOs (non-governmental organisations), representing 20 different countries, to its ‘Trusted Flagger’ programme. This programme draws on outside experts to advise YouTube on its policies, flag extremist content and provide additional insights to help train the identification systems.
The new NGOs include the International Center for the Study of Radicalisation at King’s College London and The Wahid Institute in Indonesia, which is dedicated to promoting religious freedom and tolerance. “Our partner NGOs bring expert knowledge of complex issues like hate speech, radicalisation, and terrorism,” Google notes.
The company has also started applying tougher treatment to controversial videos that are not illegal and do not violate its guidelines. “These videos remain on YouTube, but they are behind a warning interstitial, are not recommended or monetised, and do not have key features including comments, suggested videos, and likes,” the company explains. YouTube is also working to help amplify voices on the platform that speak out against hate and extremism.
Meanwhile, Google.org has announced a $5 million innovation fund to support technology-driven solutions to counter extremism, as well as grassroots initiatives, like community youth projects, that promote resistance to radicalisation.
Google is a great company, but until now it has been given an easy ride when it comes to taking responsibility for what people see on its platform. If it can eradicate bad actors from YouTube, or get close to it, thanks partly to a tech fix, that will be a triumph for the company and for machine learning in media. In the meantime, traditional TV can rightly claim that it remains the true guardian of ‘safe’ video.