The early days of the Internet were often dubbed a ‘wild west’ to portray a frontier where rules were bent, broken or non-existent, and where content was often unregulated, harmful or illegal. Those concerns have bubbled under the surface and recently erupted in the fallout from the alleged illicit mining of Facebook’s user data to fuel propaganda. This has led to louder calls for digital media to be made as accountable for the dissemination of content as traditional publishers are.
Last month, for example, a British parliamentary committee told the UK government it should hold technology companies responsible and liable for “harmful and illegal content on their platforms,” warning that misinformation and “fake news” are threatening democracy. Facebook, the subject of most ire, has long maintained that it is a tech platform, not a media company, yet it defended itself against external regulation in a U.S. court in March by arguing that it makes editorial decisions, which are protected by the First Amendment.
Mark Zuckerberg’s later testimonies indicate his company is ready to yield to change – if only as a necessity to shore up its brand reputation. “I think it inevitable that there will need to be some regulation,” he told the U.S. House of Representatives in April. “We need to now take a more active view in policing the ecosystem… At the end of the day, this is going to be something where people will measure us by our results.”
Governments around the world are considering their options. In Europe, this is putting strain on the legal framework established by the E-Commerce Directive. The UK government has committed to bring forward online safety legislation as part of its Digital Charter. UK regulator Ofcom has wavered on the topic, wary on the one hand of regulation’s fuzzy boundary with censorship, while on the other protesting that digital giants ought to be doing more to ensure their content can be trusted.
Unsurprisingly, media publishers are also weighing in. It is in their interests to trim the advertising power and reach of large digital rivals. “A consensus is growing that further intervention is needed to address platforms’ role in governing online content, given its importance to the public interest in a host of areas,” stated the report ‘Keeping Consumers Safe Online’ published in July and funded by Sky.
Calling it “the single biggest gap in Internet regulation,” the report recommends a Code of Practice, an oversight body plus incentives and sanctions. Conscious of Sky’s own digital independence, the report caveats, “oversight needs to be cautious, and limited in statute, to mitigate potential risks to openness, innovation, competition and free speech.”
The growing clamour for intervention masks the efforts digital intermediaries are already making to regulate the content uploaded to their networks. Google reports that its automated technology, working alongside thousands of human moderators, detected about 80% of the 8.28 million videos removed from YouTube in the last quarter of 2017.
Facebook acted against a record 1.9 million pieces of content on its platform in the first quarter of 2018, flagged by its AI systems as fake accounts and fake news. Even the wild west was not lawless: its citizens acted to protect and uphold societal values. The weight of public opinion alone, without state intervention, may be sufficient for digital platforms to govern themselves.
This piece by Ray Snoddy in our sister publication, Mediatel Newsline, makes the astute point that, in Facebook’s case, the sheer cost of making the platform a “respectable citizen” will help level the playing field and give breathing space to the traditional media companies it competes with. He adds that “the evidence is considerable that Facebook is at great expense finally taking responsibility for what appears on its platform and those who are trying to manipulate, misuse and mislead.”