Meta crowdsources content moderation – what are the legal implications?

Question of what constitutes free speech must be answered by regulators

Meta’s latest policy changes on content moderation and fact checking will have far-reaching implications.

The implications of yesterday’s announcement by Mark Zuckerberg, which reintroduces political content to Meta platforms, crowdsources content moderation to users in the form of community notes, and removes fact checkers from the business, are still being reckoned with.

Meta’s global reach means that more voters worldwide see its platforms, particularly Facebook, than any rival’s. X might be noisy, but it has a fifth as many active users as Facebook.

Whilst Zuckerberg’s video message did reiterate his commitment to keeping content relating to terrorism, drugs or child sexual abuse off his platforms (what Meta describes as illegal and high-severity violations of its policies), lawyers are already commenting that it is hard to see how the new policies will stay within the remit of the UK’s Online Safety Act.

Iona Silverman, partner at Freeths, said:

“The justification for the removal of fact checkers seems to be to remove any bias or inhibition of free speech. However, Mark Zuckerberg does admit that changes to the way Meta filters content will mean ‘we’re going to catch less bad stuff’. This appears to fly in the face of the Online Safety Act, which requires tech companies to prevent UK users, particularly children, from accessing harmful content.

“Ofcom has published draft guidance on how to protect children and will require social media platforms to risk-assess for harms to children from spring this year. The Online Safety Act was passed with the best of intentions: to protect people. However, it seems doomed to fail unless the regulators can move more quickly.”

Mark Jones, partner at Payne Hicks Beach, commented on the difficulty of balancing free speech against the right of users not to be subjected to misinformation, disinformation and other harmful content.

"This decision re-ignites the debate of free speech versus moderation of potentially harmful content,” he said.

"Is delegating moderation of content to other users the best way of moderating content and creating a safe online space? Surely it simply increases the amount of misinformation and disinformation online. Delegating fact-checking to other users, who may not know the truth behind a story or post, increases the risks of misleading, harmful and just plain wrong content being available online.”

Concern about misinformation is shared by Russ Shaw, founder of Global Tech Advocates and Tech London Advocates, who commented:

“Meta’s shift to a ‘community notes’ system is an abdication of responsibility for managing misinformation on its platforms. Of course, free speech is a right that must be protected, but it should not come at the expense of enabling misinformation. While framed as a return to free expression, this move from Meta could embolden an environment where false information spreads unchecked, deterring users and ultimately undermining trust.

“Social media platforms have a duty to provide safe and secure spaces for discussion and interaction. By stepping back from fact-checking, Meta jeopardises its credibility to its users and invites increased regulatory scrutiny in the future.”

MAGA and Big Tech

The regulatory scrutiny to which Shaw refers isn’t going to come from the US. In fact, the jettisoning of fact checking and the offloading of content moderation to users themselves look like the next phase of the ongoing fusion of the MAGA strain of American conservatism with Big Tech.

Big Tech is certainly helping to fund that fusion. Meta donated $1 million to Trump’s inauguration, as did Google, Amazon, Tim Cook and Sam Altman.

The sheer scale of Meta’s reach makes this policy change more worrying than it would be on a smaller platform, although X has proven a perfect testbed for where crowdsourcing content moderation takes a platform. X isn’t a digital “village square.” It’s a Hobbesian toxic swamp, and it is continuing to lose users to moderated platforms like Bluesky.

Whether X’s algorithmic boosting of the most inflammatory rhetoric (much of it by its owner) and suppression of more nuanced, factual content constitutes free speech is rapidly becoming the question of the ages.

It’s a question that regulators in the EU and UK were already wrestling with, and Zuckerberg’s capitulation makes it more urgent: he has chosen to present fact checking and censorship as one and the same thing, claiming that “legacy media…has pushed to censor more and more” and that his own company’s previous content moderation policies resulted in “too much censorship” and had “gone too far”.

Regulators in the EU and Ofcom in the UK urgently need to establish exactly what “too much censorship” is and what freedom of expression really means – as well as what we’re collectively willing to sacrifice for it.