Facebook is set to dismantle its fact-checking programme in a move that founder Mark Zuckerberg claims will “restore free expression”, reflecting a significant shift in Big Tech’s approach to content moderation.
Mr Zuckerberg on Tuesday promised that parent company Meta would “get rid of fact-checkers” and replace them with a system of “community notes, similar to X”, echoing the crowdsourcing approach of rival billionaire Elon Musk.
The move comes shortly after the departure of Sir Nick Clegg, Facebook’s former head of global affairs and a key figure in the platform’s moderation policies. Sir Nick, a former UK deputy prime minister and Liberal Democrat leader, had overseen Facebook’s efforts to combat so-called ‘hate speech’. He has been replaced by Joel Kaplan, a former Republican Party strategist, who will now lead Meta’s public policy operations.
In another sign of a shift towards valuing viewpoint diversity at the company, Mr Zuckerberg has appointed Dana White, chief executive of Ultimate Fighting Championship and a longtime ally of Donald Trump, to Meta’s board of directors.
“Fact-checkers have just been too politically biased and have destroyed more trust than they have created,” Mr Zuckerberg said. He also promised to “get rid of a bunch of restrictions” on contentious topics like gender and immigration, signalling a broader pushback against corporate diversity and inclusion efforts.
Mr Zuckerberg also reflected on Facebook’s previous approach to disinformation, acknowledging that the company had gone too far in policing content in the aftermath of Donald Trump’s 2016 election victory, amid concerns over Russian interference and fake news. “We’ve reached a point where it’s just too many mistakes and too much censorship,” he said.
The overhaul will see fact-checking by experts replaced with a community-driven system similar to that on X, formerly Twitter. Users will be able to add commentary and fact-checking notes to posts, bypassing traditional expert reviews.
Explaining the new approach, Mr Kaplan told Fox News: “Instead of going to some so-called expert, it relies on the community and the people on the platform to provide their own commentary to something they’ve read.” He also praised the “new administration” in the United States for its support of free expression.
Currently, Facebook relies on third-party organisations such as Reuters, AFP, and UK-based Full Fact to identify and flag disinformation.
The changes also cast doubt on the future role of Facebook’s Oversight Board, which reviews moderation decisions and includes public figures such as Alan Rusbridger, the former editor of The Guardian. The board has nonetheless responded positively, stating: “The Oversight Board welcomes the news that Meta will revise its approach to fact-checking, with the goal of finding a scalable solution to enhance trust, free speech, and user voice on its platforms.” Even so, the move away from expert-led moderation raises questions about the board’s relevance in a system increasingly reliant on community-driven fact-checking.
The shift in Facebook’s content moderation policies coincides with the UK’s upcoming implementation of the Online Safety Act, starting in March. This legislation imposes duties on platforms like Facebook to conduct risk assessments for both illegal content, such as terrorism or child sexual exploitation, and harmful content, particularly material that could harm children. These assessments aim to identify, mitigate, and manage risks to user safety, making platforms more accountable for the content they host.
The UK communications regulator, Ofcom, is responsible for issuing detailed codes of practice to guide platforms on addressing these risks and ensuring compliance through oversight and enforcement. Ofcom has extensive enforcement powers, including the ability to fine platforms up to 10% of global annual revenue or £18 million, whichever is higher. For major platforms, this could amount to billions of pounds, making it one of the most severe financial penalties in global content regulation.
However, Facebook’s move to scale back expert-led moderation in favour of community-driven approaches signals a divergence from the Online Safety Act’s regulatory framework, which prioritises centralised accountability and pre-emptive risk management. The shift reflects a broader trend among Big Tech companies such as Facebook and Twitter (now X) towards handing content oversight to users rather than relying on centralised moderation.
Free speech campaigners, including the Free Speech Union (FSU), have consistently warned that, to avoid regulatory penalties, platforms may adopt an “if in doubt, cut it out” policy, removing lawful but controversial speech to err on the side of caution. Critics argue this could stifle public debate on contentious issues such as women’s sex-based rights, climate change, mass immigration, and populist discourse from both ends of the political spectrum. Additionally, reliance on automated systems to moderate content raises concerns about the over-removal of lawful speech, particularly satire, dissent, or culturally specific expressions.