Newly released documents have confirmed that Big Tech platforms limited the spread of millions of posts during the European Parliament elections in June (Brussels Signal).
Reports published on September 24 under the European Union’s voluntary anti-disinformation code showed that Meta, Google, and TikTok took measures to limit the spread of content deemed by fact-checkers to contain “disinformation.”
Meta explicitly stated that it deboosted tens of millions of posts.
“Between 01/01/2024 to 30/06/2024, over 150,000 distinct fact-checking articles on Facebook in the EU were used to both label and reduce the virality of over 30 million pieces of content in the EU,” the company said.
“As for Instagram, over 39,000 distinct articles in the EU were used to both label and reduce the virality of over 990,000 pieces of content in the EU. These numbers demonstrate the powers of our tools to scale the work of independent fact-checkers.”
Although exact numbers varied from country to country, the company estimated that its limitations prevented around half of the people aiming to share such posts from doing so.
While it added notifications to “outdated articles” and links, such restrictions were not applied to official government sources of information or to organisations Meta deemed “recognised global health organisations”.
The company claimed that exempting such sources from its moderation policies was intended to ensure it did not “slow the spread of credible information, especially in the health space”.
Chinese-owned platform TikTok also admitted to restricting the spread of posts deemed undesirable, although it did not specify the exact number.
“To limit the spread of potentially misleading information, the video will become ineligible for recommendation in the For You feed,” it wrote in its report.
“The video’s creator is also notified that their video was flagged as unsubstantiated content and is provided additional information about why the warning label has been added to their content. Again, this is to raise the creator’s awareness about the credibility of the content that they have shared.”
Even when it considered posts to be in line with its terms of service, Google said it automatically deboosted YouTube videos containing “low quality” information.
The company wrote: “YouTube has built systems to ensure that its ranking and recommendations surface high quality content to curb the spread of harmful misinformation and ‘borderline’ content — content that comes close to, but does not quite violate YouTube’s Community Guidelines.”
“To determine borderline content, external evaluators located around the world look at whether content is inaccurate, misleading or deceptive; insensitive or intolerant; harmful or with the potential to cause harm.
“This input trains YouTube systems to automatically identify this type of content,” it added.
Speaking to Brussels Signal, senior Patriots for Europe MEP Tom Vandendriessche said the documents proved a “systematic effort to suppress dissenting voices, particularly those aligned with the nationalist and conservative movement”.
“The reality is that these platforms have immense power to shape public opinion by deciding what gets seen and what gets buried. During the European Elections, millions of posts were censored, but the real question is: who gets to decide what qualifies as ‘misinformation’? It’s certainly not the public,” the Flemish politician said.
Vandendriessche went on to discuss his Vlaams Belang party in Belgium, which has had numerous run-ins with tech censorship. The MEP has even won a court case against Meta over its decision to artificially reduce the reach of his Facebook page.
“My own experience with Facebook, where a court ruled that the platform had unlawfully imposed a shadowban on my page, is just one example of how Big Tech can influence political outcomes by controlling the flow of information,” he said.
“It’s not about public safety or transparency — it’s about controlling the narrative. The EU, in collaboration with Big Tech, is ensuring that their version of the truth is the only one allowed to circulate, a clear violation of the principles of free speech.
“We are witnessing the rise of what can only be described as techno-communism, where the control of information serves to entrench the status quo,” he added.
Reducing the spread of certain posts was not the only way Big Tech companies sought to influence the flow of information related to the European Parliament elections.
During the elections, as well as Ireland’s earlier family referenda, TikTok said it directed thousands of users in the country towards fact-checks written by controversial outlet TheJournal.ie.
TheJournal.ie, which has previously received EU funding, has been criticised for its “fact-checks”, described by critics as “dubious” and “almost universally… ideologically motivated, partisan nonsense”.
Google also launched several projects directed at “prebunking” so-called “disinformation”.
Targeting users in France, Germany, Italy, Belgium, and Poland, the company cooperated with third-party organisations to attack methods allegedly used to “advance disinformation”.
It also released a number of videos that attempted to challenge the ways “misinformation” about migration was spread.
Reflecting on the release of the information, the EU’s competition commissioner Margrethe Vestager and transparency tsar Věra Jourová were positive, arguing that the reports demonstrated how Brussels had forced Big Tech to take on disinformation.
“The Digital Services Act and the specific measures of the Code constitute a robust framework to protect the integrity of elections,” Vestager said.
“This creates a digital environment where actual enforceable standards ensure that Europeans are better protected against the threat of disinformation.”
Worth reading in full.