With European elections only two months away, and increasing worries about cyber threats, the European Commission (the Commission) has published recommended anti-disinformation measures for social media and search engines.
Warning that there are “systemic risks online that may impact the integrity of elections”, the Commission has said it will now “guide” Very Large Online Platforms and Search Engines like Facebook, Instagram, Google Search and YouTube through the June elections.
As reported by the Brussels Signal, under the EU-wide Digital Services Act, big tech companies with over 45 million active users risk fines of up to 6 per cent of their global annual turnover if the Commission believes they are failing to address online mis- and disinformation (as those terms are defined by the Commission) associated with electoral processes. The report continues:
This can mean large platforms face difficult judgment calls between protected political content, such as political satire protected by free speech, and “harmful political disinformation”, which is “crafted with the intent to sway voters and manipulate elections”.
If content constitutes disinformation or an attempt at election interference, it falls within the scope of the DSA, and platforms face corresponding duties under EU law.
In all cases, platforms need to “implement elections-specific risk mitigation measures tailored to each individual electoral period and local context”.
The “mitigation measures” can include promoting official information on electoral processes, media literacy initiatives, and adapting their algorithms to stop content that threatens the integrity of electoral processes from going viral.
Identified disinformation and foreign information manipulation and interference (FIMI) will need to carry fact-checking labels “provided by independent fact-checkers and fact-checking teams of independent media organisations.”
Some subjects, however, may be tricky for platforms to address, such as “forms of racism, or gendered disinformation and gender-based violence”.
Platforms also could face difficult judgment calls with “public incitement to violence and hatred to the extent that such illegal content may inhibit or silence voices in the democratic debate, in particular those representing vulnerable groups or minorities”.
Tech companies will need to adopt specific measures, such as a response mechanism during elections, to reduce the impact of such incidents on the outcome of elections.
The Commission says it expects its new guidelines to be formally adopted in April.
Worth reading in full.