The Government’s controversial disinformation team is developing a secretive AI programme to trawl through social media looking for “concerning” posts it deems problematic so it can take “action”, writes Sarah Knapton for The Telegraph. Here’s an extract:
Records show the Department for Science, Innovation and Technology (DSIT) recently awarded a £2.3 million contract to Faculty AI to build monitoring software which can search for “foreign interference”, detect deepfakes and “analyse social media narratives”.
The platform is part of the Counter Disinformation Unit (CDU), which was set up in 2019 and sparked widespread criticism for amassing files on journalists, academics and MPs who challenged the Government’s narrative during the pandemic.
The unit, which has since been rebranded the National Security Online Information Team (NSOIT), has links to the intelligence agencies, which has allowed it to avoid public scrutiny.
DSIT said the new AI tool, called the Counter Disinformation Data Platform (CDDP), is looking solely for posts “which pose a threat to national security and public safety risk”.
The current focus of beta testing is the influence of foreign states during elections.
However, heavily redacted documents obtained by Big Brother Watch through Freedom of Information (FoI) requests show that the Government is reserving the right to also use the platform for other issues.
An executive summary for the project states: “While the CDDP has a current national security focus the tool has the ability to be pivoted to focus on any priority area.”
Since 2021, contracts show the Government has spent more than £5.3 million on developing the CDDP and other disinformation projects including “detecting coronavirus disinformation” and “analysing climate related mis/disinformation on social media”.
FoI documents reveal counter-disinformation teams are concerned about “anti-vaxx rhetoric” and have taken an interest in social media posts “criticising Covid-19 vaccines”.
The teams are also looking into those posting about cancer treatments, mask wearing, and the 5G phone network.
A recent report on the CDDP – disclosed via FoI – shows the platform would be used by analysts to “find the most concerning posts” so they can be reported to “policymakers and ops teams on what may require action”.
DSIT said that once content was flagged, officials would refer posts back to the major platforms, which could then decide what action to take.
The Government said that it respected freedom of expression and would only monitor “themes and trends”, not individuals.
However, past Subject Access Requests have revealed that the CDU and its contractors produced reports on mainstream commentators and experts for criticising government policy.
Dr Alex de Figueiredo, of the London School of Hygiene and Tropical Medicine, was identified as a potential source of misinformation for querying whether all children needed to be vaccinated against Covid-19.
The activities of Prof Carl Heneghan, the Oxford epidemiologist who advised Boris Johnson, were also monitored by the unit, as well as Molly Kingsley, who set up a campaign to keep schools open during the pandemic.
It comes after JD Vance, the US vice-president, launched a scathing attack on the British Government and its European counterparts at the Munich Security Conference last week, warning that “basic liberties” such as free speech were under threat.
Lord Young of the Free Speech Union said: “To scale up the British arm of the censorship-industrial complex at a time when it’s being dismantled on the other side of the Atlantic is politically unwise, to put it mildly.
“It’s particularly tin-eared given that the social media platforms that will be targeted by this new robo-censor are all American-owned.
“To the Trump-Vance administration this will look like another attempt to ‘Kill Musk’s Twitter’, the self-professed agenda of a pro-censorship lobby group founded by Keir Starmer’s chief of staff.”
There’s more on this story here.
The Government’s AI-driven disinformation tool closely mirrors a proposal from an emergency meeting convened by the pro-censorship Centre for Countering Digital Hate (CCDH) following the Southport riots. That meeting, attended by DSIT, the Home Office, Ofcom and the Counter-Terrorism Internet Referral Unit, explored ways to expand state control over online content.
One of the key proposals was amending the Online Safety Act to give Ofcom “emergency response” powers to tackle so-called misinformation deemed a risk to “national security” or “public safety.”
What’s now being proposed by DSIT is a similar idea — only this time, there’s no need for Parliament. Why bother with the scrutiny of a legislative amendment when a shadowy, non-statutory agency can simply expand its own remit behind closed doors?
The FSU has covered the CCDH’s involvement in this proposal in more detail here.