UK minister defends government’s rebranded Counter Disinformation Unit

The UK government’s shadowy Counter Disinformation Unit (CDU) may recently have been rebranded as the National Security Online Information Team (NSOIT), but it continues to face criticism over its targeting of lawful online dissent and the opacity of its operations, reports Civil Service World.

The CDU was first established by government ministers in 2019 as a means to combat foreign interference in European elections. During the Covid lockdowns, however, the secretive unit’s purview expanded to include monitoring social media posts by members of the British public for possible misinformation, and flagging content considered ‘problematic’ to social media companies for censorship.

The government has since claimed that the CDU/NSOIT cannot mandate that platforms remove content, which is true. But it is also true that the government enjoys a special relationship with several social networks by dint of its “trusted flagger” status, something that would inevitably give officials extra weight when flagging content for review.

In the weeks leading up to last year’s rebrand, the CDU was the subject of controversy surrounding reports that it had monitored and flagged for deletion posts – including from senior Conservative, Labour and Green politicians – that were not misinformation, but merely critical of government policy.

Ministers have repeatedly poured cold water on such claims – and are still doing so.

In response to questions in parliament last week regarding the unit’s attempts to prevent legitimate criticism of the government by MPs, journalists and academics, Jonathan Berry – a junior minister in the Department for Science, Innovation and Technology, which now houses NSOIT – said: “I can confirm not only that it is not the role of NSOIT or the CDU to go after any individuals, regardless of their political belief, but that it never has been.”

Pushed as to whether NSOIT has a policy prohibiting it from flagging lawful domestic speech to social media companies for terms-of-service violations, Berry – whose official title is Viscount Camrose – repeated that it has never been the unit’s role to go after individuals, before stating: “NSOIT looks for large-scale attempts to pollute the information environment, generally as a result of threats from foreign states.”

He added: “I am happy to say in front of the House that the idea that its purpose is also to go after, in some ways, those who disagree politically with the Government is categorically false.”

Berry would not be drawn on exactly who staffs the unit, but did respond to a query about its composition by stating that it “comprises civil servants who sit within DSIT, and it occasionally makes use of external consulting services”.

One of the external consulting services Berry may have been referring to is the AI-based internet monitoring firm Logically.

Logically has been paid more than £1.2 million of taxpayers’ money by the Department for Digital, Culture, Media and Sport (DCMS), which previously housed the CDU, to monitor and analyse social media posts.

In its promotional literature, the company boasts that it does so by “ingest[ing] material from over 300,000 media sources and all public posts on major social media platforms”, before using AI to identify those that are potentially problematic.

During lockdown, its regular reports for the CDU were entitled “Covid-19 Mis/Disinformation Platform Terms of Service”, which might suggest the aim was to target posts that breached the terms of service of social media platforms. In fact, the company also logged legitimate posts by respected figures including politicians, academics and scientists questioning lockdown or arguing against the mass vaccination of children against Covid.

One such report repeatedly featured TalkRadio journalist Julia Hartley-Brewer. Among her lockdown-sceptical tweets reported to the government was one from January 2021 sharing a clip from her TalkRadio show, alongside which she wrote: “Another personal experience of the damage lockdown causes from a @talkRADIO listener. Her fiancé’s business is closed down, her father’s cancer treatment cancelled, and her grandma is scared to even leave her home. This lockdown is a national tragedy way beyond Covid deaths.”

Also highlighted in a CDU report from May 2020 is a Telegraph article by Conservative MP David Davis headlined: “Is the chilling truth that the decision to impose lockdown was based on crude mathematical guesswork?” Davis was described by Logically as “critical of the Government, with the majority of comments criticising Imperial College and blaming [Prof Neil Ferguson] personally for lockdown”.

Elsewhere, a post by Dr De Figueiredo, the London School of Hygiene & Tropical Medicine researcher who also works for the Vaccine Confidence Project, was flagged. He wrote: “People who think we should be mass vaccinating children against Covid-19 poorly understand at least one of the following: (a) risk, especially absolute risk (b) ethics (c) natural immunity (d) vaccine confidence (e) long Covid.” When Dr De Figueiredo made the comment, the Joint Committee on Vaccination and Immunisation had opted not to recommend mass vaccination of children.

Comments made by Silkie Carlo, the director of Big Brother Watch, on Talk TV at the end of 2021 – in which she objected to vaccine passports and branded the proposals “a vision of checkpoint Britain” – were also logged by Logically as problematic.

As to why these posts were considered “problematic” and in need of flagging to a government agency, a spokesman for the company said it sometimes includes legitimate-looking posts in its reports if they could be “weaponised”.

“Context matters,” the spokesman explained. “It is possible for content that isn’t specifically mis- or disinformation to be included in a report if there is evidence or the potential for a narrative to be weaponised.”

The spokesman did not elaborate on the criteria Logically uses to determine the point at which lawful yet dissenting, sceptical or contrarian speech meets the threshold for consideration as ‘weaponisable’.