A shadowy state agency previously used to monitor lawful but dissenting speech during the Covid lockdowns has now been implicated in efforts to suppress online criticism of immigration policy, multiculturalism, and ‘two-tier policing’ during the Southport riots in August last year.
Internal emails, disclosed by Congressman Jim Jordan, Chair of the House Judiciary Committee, as part of an investigation into transnational censorship, reveal that officials within a Whitehall monitoring unit flagged social media posts to major platforms, warning that certain content was “exacerbating tensions” during a period of widespread civil unrest.
The unit, known as the National Security Online Information Team (NSOIT), operates within the Department for Science, Innovation and Technology (DSIT) under the authority of the minister, Peter Kyle. Formerly the Counter Disinformation Unit (CDU), it was originally established to monitor foreign influence campaigns but saw its remit expanded during the Covid pandemic to cover lawful domestic speech, including criticism of public health policy and lockdown scepticism. Conservative MP David Davis, for instance, was among several public figures cited in CDU files as “critical of the Government” after questioning the mathematical reasoning behind Imperial College’s pandemic modelling.
One of the messages released by the US House Judiciary Committee was sent on 3rd August 2024 – the worst weekend of the riots – and warned of “significant volumes of anti-immigrant content” as well as “concerning narratives about the police and a ‘two-tier’ system that we are seeing across the online environment”. The email listed several examples of social media content, including one said to “misrepresent the government’s response to further a sense of division”. Although none of the posts were alleged to breach the law, DSIT pressed TikTok to confirm “what content you are seeing across your platform” and “any measures you have taken in response,” adding: “We’d be grateful if you could come back to us on those two points as soon as you are able to.”
The next day, 4th August, another message flagged, as a matter of “urgency”, a TikTok video showing a user scrolling through a rejected Freedom of Information (FOI) request about the location of asylum hotels. An NSOIT official wrote to the platform: “We wanted to draw your Trust and Safety team’s attention to this piece of content. It shows scrolling of a freedom of information request posted on this webpage, here, and refers to asylum seekers as ‘undocumented fighting age males’.” The message concluded with a further prompt: “Please could this be assessed… There is a definite sense of urgency from here.”
The decision to flag material based on an FOI request suggests that it was not only the content of the speech, but the very fact that someone had publicly acknowledged seeking official information about immigration policy through formal channels, that was considered sufficiently inflammatory to warrant intervention. In a parliamentary democracy, that sets a dangerously low threshold for state involvement in the regulation of lawful speech.
In a separate exchange, officials highlighted a video posted on X showing street celebrations in Manchester, where men were seen waving Pakistani flags. Captioned simply “Looks like Islamabad but it’s Manchester”, the clip was uploaded by the account Radio Genoa and received more than 14 million views. NSOIT officials, apparently applying their own interpretive lens, characterised the video as an example of “misleading or dangerous narratives”, and claimed it had been “shared in order to incite fear of the Muslim community”.
Government sources told the Telegraph that Mr Jordan’s committee had misunderstood the role of NSOIT, which they said was to assess whether tech companies were taking action on harmful content, not to order them to remove it. A spokesperson insisted: “The Government has no role in deciding what actions platforms take on legal content for adults – that is a matter for them, according to their own rules.”
While that may be formally true, it is also the case that DSIT enjoys a special relationship with several social networks by dint of its “trusted flagger” status, which inevitably lends extra weight to the content it flags for review.
The legal issues at stake are particularly troubling. Article 10 of the European Convention on Human Rights makes clear that any interference by a member state with the right to freedom of expression must be “prescribed by law”. Yet NSOIT is not authorised by any Act of Parliament and has no formal judicial or law enforcement function.
As to its future, it’s worth noting that, during a post-lockdown debate on the Online Safety Bill in the House of Commons in 2022, Chris Philp, then Minister for Technology and the Digital Economy under the previous Conservative government, remarked: “As far as I am aware we intend to continue with the [CDU, now NSOIT]… Clearly, I cannot commit future Ministers in perpetuity, but my personal view – if I am allowed to express it – is that that unit performs a useful function and could valuably be continued into the future.”
Clearly, Peter Kyle feels the same way. But in the wake of last year’s civil unrest, and amid the increasingly fraught debates now taking place online, the question is this: is that “useful function” monitoring people who commit criminal offences online… or spying on those who question the benefits of unfettered immigration or the UK’s longstanding policy of multiculturalism, or who accuse the authorities of ‘two-tier policing’?