As anti-migrant protests continue to spread across the UK, the Government is launching a national police unit to monitor social media for early signs of protest activity and public dissent – just as new online safety laws begin to restrict what unverified users can see and share. The combined effect of this pincer movement – monitoring those who post and limiting how much of their content others are allowed to see – is expected to redraw the boundaries of acceptable speech, shifting how digital dissent is tracked, regulated and, increasingly, suppressed.
Detectives from across the UK will be seconded to a specialist investigations team tasked with identifying emerging threats linked to civil disorder. The division, assembled by the Home Office, is intended to “maximise social media intelligence” after forces were criticised for failing to anticipate the scale and coordination of last summer’s riots.
Its creation comes amid growing concern that Britain may face another season of unrest, with demonstrations outside asylum hotels increasing in size and frequency. On Saturday, crowds gathered in Norwich, Leeds and Bournemouth, with further protests expected.
The new unit, formally titled the National Internet Intelligence Investigations team, will be based at the National Police Coordination Centre (NPoCC) in Westminster. The NPoCC coordinates police deployments for “nationally significant protests” and civil emergencies, and previously oversaw Operation Talla, the police response to the Covid-19 pandemic and associated lockdown enforcement.
Details of the new team were set out in a letter from the Policing Minister, Dame Diana Johnson, to Dame Karen Bradley, the Chair of the Home Affairs Select Committee. Published shortly before Parliament rose for the summer, the letter followed a request from the Committee for further clarity on how the Government planned to implement its recent recommendations on public order policing.
In its April report on the 2024 summer disorder, the Committee warned that forces were struggling to monitor and respond to fast-moving online threats. It urged the Government to ensure that any reform of the national policing system included “enhanced capacity to monitor and respond to social media at the national level”, while also supporting local forces to develop intelligence-gathering capabilities of their own.
Although many police forces had attempted to track protest activity online, the Committee found that they were often overwhelmed by the volume, velocity and complexity of posts, particularly on encrypted platforms like Telegram. Several senior officers described the process of verifying online information as “resource intensive”, with one warning that “false positives” could result in forces misallocating limited resources.
In her response to the Committee, Dame Diana acknowledged that social media had played a “significant role in inciting violence” during the unrest, and said the Government was “carefully considering” the Committee’s recommendation. The new internet intelligence team, she explained, would “provide a national capability to monitor social media intelligence and advise on its use to inform local operational decision making”. It would also serve as “a dedicated function at a national level for exploiting internet intelligence to help local forces manage public safety threats and risks”.
Dame Diana also cited the Online Safety Act, key provisions of which came into force last week, as a complementary strand of the Government’s efforts to tackle problematic speech and expression. Although those measures were not in effect during last summer’s unrest, she described the Act as a “key plank” of the strategy to compel platforms to remove illegal content more swiftly and to tighten enforcement of age restrictions.
Under provisions introduced on 25 July, sites hosting adult content must now implement what Ofcom, the communications regulator, calls “highly effective” age verification measures – including, in some cases, photo ID checks – to prevent under-18s from accessing material deemed unsuitable. In theory, adults can regain access by verifying their age. But X (formerly Twitter), in particular, currently offers no clear mechanism for doing so. Age checks are being rolled out gradually and appear to activate only in response to opaque internal triggers, leaving many users effectively defaulted into restricted mode with no obvious route to restore full access.
The timing of the new rules has coincided with a wave of protests outside asylum accommodation sites, some of which appear to have been partially obscured on social media platforms. In at least one instance, footage of a demonstration in Leeds was briefly inaccessible to users in the UK, apparently due to the platform’s age-verification settings.