His Majesty’s Inspectorate of Constabulary and Fire & Rescue Services (HMICFRS) has published the second tranche of its report on the police response to the serious disorder that followed the Southport murders in July 2024.
Commissioned by the Home Secretary, the review sought to identify lessons from what was, according to the National Police Chiefs’ Council, the most significant mobilisation of public order resources since 2011. Tranche 2 focuses in particular on how the police managed intelligence, online content – including so-called ‘mis-’ and ‘disinformation’ – and post-disorder criminal investigations.
In parts, the report offers a welcome corrective to the more alarmist narratives that circulated in the wake of the riots.
At the time, government rhetoric was unequivocal: the far-right had weaponised digital platforms to orchestrate violence from outside the communities affected. Speaking at a press conference on 1 August, Prime Minister Keir Starmer declared: “As far as the far-right is concerned, this is coordinated, this is deliberate.” The unrest, he said, had been “clearly driven by far-right hatred,” fuelled by online disinformation – particularly false claims that the suspect in the Southport murders was a migrant. “Violent disorder [was] clearly whipped up online,” he warned, adding a pointed message to the social media platforms themselves: “It’s happening on your premises, and the law must be upheld everywhere.”
However, HMICFRS explicitly found “no conclusive or compelling evidence” that the disorder “was deliberately premeditated and co-ordinated by any specific group or network.” Most offenders were local, often young, and had no ties to extremism. The report also cites the Children’s Commissioner, who similarly concluded that conversations with those arrested “do not support the prevailing narrative… that online misinformation, racism or other right-wing influences were to blame.” Although ‘harmful’ online content may have circulated, the report acknowledges that the causal factors were “more complex than were initially evident,” including longstanding social deprivation, loss of trust in policing, and generalised political disaffection.
But embedded in the report is a section that reads very differently. Having noted that only three people were convicted under the Online Safety Act, the report pivots not to scepticism about the legislation, but to frustration that it lacks real-time bite. “Unless regulation and enforcement of illegal content is strengthened, and the capability is established for its immediate removal,” it warns, “the provisions of the Act will have little or no bearing on the real-time effects of online content related to rapidly evolving serious disorder.”
From a free speech perspective, one might view the absence of real-time takedown powers as a principled safeguard against censorship. HMICFRS, by contrast, treats it as a flaw – a gap to be closed. Perhaps that’s why, in one recommendation, HMICFRS calls for enhanced police capacity to “recognise, analyse and respond to information and intelligence on disorder, particularly at times of national emergency.” That, in itself, is unremarkable: the focus remains on disorder, which is a core concern for the police.
What is more disquieting is the way the logic occasionally drifts. In another passage, the report states:
“In the context of a national emergency, such as the widespread disorder that took place in summer 2024, once content is posted the potential harm is near instant.”
Here, the reference to disorder is illustrative rather than integral – a parenthetical example. What’s left is the formulation: in a national emergency, online content causes near-instant harm. The object of concern is no longer necessarily the event, but the content itself. The logic pivots from managing unrest to managing speech, and from treating speech as contributory to treating it as constitutive of emergency.
It’s a subtle shift, but it lays the rhetorical foundation for a much more expansive conception of regulatory risk in which speech is not merely disruptive, but dangerous in itself.
Is this reading too speculative? Unfortunately not. The same logic underpins the proposals advanced last August by the Centre for Countering Digital Hate (CCDH), whose post-riot policy paper urged the government to amend the Online Safety Act to grant Ofcom sweeping “emergency response” powers. The recommendations were developed at a closed-door meeting, held under the Chatham House Rule and attended by officials from DSIT, the Home Office, Ofcom and the Counter Terrorism Internet Referral Unit – a structure that enabled CCDH to present its policy programme as if it were consensus.
Among the most far-reaching proposals was a call to revise section 175 of the Act so that the Secretary of State for DSIT could issue legally binding directives to Ofcom during moments of “emergency” or “crisis,” compelling the removal of content deemed a threat to “national security” or “the health or safety of the public.” On paper, this may sound defensible. In practice, it is fraught with risk. “Emergency” is a pliable term, always liable to expand under pressure. And the line between “misinformation” and “plausible hypothesis” often turns not on accuracy, but on timing. The lab-leak theory of Covid-19 – dismissed as conspiracy in 2020, cautiously endorsed in 2023 – is a case in point.
This is not a hypothetical concern. The current Secretary of State for DSIT, Peter Kyle MP – a close ally of Sir Keir Starmer – has already signalled frustration with the limits of the current regime. Under CCDH’s model, it would fall to him to determine not only when an emergency exists, but what speech must be suppressed within it. Take, for example, the climate debate. Kyle was among the Labour MPs who proudly claimed to have forced the government to declare a “climate emergency” in 2019. CCDH, meanwhile, defines “climate denial” as “arguments used to undermine climate action.” It is not difficult to imagine a future in which criticism of Net Zero policy is reframed as a threat to public safety, not because it is false, but because it is politically inconvenient.
This, ultimately, is the free speech risk embedded in both the CCDH agenda and the HMICFRS report: the gradual normalisation of censorship in the name of crisis response. When emergency becomes the operative category, and definitions of harm are shaped by political authority, dissenting views can be swiftly recast as public threats. The danger lies not just in the powers themselves, but in the expanding logic that justifies their use.