The UK’s online safety watchdog has urged social media companies to go “above and beyond” their legal obligations under the Online Safety Act by censoring “misogynistic” and “hypermasculine” content – even when it is entirely lawful.
In its latest guidance, Ofcom calls on tech firms to take comprehensive action against “misogynistic influencers” and online speech that could “normalise harmful beliefs,” even if it does not breach the law. This move confirms what free speech advocates have long warned: the Online Safety Act’s regulatory framework could create a culture of “if in doubt, cut it out,” as platforms seek to avoid falling foul of the watchdog’s enforcement powers.
The Act grants Ofcom the ability to fine companies up to £18 million or 10% of their global annual turnover, whichever is higher, if they fail to prevent illegal content from spreading. But now, for the first time, the regulator is nudging firms to go beyond their legal duties, pressuring them to restrict lawful speech to align with broader government objectives on online safety.
Ofcom’s report states: “Misogynistic speech is often not illegal, but, at scale, it can normalise harmful beliefs in men and boys and impact women and girls’ experience both online and offline.” It warns that social media algorithms are “rewarding” such content with greater reach among young men and boys and urges platforms to act accordingly.
But this raises an obvious question: what exactly is “misogynistic speech,” and who gets to define it? While few would deny that misogyny exists as a social problem, the term itself is both broad and subjective, stretching from the unambiguously hateful to the merely provocative or politically unfashionable. What’s more, Ofcom’s own definition suggests that speech does not have to be overtly misogynistic in intent to be flagged as a problem. Instead, it is enough that, “at scale,” it might shape attitudes in undesirable ways.
The potential consequences of such an elastic framework are hard to ignore. Given the creativity of everyday language, how will automated systems—often blunt instruments at the best of times—detect “misogyny” with any degree of nuance? The likely outcome is sweeping over-censorship, as platforms err on the side of caution, removing content that risks tripping Ofcom’s expanding regulatory thresholds. This concern is far from hypothetical: studies of past regulatory interventions in Germany and the EU have shown that voluntary or semi-coerced moderation regimes often lead platforms to take down more speech than strictly necessary in order to avoid regulatory risk.
The guidance does not name individuals, but public discussion has centred on figures like Andrew Tate – a British former kickboxer and self-proclaimed “misogynist” who has amassed millions of followers online. However, Ofcom’s recommendations go far beyond high-profile cases like Tate’s, setting out a broader expectation that companies should restrict speech even when they are not legally required to do so.
The regulator has launched a consultation on its proposals, suggesting platforms introduce “nudges” to deter users from engaging with misogynistic content, demonetise certain accounts, and use AI tools to detect and block revenge pornography. It has also urged companies to clamp down on “pile-ons” and harassment against women, alongside their legal duties to tackle cyberflashing and digital stalking.
Melanie Dawes, Ofcom’s chief executive, said: “No woman should have to think twice before expressing herself online, worry about an abuser tracking her location, or face the trauma of a deepfake intimate image of herself being shared without her consent.” She called on companies to “set a new and ambitious standard” for online safety.
While the guidance is voluntary, its publication raises pressing questions about regulatory overreach. The Online Safety Act was already criticised for imposing vague “duty of care” obligations that could pressure tech firms into excessive censorship. Now, Ofcom appears to be actively encouraging them to take an expansive interpretation of their responsibilities—potentially normalising a system where speech need not be illegal to be restricted.
The move also risks further escalating tensions between UK regulators and Silicon Valley. Elon Musk, who reinstated Tate’s X account after acquiring the platform, has been outspoken in his opposition to Britain’s online safety laws, positioning X as a defender of free speech. Mark Zuckerberg, the founder of Facebook, has also criticised European online regulation, accusing policymakers of “institutionalising censorship” to target American tech firms.
Ofcom is currently implementing the Online Safety Act, which received Royal Assent in October 2023. The legislation introduces a phased approach to enforcement, with duties related to illegal content becoming enforceable around March 2025. In December 2024, Ofcom published its Illegal Harms Codes of Practice and accompanying risk assessment guidance, requiring service providers to complete their assessments by mid-March 2025.
The regulator’s forthcoming guidance on “misogynistic” content remains voluntary – unlike the Act’s binding duties on illegal content and child safety. But as past voluntary agreements between governments and tech companies – such as the EU’s Code of Conduct on Countering Illegal Hate Speech Online – have shown, “voluntary” in name often means de facto mandatory in practice. Faced with pressure from regulators and the media, platforms are unlikely to wait for direct enforcement; instead, they will pre-emptively tighten their speech policies, ensuring that anything remotely contentious is removed before it becomes a liability. Under the guise of “safety,” lawful but politically sensitive speech could become collateral damage.
There’s more on this story here.