For years, politicians from across the political spectrum insisted the Online Safety Act would focus solely on illegal content – shielding children from pornography, criminal exploitation, and material encouraging or assisting suicide – without threatening free expression. But from the moment its age-verification duties took effect on 25 July, that reassurance began to unravel.
Social media sites, search engines, and video-sharing services are now legally required to shield under-18s from content deemed harmful to their mental or physical well-being. Failure to comply risks fines of up to £18 million or 10% of global turnover, whichever is greater.
At the heart of the regime is a requirement to implement “highly effective” age checks. If a platform cannot establish with high confidence that a user is over 18, it must restrict access to a wide category of ‘sensitive’ content, even when that content is entirely lawful. This has major implications for platforms where news footage, protest clips or political commentary appear in real time.
Ofcom’s guidance makes clear that simple box-ticking exercises – declaring your age or agreeing to terms of service – will no longer suffice. Instead, platforms are expected to use tools such as facial age estimation, ID scans, open banking credentials, or digital identity wallets.
The Act also pushes companies to filter harmful material before it appears in users’ feeds. Ofcom’s broader regulatory guidance warns that recommender systems can steer young users toward material they didn’t ask for. In response, platforms may now be expected to reconfigure their algorithms to filter out entire categories of lawful expression before they reach underage or unverified users.
One platform already moving in this direction is X. Its approach offers a revealing – and potentially sobering – glimpse of where things may be heading. The company uses internal signals, including when an account was created, any prior verification, and behavioural data, to estimate a user’s age. If that process fails to confirm the user is over 18, they are automatically placed into a sensitive content filtering mode. As the platform’s Help Center explains: “Until we are able to determine if a user is 18 or over, they may be defaulted into sensitive media settings, and may not be able to access sensitive media.”
This system runs without user opt-in and applies at scale. Depending on how X classifies it, filtered material may include adult humour, graphic imagery, political commentary, or footage of violence.
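X has not published the logic behind this process, but the behaviour it describes amounts to a default-deny rule: unless adulthood can be established with high confidence, restriction applies. The sketch below is a minimal illustration of that pattern in Python – the signal names, confidence threshold, and function are assumptions made for explanation, not X’s actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: the signals, threshold, and logic are illustrative
# assumptions, not X's real system.
@dataclass
class AgeSignals:
    previously_verified: bool        # e.g. a past ID or payment check
    estimated_age: Optional[float]   # model estimate from behavioural data, if any
    confidence: float                # how sure the estimate is, 0.0-1.0

def restrict_sensitive_media(signals: AgeSignals, threshold: float = 0.95) -> bool:
    """Default-off gate: restrict unless age 18+ is established with high confidence."""
    if signals.previously_verified:
        return False                 # age already confirmed elsewhere
    if (signals.estimated_age is not None
            and signals.estimated_age >= 18
            and signals.confidence >= threshold):
        return False                 # high-confidence adult estimate
    return True                      # everything else defaults to restriction

# A user with no usable estimate is filtered by default, not on evidence of being a minor.
print(restrict_sensitive_media(AgeSignals(False, None, 0.0)))   # True
print(restrict_sensitive_media(AgeSignals(False, 27.0, 0.98)))  # False
```

The point of the sketch is the final branch: absence of evidence about a user’s age is treated the same as evidence that they are underage.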
And already, there are signs that lawful content is quietly being screened out.
One example came on 25 July, the day the Act came into force, during a protest outside the Britannia Hotel in Seacroft, Leeds, where asylum seekers are being housed. A video showing police officers restraining and arresting a protester was posted on X, but quickly became inaccessible to many UK-based users. Instead, viewers saw the message: “Due to local laws, we are temporarily restricting access to this content until X estimates your age.”
West Yorkshire Police denied any involvement in blocking the footage. X declined to comment, but its AI chatbot, Grok, indicated the clip had been restricted under the Online Safety Act due to violent content. Though lawful and clearly newsworthy, the footage was likely flagged by automated systems intended to shield children from real-world violence.
In theory, adult users can regain access by submitting to age verification. At present, though, X offers no way to initiate that process: age checks are being rolled out gradually and, for now, only when triggered by opaque internal signals. As a result, many users appear to be defaulted into restricted mode with no clear route back to full access. Even where verification becomes available, uptake may remain limited – not necessarily because users reject it outright, but because defaults tend to stick: behavioural research consistently shows that most people never change pre-selected settings.
What appears to be emerging isn’t just a two-tier internet, but something subtler and more insidious: a default-off model of speech and expression, where access to lawful content is no longer presumed but withheld until certain hurdles are cleared. On platforms like X, the door is currently closed before users even approach it. Elsewhere, full access depends on navigating a system of checks and classifications. Either way, the longstanding assumption that legal speech should be visible by default is being quietly dismantled.