How Much Better is the New Version of the Online Safety Bill? (continued…)

As you may have guessed, the key phrase there is “for the time being”. It’s worth bearing in mind that a future Labour government could change that with a single one-line amendment to the Online Safety Act. Given there’s now a chance Labour will form a minority UK government supported by the SNP after the next election, this issue may well come back to haunt us in the not-too-distant future.

In sum, the latest version of the Bill won’t criminalise saying something, whether online or offline, that causes “psychological harm amounting to at least serious distress” or, in Kemi Badenoch’s words, “hurt feelings”. For that, at least, we should be grateful.  

Another positive in Culture Secretary Michelle Donelan’s revised version of the Bill is that clause 13, which would have forced providers to set out in their terms of service how they intended to ‘address’ content that’s legal but harmful to adults, has now been scrapped. That’s good news, of course, but we shouldn’t exaggerate how much of a win this is.

Some papers are reporting this as removing any reference to ‘legal but harmful’ from the Bill, yet the phrase ‘legal but harmful’ never actually appeared in any version of the Bill.

It’s also worth pointing out that, thanks to an amendment the FSU successfully lobbied for in July, the previous version of the Bill would have made it clear that one of the ways providers could ‘address’ this content would have been to do absolutely nothing. (You can watch Adam Afriyie MP setting out that amendment in the House of Commons here.) 

The Times reported that “the Government has dropped plans to force social media and search sites to take down material that is considered harmful but not illegal,” but it never planned to force sites to do that.  

The FSU’s objection to clause 13 was more nuanced: if the Government published a list of legal content it considered harmful to adults and created an obligation on providers to say how they intended to ‘address’ it, that would ‘nudge’ them into removing it.

Even though the option to do nothing was available to providers in the previous version of the Bill, it would have been a brave social media company that chose this option, given that the Government had designated the content as harmful to adults. 

Another objection to the previous version was that, according to its provisions, the list of legal content that was harmful to adults was going to be included in a statutory instrument rather than in the Bill itself. One concern the FSU and other free speech groups had about this arrangement was that it created a hostage to fortune: once the initial list of legal content had been drawn up, it could easily be added to by another statutory instrument, creating a ratchet effect.

Insofar as the person occupying the role of Secretary of State now — or in the future — would be able to use this mechanism to lean on the tech giants to remove speech they disliked or disapproved of, the Government of the day would possess unprecedented powers to restrict speech, particularly if it removed the option to do nothing in response to the enlarged Index Librorum Prohibitorum.  

Even though the previous version of the legislation had only reached Committee stage in the House of Commons, this ratchet effect was already visible. Back in June, for instance, SNP and Labour politicians on the Committee scrutinising the Bill laid an amendment to include “‘health-related misinformation and disinformation’ as a recognised form of lawful but ‘harmful’” speech. That found its way into the ‘indicative list’ of legal content that was going to be designated as harmful to adults, published by Nadine Dorries, then the Digital Secretary. (FSU General Secretary Toby Young has written about this ‘ratchet effect’ for the Critic.)

In the new version of the Bill, the list of legal content that’s harmful to adults won’t be included in a supplementary piece of legislation, but on the face of the Bill. (It’s not being described as ‘legal but harmful’ by the Government, but that’s essentially what it is.) 

In the latest DCMS press release (29th November), the new list of ‘lawful but awful’ material is set out as follows: “legal content relating to suicide, self-harm or eating disorders, or content that is abusive, or that incites hatred, on the basis of race, ethnicity, religion, disability, sex, gender reassignment or sexual orientation.” That’s almost identical to the ‘indicative list’ of legal but harmful adult content set out by Nadine Dorries in July, so reports of its death are greatly exaggerated. (Although, credit where credit’s due, the list no longer makes reference to “health-related misinformation”.) The FSU expects similar wording to appear in the new version of the Bill when it returns to Parliament next week.

Another key change is that the new Bill attempts to shift the locus of responsibility from ‘paternalist providers’ to ‘empowered users’. Instead of saying how they intend to ‘address’ harmful content in their terms of service, providers will now have to say what tools they’re going to make available to users who don’t want to be exposed to it. That’s what Michelle Donelan means when she says in the Telegraph that “I have removed ‘legal but harmful’ in favour of a new system based on choice and freedom.”

What we’re talking about here is essentially the social media equivalent of a ‘safe browsing’ mode. That’s an improvement, not because the previous version of the Bill forced the big social media platforms to ban such content outright (it didn’t), but because the new version won’t ‘nudge’ them into doing so in the way the previous one did.

Essentially, the Bill is signalling to providers that if they want to make this content available to adult users in an unrestricted form they can, and the Government would have no objection, even though it considers the content harmful. The only proviso is that if providers do intend to make this content available to their adult users, they have to give them the tools to restrict it if they wish to do so.

On the face of it, this is an attractive approach. It empowers adult users to choose what content they’d like to see, with the state no longer nudging providers into restricting legal but harmful content for everyone, regardless of their appetite for this material. 

Nevertheless, the FSU has some concerns about this ‘user empowerment’ model. For instance, there’s a significant risk that the big providers (e.g. Facebook, Twitter, YouTube) will make the ‘safe’ mode their default setting, so if adult users want to see ‘lawful but awful’ content they will have to ‘opt in’. 

That may mean perfectly lawful yet politically contentious views (e.g. gender-critical statements that transwomen are not women, or that transwomen should be banned from women’s spaces) will be blocked by default, since woke identity groups will argue that any such views constitute abuse or incitement to hatred based on their protected characteristics.

Of course, users will have the option of adjusting their settings so they can see that content, but some won’t want to in case, say, a colleague sees something ‘hateful’ over their shoulder and reports them to HR.

And what about those who won’t even be aware they have a choice, or are aware but don’t know how to do anything about it?   

We know that’s likely to be a lot of people thanks to research carried out by behavioural scientists on what’s known as ‘choice architecture’, i.e., the idea that the way in which customers are presented with choices will influence their subsequent decision-making. As countless studies in this area have demonstrated, one of the most powerful tools available to organisations wishing to ‘nudge’ consumers down certain behavioural pathways is the humble ‘default setting’. That’s because consumers tend not to take active steps to change a system’s built-in default.

The devil here will be in the detail of what each social media platform’s version of ‘safe browsing’ looks like. If ‘safe mode’ becomes the default setting, for instance, how easy will it be to switch it off? As one recent study put it, “if defaults have an effect because consumers are not aware that they have choices… [they] impinge on liberty.”  

And what about politically contentious websites like The Conservative Woman (TCW) and Novara Media – perhaps even the Spectator? Will their content be judged ‘unsafe’ by providers and only made available to users who ‘opt in’? That sounds far-fetched, but TCW was blocked by default by the Three mobile network earlier this year. 

When the Free Speech Union looked into it, we discovered this was because the British Board of Film Classification, to which Three had outsourced the job of deciding which websites were ‘safe’, had given TCW an 18 rating. So Three restricted access to TCW for its users because Three’s default settings excluded any website the BBFC classed as suitable only for adults.

In light of this, it’s not unrealistic to think that under the new system politically contentious websites like TCW will be judged unsafe by the big social media platforms and banned by default (although not outright). And that will have a negative impact on their commercial viability. Not only will it limit their reach, it will make it harder for them to get advertising because most companies are concerned about ‘brand safety’ and won’t want to advertise on websites that are deemed unsafe by companies like Facebook. To a certain extent, that happens already, but it could get significantly worse after this Bill becomes law. 

There are other free speech issues with the new Bill that Big Brother Watch has flagged up. We’re also concerned about what impact the new ‘user empowerment’ system will have on the duties the Bill imposes on providers to protect ‘content of democratic importance’ and ‘journalistic content’. Will they now only be obliged to protect that content for users who’ve dialled their safety settings down and not in the default, ‘safe’ mode? If so, that’s a worry.

In one respect at least, free speech will be better protected in the new version. The previous one said providers would have “a duty to have regard to the importance of protecting users’ right to freedom of expression within the law”. That was pretty toothless, since ‘have regard’ is the least onerous of the legal duties. In the new version, we’re told, this has been beefed up to ‘have particular regard’, which is better. (That’s something the Free Speech Union has been lobbying for.)

Taken as a whole, the new version is, the FSU believes, an improvement on the previous one. Michelle Donelan has had to steer a difficult path between those of us lobbying for more free speech protections and a vast array of groups petitioning her to make the Bill more restrictive, including factions within her own party.

Nevertheless, we still have concerns about the Bill and will be scrutinising it carefully when the new version is published. If, as we suspect, the duties to protect content of democratic importance and journalistic content have lost some of their force, we hope to work with parliamentarians in the Commons and the Lords to reinvigorate them – and, as usual, we’ll be asking for the help of our members and supporters to try to get their MPs on side.
