
The Online Safety Bill returns to the Commons next week (Mail, Epoch Times), and the latest iteration of the legislation seems, on the face of it, to be an improvement on the previous version — although the devil will be in the detail.
Plans to introduce a new harmful communications offence in England and Wales, making it a crime punishable by up to two years in jail to send or post a message with the intention of causing “psychological harm amounting to at least serious distress”, have been scrapped (iNews, TechCrunch).
It means that the new version of the Online Safety Bill won’t criminalise saying something, whether online or offline, merely because it causes distress. That’s good news. As Kemi Badenoch pointed out during this summer’s Conservative Party leadership contest, it’s not the role of government to be “legislating for hurt feelings” (FT).
The bad news is that the new communications offence was intended to replace some of the more egregious offences in the Communications Act 2003 (henceforth ‘CA’). As things stand, the offences in the Malicious Communications Act 1988 (‘MCA’) will be repealed, but not section 127 of the CA.
The latter will remain on the statute book, and the FSU will continue to campaign for its removal. The restrictions the CA imposes on what you can say, whether online or offline, are, we believe, out of date and may, in part, be incompatible with the European Convention on Human Rights. That’s a claim we’re hoping to test in Strasbourg as part of FSU member Joe Kelly’s forthcoming appeal against his conviction and sentence for posting a tweet that contravened section 127(1)(b) of the CA. (You can read about that case here; our latest briefing document on the CA is available here.)
That said, this latest tweak to the Online Safety Bill does at least solve one problem the FSU has previously flagged up to the Government about the impact the legislation was likely to have in a multi-national state like the UK. Thanks to a little-known flaw, the legislation required category 1 providers to remove content everywhere in the UK if it was illegal in any part of the UK.
The critical issue here was that the Online Safety Bill would only have repealed the relevant MCA offences in England and Wales, not Northern Ireland, thereby creating a risk that social media platforms within scope of the new online regulatory regime would have had to remove not only content that was unlawful under the new harmful communications offence, but also content that was unlawful under the existing MCA, since that Act would remain on the statute book in Northern Ireland. In other words, when it came to what you could and couldn’t say online, the new harmful communications offence wouldn’t have replaced one set of rules prohibiting speech with another; it would simply have added a new set.
Thankfully, that problem won’t now arise.
Another positive is that the clause in the Bill which obliges providers to remove content everywhere in the UK if it’s illegal anywhere in the UK won’t now apply to the Hate Crime and Public Order (Scotland) Act 2021, a piece of legislation that FSU Scotland Advisory Council member Jamie Gillies has previously described as “an authoritarian mess” (Spiked). The Government amended the Bill in July, so providers won’t be obliged to remove speech that’s prohibited by new laws made by the devolved parliaments (Critic).
Fears that Nicola Sturgeon might become the content moderator for the whole of the UK can be laid to rest – at least for the time being.
As you may have guessed, though, the key phrase there is “for the time being”. It’s worth bearing in mind that a future Labour government could change that with a single, one-line amendment to the Bill. And given there’s now a chance Labour will form a minority UK government supported by the SNP after the next election, this is an issue that may well come back to haunt the Conservative Party in years to come.
In sum, the latest version of the Bill won’t criminalise saying something, whether online or offline, that causes “psychological harm amounting to at least serious distress” or, in Kemi Badenoch’s words, “hurt feelings”. For that, at least, we should be grateful.
Another positive in the Culture Secretary Michelle Donelan’s revised version of the Bill is that clause 13, which would have forced providers to set out in their terms of service how they intended to “address” content that’s legal but harmful to adults, has now been scrapped. That’s good news, of course, but we shouldn’t exaggerate how much of a win this is. Some papers are reporting this as removing any reference to ‘legal but harmful’ from the Bill, yet the phrase ‘legal but harmful’ has never actually appeared in any version of the Bill.
It’s also worth pointing out that, thanks to an amendment the FSU successfully lobbied for in July, the previous version of the Bill made clear that one of the ways providers could ‘address’ this content was to do absolutely nothing (you can watch Adam Afriyie MP setting out that amendment in the House of Commons here).
The Times this week suggested that “the Government has dropped plans to force social media and search sites to take down material that is considered harmful but not illegal”, but that’s not correct. It never planned to force sites to do that.
The FSU’s objection to clause 13 was always more nuanced. It was that if the Government published a list of legal content it considered harmful to adults and created an obligation on providers to say how they intended to ‘address’ it, that would ‘nudge’ them to remove it.
Even though the option to do nothing was available to providers in the previous version of the Bill, it would have been a brave social media company that chose this option, given that the Government had designated the content as harmful to adults.
Another objection to the previous version was that, according to its provisions, the list of legal content harmful to adults was going to be set out in a statutory instrument (SI), not in the Bill itself. One concern the FSU and other free speech groups had about this arrangement was that it created a hostage to fortune: once the initial list of legal content had been drawn up, it could easily be added to by another statutory instrument, thus creating a ratchet effect.
Because the person occupying the role of secretary of state, now or in the future, would have been able to use the mechanism of an SI to lean on the tech giants to remove speech they disliked or disapproved of, the Government of the day would have possessed unprecedented powers to restrict speech, particularly if it removed the option to do nothing in response to the enlarged Index Librorum Prohibitorum.
Despite the previous version of the legislation having only reached Committee stage in the House of Commons, this ratcheting effect on our civil liberties had already started to happen in real time. Back in June, for instance, SNP and Labour politicians on the Committee scrutinising the Bill tabled an amendment to include “‘health-related misinformation and disinformation’ as a recognised form of lawful but ‘harmful’” speech. That found its way into the ‘indicative list’ of content that was going to be designated legal but harmful to adults, published by Nadine Dorries, then the Culture Secretary. (FSU General Secretary Toby Young has written about the Bill’s ‘ratchet effect’ for the Critic.)
In the new version of the Bill, however, the list of legal content that’s harmful to adults won’t be included in a supplementary piece of legislation; it will appear on the face of the Bill. (The Government isn’t describing it as a list of ‘legal but harmful’ content, but that’s essentially what it is.)
In the latest DCMS press release (29th November), the list is set out as follows: “legal content relating to suicide, self-harm or eating disorders, or content that is abusive, or that incites hatred, on the basis of race, ethnicity, religion, disability, sex, gender reassignment or sexual orientation”. That’s almost identical to the ‘indicative list’ of legal but harmful adult content set out by Nadine Dorries in July, which rather suggests that reports of its death are greatly exaggerated. (Although, credit where credit’s due, the list no longer makes reference to ‘health-related misinformation’.) The FSU expects similar wording to appear in the new version of the Bill when it returns to Parliament next week.
Another key change is that the new Bill attempts to shift the locus of responsibility away from ‘paternalist providers’ and towards ‘empowered users’. Instead of saying how they intend to ‘address’ harmful content in their terms of service, providers will now have to say what tools they’re going to make available to users who don’t want to be exposed to it. That’s what Michelle Donelan means when she says in the Telegraph that “I have removed ‘legal but harmful’ in favour of a new system based on choice and freedom”.
What we’re talking about here is essentially the social media equivalent of a ‘safe browsing’ mode. That’s an improvement, not because the previous version of the Bill would have forced the big social media platforms to ban this content outright (it wouldn’t), but because the new version won’t ‘nudge’ them into doing so in the way the previous one did.
Essentially, the Bill is signalling to providers that if they want to make this content available to adult users in an unrestricted form they can, and the Government will have no objection, even though it considers the content harmful. The only proviso is that if providers do intend to make it available to their adult users, they have to give those users the tools to restrict it if they wish to do so.
On the face of it, this is an attractive approach. It empowers adult users to choose what content they’d like to see, with the state no longer nudging providers into restricting legal but harmful content for everyone, regardless of their appetite for this material.
Nevertheless, the FSU has some concerns about this ‘user empowerment’ model. For instance, there’s a significant risk that the big providers (e.g., Facebook, Twitter, YouTube) will make the ‘safe’ mode their default setting, so if adult users want to see ‘lawful but awful’ content they will have to ‘opt in’.
That may result in perfectly lawful yet politically contentious views (e.g., gender-critical statements about transwomen not being women, or about the need to exclude transwomen from women’s spaces) being blocked by default, since woke identity groups will argue that any such views constitute abuse or incitement to hatred on the basis of their protected characteristics.
Of course, users would have the option of adjusting their settings so they can see that content, but some won’t want to in case, say, a colleague sees something ‘hateful’ over their shoulder and reports them to HR.
And what about those who won’t even be aware they have a choice, or are aware but don’t know how to do anything about it?
We know that’s likely to be a lot of people, thanks to research carried out by behavioural scientists on what’s known as ‘choice architecture’, i.e., the idea that the way choices are presented to consumers influences the decisions they subsequently make. As countless studies in that area have demonstrated, one of the most powerful tools available to organisations wishing to ‘nudge’ consumers down certain behavioural pathways is the humble ‘default setting’. Why? Because consumers tend not to take active steps to change a system’s built-in defaults.
So the devil here will be in the detail of what each social media platform’s version of ‘safe browsing’ looks like. If ‘safe mode’ becomes the default setting, for instance, how easy will it be to switch it off? It’s an important question. As one recent study put it, “if defaults have an effect because consumers are not aware that they have choices… [they] impinge on liberty”.
And what about politically contentious websites like The Conservative Woman (TCW) and Novara Media – perhaps even the Spectator? Will their content be judged ‘unsafe’ by providers and only made available to users who ‘opt in’? That sounds far-fetched, but TCW was blocked by default by the Three mobile network earlier this year.
When the FSU looked into it, we discovered it was because the British Board of Film Classification, to which Three had outsourced the job of deciding which websites were ‘safe’, had given TCW an 18 rating. So Three restricted access to TCW for its users because its default settings excluded any websites the BBFC classed as only suitable for adults.
In light of this, it’s not unrealistic to think that under the new system politically contentious websites like TCW will be judged unsafe by the big social media platforms and blocked by default (though not banned outright). That will have a negative impact on their commercial viability. Not only will it limit their reach, it will also make it harder for them to attract advertising, because most companies are concerned about ‘brand safety’ and won’t want to advertise on websites deemed unsafe by companies like Facebook. To a certain extent that happens already, but it could get significantly worse after this Bill becomes law.
There are other free speech issues with the new Bill that Big Brother Watch has flagged up.
Taken as a whole, however, the FSU believes this version of the Online Safety Bill is an improvement on the previous version. Michelle Donelan has had to steer a difficult path between those of us lobbying for more free speech protections and a vast array of groups petitioning her to make the Bill more restrictive, including factions within her own party.