The Online Safety Bill – the quest for clarity continues

In a June 2021 article in the Newsletter I looked at the then-current draft of the Online Safety Bill, providing a general overview of the government’s aims in putting forward that draft legislation and considering the challenges presented by some central points of uncertainty at the heart of the bill.

Since that time, there have been two major developments. The Joint Select Committee appointed to study the draft bill in its pre-legislative stage reported towards the end of 2021, making a number of recommendations as to the content of the bill. After that, in the spring of this year, the government published its response to those recommendations, together with a revised bill, which was introduced to Parliament to begin the formal legislative process.

The Joint Select Committee’s recommendations

The recommendations in the Select Committee’s report were presented as a cohesive package of alterations intended to further the stated objectives of the proposed legislation. The committee was at some pains to emphasise (in bold, within the conclusions) that the government should not approach these recommendations as a series of single, isolated proposals, and that the benefit of the changes proposed would be in danger of being lost if they were not adopted as a package. Needless to say, that warning has only partially been heeded.

The first important recommendation made was that the bill should be re-structured to emphasise the core principles (or “objectives”) which underpin much of the detail of the legislation. Although a wide range of such objectives were itemised, the committee recommended that a clear statement of the Bill’s core safety objectives was the most important priority, given how much of the remaining legislation was directed to the achievement of those objectives.

Another important recommendation was directed to the identification of reasonably foreseeable harms, imposing an obligation to design systems and processes in such a way that harms which are “reasonably foreseeable” are mitigated, if not eliminated. Some effort was made within the report (see eg para 82) to identify specific ways in which design-led solutions to various harms might be adopted, but the reality is that this will leave a significant burden on service providers, particularly as time goes by and new technologies or capabilities are developed where it is, perhaps, harder to anticipate all of the harms that might flow from them.

While some of the harms that might be addressed by “safety by design” might be more specific to individuals, the committee was also clear that its recommendations were intended to address wider societal harms as well. So, for example, there were recommendations directed to countering the spread of misinformation and disinformation. Interestingly, in the context of the government’s proposed digital literacy programme which accompanied the publication of the Online Safety Bill in its final form, the Select Committee emphasised that in its view digital literacy alone is not a solution to the spread of misinformation. Its recommendations included greater powers for Ofcom to tackle online misinformation by requiring service providers to enforce their terms of use more consistently, and also to design away features which accelerate the spread of misinformation or which facilitate online fraud.

Responding to one of the primary concerns with the pre-legislative draft of the Bill, the Select Committee specifically recommended that rather than delegating to service providers the responsibility for identifying what categories of harm ought to be policed, the Bill should do more to spell these out. While providers would still be obliged to put in place measures to guard against reasonably foreseeable harms, this would not be an open-ended requirement. It would instead be an obligation to guard against defined regulated activities, such as abuse or hate speech, harms likely to cause significant psychological distress, or, topically, promotion of misinformation likely to injure public health (e.g. vaccine conspiracy theories), among others. The Committee recognised that this recommendation would create a narrower range of obligations on providers, but considered that this was a worthwhile trade-off in ensuring that those obligations which were imposed were more robust, and that the categories of content harmful to adults that needed to be guarded against would be clearer and more easily understood.

In relation to the core provisions directed to child protection, the Committee recognised that there was already a regulatory test for the circumstances in which a child might be expected to access (or attempt to access) online content, within the Age Appropriate Design Code published by the ICO. Their recommendation was therefore that this same test should be adopted in considering whether the parts of the Online Safety Bill directed to child safety were applicable.

The Government’s response and the published Bill

A significant number of the Select Committee’s recommendations have been adopted in the final version of the Bill that has now been submitted to Parliament:

Overall objectives and structure

The government has made some attempt to simplify the structure of the legislation, while continuing to assert that some degree of complexity is unavoidable. In particular, it has taken on board the Select Committee’s recommendations about the simplification of the categories of harm relevant to adults, the importance of being clearer, on the face of the Bill, about the legislative basis for prohibiting those harms, and clarifying the duties to which platforms are subject. If the Bill is passed in its current form, there will still be a significant burden on platforms to understand precisely what harms they will need to guard against, and how those obligations might be met, but there will at least be some clearer parameters to that investigation.

Scope of the Bill

As noted above, one specific recommendation of the Select Committee was an expansion of the scope of the Bill to guard against online fraud. This recommendation has been accepted, and the government has expanded the scope of the legislation (for the most substantial service providers) to require those providers to have in place systems to mitigate or prevent the use of their platforms to perpetrate fraud against other users.

Online harms to adults

Responding to the Select Committee’s recommendations, the Government has expressly identified the “priority illegal offences” which providers are required to guard against. These will include new offences recommended by the Law Commission, such as harm-based or threatening communications, and “cyber-flashing”. As detailed above, this has the effect of creating a clearer and more robust regime in relation to the core harms which are the focus of the legislation. Providers will still, however, need to be alert to the potential proliferation of a range of other harms that might be identified in subordinate legislation or non-statutory guidance over time, and the language of the legislation, as presently drafted, still leaves considerable scope for providers to be subject to obligations in relation to harms which they will be expected to have proactively identified and designed against.

Online harms to children

This was already one of the most robust sections of the legislation, for understandable reasons. The Government has nevertheless included some further provisions in response to the Select Committee’s report, including a requirement on commercial pornography providers to prevent children from accessing their services. There is also some welcome effort to clarify and standardise the language of the legislation in connection with the obligations imposed on providers whose content might be expected to be accessed by children. The Government did, however, decline to align the tests within the Bill more explicitly with those in the Age Appropriate Design Code. The Government’s view on this is that the test in the Bill needs to be more stringent and more carefully articulated because of the more serious consequences that follow from a provider being within its scope. While that may be right, it leaves providers facing a continuing degree of regulatory misalignment between the circumstances in which the data protection and online harms regimes will each require them to have regard to the interests of prospective child visitors to their platforms or sites.

Safety by design

The Government’s preference in relation to these recommendations has been to leave the matter in the hands of Ofcom, which is the regulator tasked with developing the detailed codes which will underpin the primary legislation. But the Government has directed that Ofcom should liaise with the Information Commissioner in the preparation of its risk assessment guidance, which at least points to the potential for a joined-up and coherent approach across these two regulatory regimes. It is clear that if the Bill passes in anything like its current form, Ofcom will emerge as a significantly stronger regulator, with more wide-ranging and effective powers to secure compliance with the provisions of this legislation. But, as has been the case with the Information Commissioner, the powers to investigate, enforce and impose fines are of limited importance if the regulator is not sufficiently well-resourced to pursue that regulatory oversight activity.

Conclusion

The above is necessarily, at this stage, still a fairly high-level overview and update. The Bill has already passed through its first two readings in the House of Commons. It must still pass through the Committee stage and review by the House of Lords, which are likely to entail fairly substantial amendments. As such, it is probably still premature to examine the specific legislative terms in too much detail, but it will be important to revisit them later in the year. By that time the legislative picture will be clearer, and the true breadth of the regulatory burden likely to be placed on many service providers will have emerged. In the meantime, professional advisers will no doubt already be talking to clients about the scope for them to fall within the ambit of the legislation, as presently drawn, and starting to think with them about the work likely to be required over the next couple of years to ensure compliance with this far-reaching regime.

Will Richmond-Coggan is a director and solicitor-advocate at Freeths LLP, specialising in data, privacy and online harms. Email William.Richmond-Coggan@freeths.co.uk.