How will the Online Safety Bill affect my online service?

No-one could fault the ambition of the new Online Safety Bill. It aims to keep children safe, reduce online racism, fraud and harassment, protect democracy and enshrine free speech. All of these are laudable aims, but those who practise in the field of online publication and safety know how hard it has historically been to balance the twin objectives of protecting free speech and curtailing expression that does harm in one way or another.

The legislation stems from a sense that, in contrast to those who cause harmful material to be published or shared, the large providers of information society services that facilitate much of that publication or sharing cannot be held directly to account for it. It is a separate, albeit important, question whether it is desirable or necessary to be able to hold intermediaries to account, or whether doing so is likely to stifle innovation and limit the availability of platforms for communication and expression. The government has decided that there is no better way of curtailing these harms than legislation that introduces a further layer of direct liability (in addition to the liabilities of the authors/publishers themselves) on those who enable publication of, among other things, some extremely troubling and damaging content.

The objective of the legislation

The draft Online Safety Bill (OSB), which has now been published, has taken its cue in a couple of important respects from another recent piece of wide-ranging regulatory legislation: the GDPR. Like the GDPR, the OSB seeks to regulate not by narrowly prescribing the boundaries of what is and is not permissible, but by defining in wide-ranging terms the objective of the legislation and requiring organisations that fall within its scope to conduct themselves consistently with those aims. The impact of this can perhaps most clearly be seen from the fact that the legislation contemplates (and indeed depends to some extent on) the subsequent preparation and adoption of a number of sector-specific codes of practice that will implement its broad-brush principles in more concrete terms.

Secondly, like the legislation stemming from the NIS Directive, the OSB contains a sanctions regime that is calculated to attract headlines and operate as a potentially powerful deterrent if it makes it through into the adopted legislation intact, namely that an infringing entity will face the possibility of fines of up to whichever is the higher of £18 million or 10% of the organisation’s annual global turnover.
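To put that headline sanction in context, the sketch below (Python, purely for illustration; the function name and the worked turnover figure are invented for the example) shows how the cap scales once annual global turnover passes £180 million, the point at which the 10% limb overtakes the fixed £18 million figure.

```python
def maximum_fine_cap(annual_global_turnover_gbp: float) -> float:
    """Illustrative only: the OSB's headline cap is the higher of
    £18 million or 10% of annual global turnover."""
    fixed_cap_gbp = 18_000_000
    return max(fixed_cap_gbp, 0.10 * annual_global_turnover_gbp)

# A hypothetical business turning over £500m globally faces a potential
# cap of £50m rather than £18m.
print(f"£{maximum_fine_cap(500_000_000):,.0f}")  # £50,000,000
```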

The uncertainty generated by vague and broad-brush legislative prohibitions, coupled with the prospect of eye-wateringly large fines, is likely to drive any number of businesses operating in the online space to seek urgent and detailed advice on their exposure to these liabilities and the steps that they can take to manage or eliminate the risks. As such, it seemed sensible to start to think about the types of questions which such clients might have in practice, and the sorts of answers which practitioners might currently be able to give, as a prism through which to assess the likely efficacy of this promised new legislation and to think about some of the consequences, intended or unintended, which might flow from its enactment in its current form.

1. Will my service be a regulated service?

This is, of course, going to be the first question from any business that operates online where some part of its function is the delivery of content produced by others. The legislation is unlikely to extend to online retailers, blogs or showcase sites that only contain content originated by their owners. Equally, though, it is clear from the broad terms in which it is framed that its impact will not be confined to the top tier of organisations that are household names.

The likely ambit of the legislation can be seen in s. 2, which contains definitions of the two main categories of online service intended to be caught by its scope:

  • “User-to-user” services are those where a user may encounter content made available or shared by another user. Note that understanding this definition requires you also to understand what it means to “encounter” (to “read, view, hear or otherwise experience”) “content” (“anything communicated by means of an internet service, whether publicly or privately, including written material or messages, oral communications, photographs, videos, visual images, music and data of any description”) for which you need to look to s. 137 of the Bill.
  • “Search” services are services that include a search engine and are not user-to-user services (this time you need to cross-refer to s. 134 for the definition of a “search engine”: “a service or functionality which enables a person to search some websites or databases”, which might mean “all” websites or databases, but excludes services that only search a single website or database).

Not every service that meets one of the above definitions will be caught. Only “regulated” user-to-user or search services are in scope. What this means is defined in s. 3 of the Bill and includes any service which “has links to the United Kingdom” and is not exempt. A service has links to the UK if a significant number of its users are located there (“significant” not being a defined term), or if the UK is its target market or one of its target markets. Even if a service does not meet either of these poorly defined criteria, it may nevertheless be treated as having links to the UK where the service is accessible to users in the UK and there are reasonable grounds for believing that there is a material risk of significant harm to users from content that is on a user-to-user service, or accessible via a search service (again, a number of the key terms needed to decode this provision are not defined).
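For clients who think better in flowcharts than in statutory language, the decision logic might be summarised roughly as in the sketch below (Python; a simplified and non-authoritative reading of s. 3, in which the field names are ours and each boolean conceals a judgment call, since the Bill leaves “significant”, “material” and “reasonable grounds” undefined).

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    # Each field corresponds loosely to one limb of the s. 3 test; the
    # thresholds behind them are not defined in the Bill.
    significant_number_of_uk_users: bool
    uk_is_a_target_market: bool
    accessible_to_uk_users: bool
    material_risk_of_significant_harm_to_uk_users: bool
    exempt: bool  # e.g. e-mail, SMS, review platforms, public sector services

def has_links_to_the_uk(s: ServiceProfile) -> bool:
    if s.significant_number_of_uk_users or s.uk_is_a_target_market:
        return True
    # Fallback limb: UK accessibility plus a material risk of significant harm.
    return s.accessible_to_uk_users and s.material_risk_of_significant_harm_to_uk_users

def is_regulated(s: ServiceProfile) -> bool:
    """A regulated service has links to the UK and is not exempt."""
    return has_links_to_the_uk(s) and not s.exempt
```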

Just before your client’s eyes glaze over, they are likely to latch on to the important proviso above, namely that services are only regulated if they are not exempt. Thus services that provide e-mail and text messaging, or platforms for online reviews, are likely largely to be exempt, as too are user-to-user services that are provided by public sector bodies or foreign governments (s. 3 and Sch. 1 of the Bill). It is worth noting, and this is a general fault with the OSB, that a number of these concepts are seemingly assumed to be so commonplace that no great effort needs to be made to define them specifically; a recipe for the legislation to be outpaced by technology in fairly short order, further increasing the uncertainty about its scope and effect.

2. As a regulated service, what are my obligations?

Once it is established that a service provider falls within the scope of the legislation and is not exempt, the Bill sets out a number of wide-ranging obligations which vary depending on whether the service is user-to-user or a search service, and whether the service is likely to be accessed by children or is a “Category 1” service. The categorisation of services is a topic for a whole separate article at a later date, but suffice it to say that this will be undertaken as part of the preparation of a register of service providers by OFCOM, after the legislation has been adopted and regulations made under it, in accordance with general criteria referred to in s. 59 and Sch. 4 of the OSB. Broadly, the more users a service has, and the greater the assessed risk presented by the range of content it makes available and the way in which it is operated, the higher the category it is likely to be assigned and the more onerous the obligations on it.

Even without categorisation, the obligations on user-to-user services are very wide-ranging and will be extremely difficult to comply with. These include an obligation to undertake, maintain and update an “illegal content risk assessment”, which requires an organisation to assess the risk profile of its services by reference to its user base, the functionality of its algorithms, the extent to which service functionalities present or increase the risk of users encountering illegal content, and the nature and severity of the harm that might be caused to users as a result of these (and other) factors. This is one of a number of duties (see ss. 7 and 9 of the OSB generally) in relation to illegal content, and these in turn are only one part of the suite of obligations to which every provider of a regulated user-to-user service will be subject: they also have duties to safeguard rights to freedom of expression and privacy; to protect content of democratic importance; to enable reporting of risks; to ensure adequate redress for harm; and to keep records and review compliance. These core duties are also likely to be supplemented in various ways depending on the precise nature of the service and the extent to which it is aimed at children or, crucially, to which children are “likely” to access it (see, for example, s. 10 of the OSB).
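As a rough illustration of what an “illegal content risk assessment” has to capture, the sketch below (Python; the structure, field names, scoring scale and review cadence are all assumptions of ours rather than anything prescribed by the Bill) simply records the factors listed above in a form that can be maintained and kept up to date.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IllegalContentRiskAssessment:
    # The factor names below paraphrase those listed in the Bill; the free-text
    # fields, severity scale and review date are invented for illustration.
    user_base_profile: str
    algorithm_functionality_notes: str
    risk_increasing_functionalities: list[str] = field(default_factory=list)
    harm_severity: int = 1  # assumed 1 (low) to 5 (severe) scale, not the Bill's
    last_reviewed: date = field(default_factory=date.today)

    def needs_review(self, max_age_days: int = 90) -> bool:
        """The duty is to maintain and update the assessment; the 90-day
        cadence here is an assumption, not a statutory requirement."""
        return (date.today() - self.last_reviewed).days > max_age_days
```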

Nor are the obligations on search services any less challenging to comply with. Many of the headline obligations are the same, but rather than relating to the risks presented by content which users might be able to share with one another on a particular platform, search services are required to assess the risk in respect of content that might be hosted elsewhere but returned in the results of searches undertaken by their users. Again, the core duties around illegal content, safeguarding expression or democratic/journalistic content, reporting, redress, record-keeping and review are likely to be supplemented, particularly where the service is likely to be used by children. These obligations extend not only to mitigating the harm that might be suffered by children encountering illegal or other harmful content, but also to actively preventing them from encountering priority illegal content and protecting them from encountering other potentially harmful content.

3. What will this mean for my business? 

Notwithstanding, indeed because of, the uncertainty inherent in the answers to the first two questions, this third question is perhaps the most problematic to answer. The burden of building and then operating wide-ranging content moderation, which will involve implementing algorithmically some currently very vague parameters, ought not to be underestimated. The government’s own assessment of the likely commercial impact on the industry is that this aspect alone will cost something in the region of £1.7 billion.

Those costs, which are unlikely to be allocated proportionately to scale, present particular problems for innovators and new entrants into any of these markets, and are likely to cement the current dominant positions of the few “Big Tech” operators whose pockets are deep enough to finance the development work necessary to meet these challenges, and who have resources that they can divert from core, business-critical innovation. Similarly, the prospect of very substantial fines in the event of non-compliance may well make it unattractive for businesses without the means to meet such liabilities to operate in this area.

Even leaving aside these industry-level impacts, the impact at the level of individual businesses is likely to be highly significant. Very few of the obligations that the legislation will impose can be implemented overnight. They are going to divert personnel and activity from other parts of the business for protracted periods, and require businesses to keep abreast of what looks set to be a rapidly evolving regulatory landscape comprising primary and secondary legislation, formal codes of practice and more informal guidance. Challenges will arise in relation to decisions about whether or not particular service providers fall to be regulated at all or, if they are, how they are categorised. Many of the terms that are central to the operation of the legislation will also almost certainly need to be litigated before their precise meaning and scope can be established with any certainty.

Concern about online harms is nothing new. Previous attempts to legislate in this area have faltered or been derailed. But the reality is that the harms that may be encountered online are only becoming more extensive and more troubling, particularly in relation to those who are most vulnerable. The government’s view is plainly that those harms are not being appropriately addressed through self-regulation and, given that view, it can be expected to press on with this latest attempt to legislate in this area. But care is required. The significant uncertainty, and the very substantial impacts that the implementation of this legislation may have, must raise the very real concern that legislation in this form will not be adhered to, and consequently that it will not achieve the very ambitious objectives that it has set for itself. Whether a service provider’s principal objective is furthering free speech, protecting its users or simply maximising its profits, this legislation in its present form is likely to make the fulfilment of those objectives more challenging. This means that ambivalence, or indeed ignoring the issue and hoping that it goes away, is not an option. Any business that expects to be affected by these rules needs to engage constructively to improve the present state of the legislation, if it is to have any hope of being both intelligible and effective.

Will Richmond-Coggan is a director in the data protection team at Freeths LLP. He acts for clients from start-ups to multinationals on a wide range of strategic, commercial and contentious data protection and privacy issues. Email William.Richmond-Coggan@freeths.co.uk. Twitter @Tech_Litig8or.
