Global AI regulation

In the wake of an avalanche of publicity following the hugely successful roll-out of ChatGPT, governments around the world have been waking up to the transformative effects of generative AI tools on their societies, economies and legal systems. Stark warnings from leading industry figures such as Sam Altman, Elon Musk and Geoffrey Hinton about the potential future impact of next-gen AI technologies have prompted lawmakers to attempt to assess the risks and consider how to regulate the sector without stifling innovation.

In the UK, the government published a white paper in March 2023 suggesting ways of designing a framework for AI regulation. But it does not propose any specific new laws, and it has already been criticised as out of date. Some MPs are calling for the introduction of an AI bill, but none had materialised at the time of writing. Nevertheless, Prime Minister Rishi Sunak is attempting to take the lead in setting global standards for AI, recently meeting with US President Joe Biden and agreeing on cooperation to mitigate the risks of AI whilst harnessing its benefits, formalised in The Atlantic Declaration. The UK will also host the first Global Summit on AI Safety, due to take place later in 2023. Meanwhile, Russell Group universities have decided to regulate themselves whilst awaiting government action, drawing up a set of AI guiding principles.

One issue which is increasingly cropping up in discussions about AI regulation is the need for a unified global approach. A clear example of the futility of countries seeking to impose unilateral rules was the short-lived banning of ChatGPT by the Italian data protection authority: citizens simply turned to VPNs to circumvent the ban. So, bearing in mind the importance of international coordination, let’s consider some of the different approaches to AI regulation around the world thus far.

EU

The European Union has been a pioneer of internet regulation over the past couple of decades, so it comes as no surprise that it has introduced the world’s first comprehensive set of laws on AI. The EU AI Act, following on from a proposal for a legal framework on AI set out back in 2021 (over a year prior to the public release of ChatGPT), calls for a risk-based regulatory approach depending on the application of the particular AI tool, eg AI designed for use in medical devices or law enforcement will be more heavily regulated than music recommendation tools. But arguably the highest-risk use of AI – military systems – is outside the scope of the current proposed text of the regulation.

Subject to certain exceptions, AI systems will be banned altogether if they involve:

  • cognitive behavioural manipulation of people or specific vulnerable groups;
  • social scoring: classifying people based on behaviour, socio-economic status or personal characteristics; or
  • real-time and remote biometric identification systems, eg facial recognition.

Generative AI tools such as ChatGPT will be required to ensure that all AI-generated content is properly disclosed, akin to a digital watermark. Furthermore, developers will need to ensure that illegal content cannot be generated, and companies such as OpenAI will be obliged to publish summaries of any copyrighted data used for training purposes.
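To make the disclosure requirement concrete, below is a minimal sketch in Python of one possible approach: attaching a machine-readable provenance record to generated output. The function and field names are hypothetical, invented purely for illustration; real schemes under discussion range from provenance metadata standards such as C2PA to statistical watermarks embedded in the text itself.

    import hashlib
    import json

    def attach_disclosure(text: str, model_name: str) -> str:
        """Wrap AI-generated text in a hypothetical provenance record.

        Illustrative only: the field names are invented, not taken from
        the AI Act or any existing watermarking standard.
        """
        record = {
            "content": text,
            "disclosure": "This content was generated by an AI system.",
            "model": model_name,
            # Hashing the content lets a platform later check that the
            # disclosure still matches the text it was issued for.
            "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        }
        return json.dumps(record, indent=2)

    if __name__ == "__main__":
        print(attach_disclosure("Example paragraph produced by a model.", "demo-model-1"))

A scheme like this only demonstrates the principle: a determined user could simply strip the record out, which is one reason watermarks embedded in the generated content itself are also being explored.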

United States

Although a well-publicised senate hearing on regulating AI was held in May 2023 – which included Sam Altman, the chief executive of OpenAI, warning that “regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful [AI] models” – there is as yet no tangible federal AI regulation in the US. Further senate hearings on the matter are taking place.

The White House Office of Science and Technology Policy (OSTP) issued a Blueprint for an AI Bill of Rights around the time ChatGPT was released to the public, at the tail end of 2022, which sets out guidelines for the development of AI systems – but these guidelines are non-binding.

It’s worth noting that New York City has introduced a local law – the New York City Bias Audit Law – which requires companies to conduct bias audits of automated employment decision tools, with the aim of preventing bias in AI recruitment technology.

Despite the lack of formal rules, it’s likely that US lawyers will be regulating their own use of AI following the recent faux pas in which two New York lawyers were fined for submitting a legal brief containing fake case citations generated by ChatGPT.

China

The State Council of the People’s Republic of China published the “Next Generation Artificial Intelligence Development Plan” back in 2017, which contained a set of ethical guidelines for dealing with AI. Since then, various policy documents and regulations have been published – notably (i) the Provisions on the Management of Algorithmic Recommendations in Internet Information Services, which aim to regulate algorithmic recommendation activities used by apps and other internet services; and (ii) the Provisions on the Administration of Deep Synthesis Internet Information Services, which aim to tackle “deepfakes”. Meanwhile, at draft stage, the Measures for the Management of Generative Artificial Intelligence Services have been drawn up primarily to regulate generative AI services such as ChatGPT.

Canada, Brazil and Japan

Canada is moving forward with its Artificial Intelligence and Data Act (AIDA), which seeks to reduce the risk of harm or bias associated with AI tools, and to prohibit AI systems that may cause serious harm to individuals or their interests.

Brazil is working on a draft AI Act which, similar to the EU AI Act, aims to classify different uses of AI according to level of risk and to regulate them accordingly. Proposed fines for non-compliance mirror the lower tier of GDPR fines: up to 50 million Brazilian reals (approx €9 million) or 2% of a company’s annual turnover.

Despite being a very high-tech nation, Japan has so far taken no legislative steps to regulate AI. However, it does have a few guidance documents, including the Governance Guidelines for Implementation of AI Principles, which serve as a tool for companies involved in the development of AI tools.

As the technology develops, more governments will be forced to consider its impact upon their citizens and to figure out the best methods of regulating its use in their jurisdictions. But, just as with the issue of big tech avoiding tax through profit shifting, solutions will need to be devised which tackle the inherently global nature of AI tools.

Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.

Photo by Mojahid Mottakin on Unsplash.