The EU AI Regulations: taming the machine

Finally, after nearly three years of consultation, white papers and industry input, on 21st April 2021 the European Commission published its proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (the “Regulations”). The overarching aim of the Regulations is to ensure that fast-changing AI technology is applied and supplied across Europe according to a single framework, rather than under inconsistent national laws that may be sporadically applied.

As I type this on my Apple device, the term “AI” is auto-corrected to “So”. It is always good to realise that the AI in my device is one step ahead of me; it reminds me why regulations like these are needed and why I love my job. For the sake of balance and to acknowledge that “other AI systems are available”, let’s take Apple’s “So” and ask:

  • So, how will these Regulations regulate AI? 
  • So, what is the real effect of the Regulations?
  • So, does the fact that the Regulations are an EU document have a bearing on AI development and use on our island of Britain, post-Brexit? 
  • So, do the Regulations stray into the field of ethics and attempt to answer some of the wider fears that people have about AI? 
  • So, do the Regulations support socially and environmentally beneficial outcomes and provide key competitive advantages to companies and the European economy?

To answer these questions, we should perhaps go back a step and start with a more fundamental one.

What is AI?

Lawyers love a definition, and better still one that can change, be interpreted and give endless scope for contractual and court argument. Lawyers adore definitions which are intended to relate to technology but describe themselves as being “technology neutral”, and lawyers rejoice in laws that provide for regular change (Article 4 of the Regulations allows the Commission to amend the list of techniques and approaches considered to be “AI”). With that as an introduction, for now at least, the Regulations borrow from the OECD’s definition in the 2019 “Recommendation of the Council on Artificial Intelligence” and define AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”

How will the Regulations regulate AI? 

The Regulations acknowledge the broad range of applications of AI, from software-based systems acting in the virtual world, such as search engines, voice assistants or bots, which people widely accept, to more controversial analysis software such as speech and face recognition systems. The Regulations also recognise that AI can be embedded in hardware devices such as robots, autonomous cars, drones or IoT applications, and this breadth indicates that the definition does need to be flexible.

Much of the consultation that led to the Regulations called for rules addressing the opacity, complexity and algorithmic bias of AI. It is a given that AI is complex by its very nature, as it often has an element of unsupervised learning or being “self-taught” (and after months of home-school, many of us know that unsupervised learning can quickly go “off curriculum”!). The consequence is that the basis of algorithmic decision-making cannot always easily be identified and cannot therefore be interrogated for “fairness”. The issue of coding bias is increasingly recognised too. Whether it stems from the conscious or unconscious bias of the coding actor or from the data sets used to teach the AI system, which can by their nature contain “confirmatory bias”, such bias should have no validity and should not be perpetuated in the world of AI.
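
To make the point about “confirmatory bias” concrete, below is a minimal, hypothetical sketch in Python; nothing like it appears in the Regulations themselves, and all names, groups and figures are invented for illustration. It shows how a naive model trained on historically biased hiring decisions simply learns to repeat them.

```python
# Hypothetical sketch: a model trained on biased decisions repeats the bias.
# All data, groups and probabilities below are invented for illustration.
import random
from collections import Counter

random.seed(42)

# Invented historical hiring records: (group, qualified, hired).
# Group "B" candidates were historically hired less often, even when qualified.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    if qualified:
        hired = random.random() < (0.9 if group == "A" else 0.4)  # biased past
    else:
        hired = random.random() < 0.1
    history.append((group, qualified, hired))

# A naive "model": predict the majority historical outcome for each
# (group, qualified) pair - in effect, what an unexamined classifier learns.
model = {}
for key in [("A", True), ("A", False), ("B", True), ("B", False)]:
    outcomes = Counter(h for g, q, h in history if (g, q) == key)
    model[key] = outcomes.most_common(1)[0][0]

# Two equally qualified candidates now receive different answers:
print(model[("A", True)])   # True: hired
print(model[("B", True)])   # False: the historical bias is perpetuated
```

The bias lives in the data; the code merely preserves it, which is why so much of the Regulations’ attention falls on the quality and governance of training data.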

The Regulations recognise that these specific characteristics of AI (opacity, complexity, dependency on data, autonomous behaviour) can adversely affect the fundamental rights of citizens enshrined in the EU Charter of Fundamental Rights. By proposing requirements for “trustworthy AI” and proportionate obligations, the Regulations seek to address the following elements of the Charter: Article 1, the fundamental right to human dignity; Articles 7 and 8, the respect for private life and protection of personal data; Article 21, non-discrimination; Article 23, equality between women and men; Article 11, the right to freedom of expression; Article 12, the right to freedom of assembly; and Articles 47 and 48, the right to an effective remedy and to a fair trial, the right of defence and the presumption of innocence.

Whilst it cannot be argued that the Regulations should omit protections which bolster existing EU laws such as the Charter of Fundamental Rights, this is where the Regulations must walk a fine line between being merely aspirational assertions and being so prescriptive that they risk becoming rapidly obsolete as AI develops.

What is the real effect of the Regulations?

The approach taken in the Regulations is two-fold. First, the “technology neutral”, rather “vanilla”, approach to defining AI aims to ensure the definition is wide enough to capture as many types of machine-made decisions and human/machine interactions as possible. Second, the Regulations state that AI systems placed on the market in the EU or used in the EU must respect existing law on fundamental rights and Union values and must be safe. To do this, the Regulations apply at all stages of the lifecycle of AI use, from placing the AI system on the market, to deploying it in business, to making use of the AI in the market. Each application of the technology is then categorised according to the nature of the use and thus the “risk” it presents to alignment with Union values and the Regulations.

How do the “risk classifications” under the Regulations work? First, it is worth noting that they do not apply to private, non-professional uses. Second, the risk classification is undertaken according to the specific purpose and function performed by the AI system in use, meaning it broadly aligns with existing EU product safety legislation, so the concepts here are not “new”.

Title III of the Regulations provides a helpful list of “critical fields” and “use cases” that frame the classification by way of examples. The “critical fields” are biometric data, education, recruitment and employment, provision of important public and private services, law enforcement, asylum and migration, and justice. In identifying these “critical fields”, the Regulations take account of the number of potentially affected persons, their dependency on the outcome, the irreversibility of any harm that may be caused to them by non-transparent or non-trusted AI decision-making, and the extent to which existing Union legislation already provides effective measures to prevent or substantially minimise those risks to users.

The Regulations provide that all high-risk AI must be developed with transparency in mind; the system must name the provider (to create accountability) and describe the capabilities and limitations of the AI. Robust and accurate security must be built into the system. Risk management, like privacy, must be included in AI systems by design. The training data used to teach the AI must be available for scrutiny and subject to data governance and oversight if required (data scraping is not adequate for training high-risk AI programs, particularly as the gold standard is error-free and bias-free data), and there must be human oversight of high-risk AI. The output of the system must be traceable and testable, and the system may have to be registered with a notified body.

It is anticipated that every EU member state (and, because of the supra-national effect of the Regulations, any country where AI is developed) will designate one or more national competent authorities to supervise the application and implementation of the Regulations and carry out market surveillance activities. In the EU, the national competent authority will also represent its country on the European Artificial Intelligence Board.

How will the Regulations affect AI in the UK? 

The Regulations are not law in the UK, but every piece of AI supplied in or to Europe or its citizens must comply with them irrespective of where it is developed. So the Regulations do have a direct effect on businesses in all nations of the UK, and probably on all those living here too.

Do the Regulations stray into the field of ethics?

The Regulations by and large balance ethical principles against application of the Charter pretty well. For example, they prescribe that any AI used for biometric identification will always require a third-party conformity assessment, acknowledging the large-scale mistrust of such AI and the principle that humans should still be able to determine their own identity. To see where the Regulations deal with some of the harder ethical issues and individual human concerns, take a look at the “Prohibited AI Practices” in Title II, which include:

  • AI systems that deploy subliminal techniques beyond a person’s consciousness;
  • AI systems that exploit any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability;
  • state-sanctioned AI systems for the evaluation or classification of the trustworthiness of natural persons over a certain period of time, based on their social behaviour or known or predicted personal or personality characteristics (profiling);
  • use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary, including (i) a targeted search for specific potential victims of crime, including missing children; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; (iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence.

The list would give the most ardent AI conspiracy theorist a warm feeling of belonging and of finally being listened to, but would barely begin to appease the late Professor Hawking (who said “AI will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing”).

It is likely that the Regulations will be implemented and then quickly reviewed as AI develops and the leviathans like Alibaba, Amazon and Google determine that the Regulations don’t apply to them and tell us “I’m sorry, I’m afraid I can’t do that”.

Joanne Frears is IP & Technology Leader at Lionshead Law, a virtual law firm specialising in employment, immigration, commercial and technology law. She advises innovation clients on all manner of commercial and IP matters and is a regular speaker on future law. Email j.frears@lionsheadlaw.co.uk. Twitter @techlioness.

