Tony Blair and William Hague have recently published a joint report, A New National Purpose, which explores how the UK can harness innovative technologies to meet future challenges. The "cross-party" report argues that we are currently undergoing a new form of Industrial Revolution "as developments in artificial intelligence (AI), biotech, climate tech and other fields begin to change our economic and social systems". It calls for policymakers to mitigate the consequent threats whilst embracing the opportunities. Several of its proposals touch upon the convergence of law and technology, and we consider some of these aspects below.
Perhaps unsurprisingly, given Tony Blair’s foiled aspirations to introduce digital ID during his premiership, much of the press attention has focused on the report’s call for the government to “provide a secure, private, decentralised digital-ID system for the benefit of both citizens and businesses”. In practice, this would be likely to take the form of an app which could serve as a passport, driving licence and age verification tool, while also storing information such as NHS and National Insurance numbers alongside academic and professional qualifications.
The two core recommendations of the report in this regard are to:
- Accelerate the implementation of a single digital-ID system using a “digital wallet” for all UK citizens, with the same legal status as physical identity documents.
- Introduce a “Once-Only” data principle, under which government bodies are legally prohibited from requesting data from a citizen if the same data is already held by another government agency.
The danger of a single digital-ID app is that, if it is hacked, the citizen’s entire official records could be exposed in one fell swoop. It will also undoubtedly revive the fears of government snooping that privacy activists campaigned against when Blair originally proposed to introduce identity cards.
The report warns about how technology can be wielded by authoritarian regimes to repress their citizens, alluding to the use of automated facial recognition software as part of the Chinese Social Credit System, and in particular to monitor and oppress the Uyghur population. It argues that the UK, in contrast, should develop “frameworks for moral governance of new technology” to ensure that technology upholds human-rights values. It recommends that:
- The UK government should work with EU and US regulators to agree on shared cross-jurisdictional technology standards [which it denotes as “T3”] in the hope that these could eventually lead to a unified global AI regulatory framework.
- The UK should “initiate and lead a multilateral scientific effort at the forefront of AI” to proactively prepare for the governance of general-purpose AI systems. This would apply to AI software which can be used across multiple sectors.
- The UK Foreign Office should work with its allies across the world to “introduce export controls on AI to authoritarian countries” and try to prevent investment in, or takeovers of, UK AI companies by foreign companies potentially linked to such regimes.
However, at the same time as suggesting the creation of broad regulatory AI frameworks, the report also criticises the failure of government to take advantage of the theoretical “Brexit dividend” of cutting red tape to encourage technology investment:
“Ministers have made a start on considering where UK regulation can be made more nimble and efficient in areas such as gene editing and clinical trials, but regulatory restrictions on innovation remain relatively high. For example, three years on, legislation for autonomous vehicles remains stuck, we kept EU rules on robotics that Germany has subsequently abolished so that they can build fully autonomous warehouses, while novel food companies are moving overseas due to regulation.”
The report also suggests that planning regulations should be loosened to “create exemptions and fast-track processes for R&D infrastructure planning” to enable the “construction of tech-relevant infrastructure such as laboratories”.
The report makes reference to generative AI software such as ChatGPT and Stable Diffusion, alluding to some of the problems of generative AI in the context of academia:
“ChatGPT is able to pass many graduate-level exams and write convincing essays, demolishing standard modes of assessment at a time when educators are already under significant strain.”
It also notes that generative AI is trained using copyright material, which raises intellectual property concerns, but it does not propose any new regulations to counter this specific concern. Instead it encourages government to promote UK companies which are developing AI technologies and to embrace the technology as a tool:
“As a cognitive assistant … [generative AI and decision intelligence technology] … will be able to arm every civil servant and frontline deliverer of public services with syntheses of complex data, enabling them to make more informed choices without the need for technical expertise or access to privileged data sets.”
Interestingly, Elon Musk – one of the founders of OpenAI, which created ChatGPT – is more cautious about generative AI. Musk, along with 1,000 artificial intelligence experts, researchers and backers, has called for a six-month moratorium on “the training of AI systems more powerful than GPT-4 [the software behind ChatGPT]”. Furthermore, Italy has recently banned ChatGPT over privacy concerns, and is planning to investigate whether the software breaches the General Data Protection Regulation (GDPR).
We recently discussed the implications of ChatGPT for the legal world here.
Key takeaways for lawyers
Although the Blair–Hague report is merely an independent piece of research, the august reputation of its authors is likely to focus the minds of relevant government ministers, and its suggestions may well result in regulatory reforms, alongside legislation which might ensue from the recent AI white paper. Lawyers advising clients in the tech space should keep an eye on developments in the following areas:
- Data protection – in light of Italy’s move, other EU countries may seek to ban ChatGPT over GDPR concerns.
- Copyright – it is still unclear whether generative AI tools infringe copyright and other forms of intellectual property. There is some ongoing litigation (eg Getty Images) which could result in a landmark judgment.
- Driverless cars – although moves towards allowing autonomous vehicles on roads appear to have stalled, a new trial of driverless buses in Scotland shows that strides are still being made in this area.