
AI-hallucinated case law
In the recent case of Ayinde v Haringey [2025] EWHC 1383 (Admin), Dame Victoria Sharp, President of the King’s Bench Division of the High Court, issued a stark warning to lawyers that they have a “professional duty” to ensure that fictitious case citations “hallucinated” by generative AI (GenAI) tools do not creep into litigation materials. A separate article in this edition of the newsletter contains more information on her ruling, and a broader discussion of the risks of GenAI being used for legal research.
Online safety
Arguably the most significant compliance deadline of the Online Safety Act (OSA) arrived on 25 July 2025: mandatory age verification checks, backed by hefty potential penalties for non-compliance of up to the greater of 10 per cent of ‘qualifying worldwide revenue’ or £18 million. Although the media focused on the immediate effect on the pornography industry, many other websites and online services were affected, including Spotify, Reddit and potentially Wikipedia.
Unsurprisingly, the restrictions, enforced by rather cumbersome geo-blocking technologies, led to a surge in VPN downloads, prompting criticism of the government’s approach and even leading to a spat between former technology secretary Peter Kyle and Nigel Farage. However, the EU is planning to introduce a similar regime in 2026, with several European countries already testing age verification, and further legislation could yet be brought forward to plug the VPN loophole.
In a potential expansion of the OSA, the government is reportedly considering time limits on children’s social media use. However, Nick Clegg, the former Meta global affairs chief and one-time Lib Dem leader who has recently returned to the UK and called for tighter age verification rules, has previously pointed out that many social media apps already include parental controls which can restrict usage, but that most parents fail to use them.
Other recent developments around online safety include concerns raised with Ofcom in response to Meta’s plans to use AI to carry out up to 90 per cent of all its risk assessments. Meanwhile, Meta says it will take steps to prevent its AI chatbots from talking to teenagers about suicide, self-harm and eating disorders.
Recent legislation
Data (Use and Access) Act 2025 – This finally received Royal Assent on 19 June 2025. Although the legislation became notorious for a proposed amendment on LLM training and copyright, which resulted in protracted Parliamentary ping-pong, its core aim is to improve and facilitate the sharing of data between citizens and public services.
Property (Digital Assets etc) Bill – This comes in the wake of a Law Commission report which proposed a third category of personal property (in addition to the two traditional categories: “things in action” and “things in possession”) to cover digital assets such as cryptocurrency, non-fungible tokens (NFTs) and carbon credits. In its current form, all the Bill does is create this third category, leaving it to the courts to decide exactly which digital assets it will encompass. At the Bill’s second reading, Sarah Sackman, Minister of State in the Ministry of Justice, said: “Crucially, the Bill does not attempt rigidly to define every type of digital asset. Instead, as I have said, it allows the common law to evolve, giving our courts the flexibility to adapt to technologies that have not yet even been imagined.”
Artificial Intelligence (Regulation) Bill – This private member’s bill, reintroduced by Lord Holmes after failing to make it through wash-up before the dissolution of the previous Parliament, essentially proposes a range of AI regulatory principles, along with the creation of an “AI Authority” to oversee enforcement.
Social Media (Access to Accounts) Bill – Another private member’s bill, this aims to compel social media companies to give parents access to the online accounts of their deceased children. It has been dubbed “Jools’ Law” after 14-year-old Jools Sweeney, who died suddenly in 2022; his mother, Ellen Roome, suspects his death was the result of an online challenge gone wrong, but has been unable to access his social media accounts to investigate.
European Accessibility Act – This EU directive, whose main compliance deadline passed on 28 June 2025, requires online platforms and e-commerce providers operating in the EU to meet functional accessibility requirements for websites and emails.
AI litigation
There has been a barrage of cases lodged against AI companies, accusing them of intellectual property theft for training their LLMs on copyright material found on the internet. Recent litigation includes:
Anthropic – Three authors, including Andrea Bartz, accused the company of training its Claude LLM on millions of pirated books. Although the judge ruled that the training process itself was protected by the “fair use” doctrine, a further trial will need to consider the separate issue of the pirated copies that were used.
Meta – Thirteen authors, including Sarah Silverman, were also defeated by the fair use doctrine, but the judge notably observed that the plaintiffs might have succeeded had they made the right arguments.
Stability AI – Getty Images lodged claims against the maker of the text-to-image GenAI tool in both the UK and the US, alleging that Stability AI unlawfully scraped millions of images from Getty’s websites and used them to train and develop its AI tools, although Getty has since dropped some elements of its claims.
Midjourney – Film studios Disney and Universal are suing the image-based AI platform for allegedly making “innumerable” copies of movie characters, including Darth Vader.
Perplexity – The BBC has accused the AI search engine of hoovering up its website content, and has threatened legal action. But the AI company has hit back, accusing the broadcaster of a “fundamental misunderstanding of technology, the internet and intellectual property law.”
For more information on the issue of copyright and AI, read The state of copyright and AI in 2025, which also covers the Anthropic and Meta cases in more depth.
Other AI developments
Yes, Computer
In our June edition of What’s New, we mentioned how a government-made AI tool called “Consult” (part of a suite called Humphrey, named after a character from the 1980s TV sitcom Yes, Minister) is being trialled as a means of speeding up the consultation process. It seems that this is just the tip of the iceberg, with plans announced to train the entire civil service to use AI tools from this autumn. However, public sector use of AI is likely to be increasingly called into question, with a recent Freedom of Information case (Thomas Elsbury v The Information Commissioner) setting a precedent for greater transparency over government bodies’ use of AI.
Miseducation
A Guardian investigation has found that thousands of university students have been caught using ChatGPT to write their essays, likely just a tiny fraction of the overall number. A government spokesperson said in response that “integrating AI into teaching, learning and assessment will require careful consideration”, but meanwhile the Department for Education has already given teachers the green light to use AI to speed up marking and write reports.
Elon effect
Concerns have been expressed about the government becoming too close to tech companies and offering them regulatory leniency in an effort to boost investment. A recent “memorandum of understanding” between the UK government and OpenAI has been criticised as “very thin on detail”, and it comes hot on the heels of a similarly controversial agreement with Google. But Chancellor Rachel Reeves argues that the government’s embrace of AI (eg through the AI Opportunities Action Plan) is already paying dividends, pointing to the recent announcement of a £5 billion investment in UK AI by Alphabet. The government has also secured a “tech prosperity deal” which PM Keir Starmer says will create highly skilled jobs. Meanwhile, the creative industries have pressed for more regulation of AI, with the BFI calling for an opt-in regime which would force AI companies to seek permission before training their models on copyright film and TV scripts.
Acquisitive disruption
In what is reportedly the first acquisition of a law firm by an AI company, Lawhive recently announced that it had purchased Woodstock Legal Services after raising £43 million in external investment. Pierre Proner, chief executive and co-founder of Lawhive, said: “We believe that Lawhive’s vertically integrated model of a regulated law firm and tech platform for lawyers to work alongside AI colleagues, creates better outcomes for everyone.”
Further reading
Class-Wide Relief: The Sleeping Bear of AI Litigation Is Starting to Wake Up – Intellectual Property & Technology Law Journal
Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.
Photo by Mon Esprit on Unsplash.