What’s New? June 2025

First SRA-regulated AI law firm

At the beginning of May, the Solicitors Regulation Authority (SRA) made the following announcement in a press release:

“We have authorised the first law firm providing legal services through artificial intelligence (AI). While many firms are already using AI to support and deliver a range of back-office and public-facing services, Garfield.Law Ltd is the first purely AI-based firm we have authorised to provide regulated legal services in England and Wales.”

Despite all the ensuing proclamations about the beginning of the end for traditional law firms, the current service offering of this “AI law firm” is not particularly groundbreaking – essentially a tool to help small businesses collect debts. The service comprises three primary components:

  1. A chatbot (likely built on an existing large language model such as GPT or Gemini) which answers questions about debt collection.
  2. Creation of a Letter Before Action, via the chatbot, with the option to upload relevant documents.
  3. Generation of a Claim Form and Particulars of Claim, again via the chatbot.

Although this service may well build up a customer base (especially with the Letter Before Action priced at the modest sum of £7.50), it’s basically competing against the plethora of established legal template document providers which have a far wider selection of templates.

It will be interesting to see if this move by the SRA proves as significant as their blessing of Alternative Business Structures (ABSs) back in the days of “Tesco law” predictions.

Related to the chatbot element of the product, although it will likely be programmed to avoid crossing the line into providing legal advice, it’s worth noting a recent study from Southampton University which found that “Lay People can Distinguish Large Language Models from Lawyers, but still Favour Advice from an LLM.”

Online Safety Act: updates and related news

Wikipedia legal challenge

The Wikimedia Foundation – the non-profit organisation behind online encyclopaedia Wikipedia – is seeking a Judicial Review of certain provisions of the Online Safety Act, which its lawyers have described as “flawed legislation”. The primary complaint is that Wikipedia is likely to be designated under the Act as a “Category 1” platform (because anyone can edit its pages), meaning that it will be obliged to verify the identities of all its volunteer contributors, potentially resulting in the loss of many editors – particularly those living in oppressive regimes. Rebecca MacKinnon, the Wikimedia Foundation’s vice president of global advocacy, warns that: “We would be forced to collect data about our contributors, and that would compromise their privacy and safety, and what that means is that people would feel less safe as contributors”.

First Ofcom Investigation

Ofcom has launched its first investigation into online harms in its regulatory role under the Online Safety Act. It is looking into an online suicide discussion forum which, according to the BBC, has been linked to 50 deaths in the UK alone. Specifically, it is considering the alleged failure of the provider to:

  • adequately respond to a statutory information request;
  • complete and keep a record of a suitable and sufficient illegal content risk assessment; and
  • comply with the safety duties about illegal content, the duties relating to content reporting and duties about complaints procedures, which apply in relation to regulated user-to-user services.

EU age verification app

Of relevance to the Online Safety Act is news that the EU will be launching an age verification app in July. The idea is that children will be able to verify their age to access certain platforms without disclosing personal information.

Fake reviews banned

Certain provisions of the Digital Markets, Competition and Consumers Act 2024 (DMCCA) came into force on 6th April 2025, notably the prohibition of fake reviews and hidden fees in online transactions. Commenting, Justin Madders, Minister for Employment Rights, Competition and Markets, said, “From today consumers can confidently make purchases knowing they are protected against fake reviews and dripped pricing.”

The new rules will be enforced by the Competition and Markets Authority (CMA), which has already issued relevant guidance for businesses. The guidance states that any business which publishes “consumer reviews or consumer review information” must “take reasonable and proportionate steps to prevent and remove fake reviews.”

AI consultation tool

A new AI tool called “Consult”, developed by the government to speed up the consultation process, has just been put to the test by the Scottish Government in a consultation seeking views on how to regulate non-surgical cosmetic procedures.

Consult forms part of a suite of AI software being created by the government to “speed up the work of civil servants and cut back time spent on admin, and money spent on contractors”. Commenting on its roll-out, Technology Secretary Peter Kyle said, “No one should be wasting time on something AI can do quicker and better, let alone wasting millions of taxpayer pounds on outsourcing such work to contractors.”

Perhaps the tool will also be used to sift through the 11,500 responses to the copyright and AI consultation (covered in our March 2025 edition of What’s New), as well as the related proposed amendments to the Data (Use and Access) Bill, which are still ping-ponging between the Commons and the Lords?

LLMs trained on Facebook posts

Meta has been given the green light by the Irish Data Protection Commission (DPC) to train its LLMs on EU user data which is publicly available on its platforms, including Facebook and Instagram.

Although the DPC says it will monitor things, and has requested a further report from Meta – expected by October – evaluating the “efficacy and appropriateness of the measures and safeguards”, the onus is now essentially on individual users to “regularly review their privacy settings and controls so that these continue to reflect their personal preferences.”

AI Arbitration

The Chartered Institute of Arbitrators (Ciarb) has published its Guideline on the Use of AI in Arbitration, which aims to help those involved in arbitration with navigating the role of artificial intelligence. It is split into the following sections:

  1. Benefits and risks of the use of AI in arbitration
  2. General recommendations on the use of AI in an arbitration
  3. Arbitrators’ powers to give directions and make rulings on the use of AI by parties in arbitration
  4. Use of AI in arbitration by arbitrators

In an insight article on the new Guideline, Dentons notes that, “Of particular interest is the fact that the Guidelines appear to foresee parties (and perhaps arbitrators) using generative AI to analyse the evidentiary record and even to produce documents that will form part of such record.”

Legal Aid cyberattack

The Ministry of Justice revealed that the Legal Aid Agency’s online system was subject to a malicious hack in April 2025, resulting in the theft of data stretching back to 2010. The compromised data, relating to legal aid applicants, included “contact details and addresses of applicants, their dates of birth, national ID numbers, criminal history, employment status and financial data such as contribution amounts, debts and payments.”

Commenting, Jane Harbottle, Chief Executive Officer of the Legal Aid Agency, said, “Since the discovery of the attack, my team has been working around the clock with the National Cyber Security Centre to bolster the security of our systems so we can safely continue the vital work of the agency.”

This news comes hot on the heels of a recent spate of cyberattacks on retailers, notably M&S and Harrods.

New York Times v OpenAI: if you can’t beat ‘em, join ‘em

As we have previously reported, the New York Times has been pursuing a lawsuit against OpenAI since 2023, arguing that training ChatGPT on its archive of news articles constituted a breach of copyright. It has now been announced that the newspaper has decided to partner with Amazon, allowing the tech giant to train its LLMs on the newspaper’s editorial content.

Meanwhile, the litigation against OpenAI is still ongoing, with US District Judge Sidney Stein recently rejecting parts of a motion to dismiss filed by OpenAI and co-defendant Microsoft.

GenAI fake citations

It is pretty widely known that lawyers who have eagerly embraced generative AI tools without implementing guardrails (eg actually checking the output of the LLMs before relying on it for advice and litigation matters) have come a cropper in the courtroom, going all the way back to June 2023 when ChatGPT was still in its infancy. But for some reason, legal professionals keep making the same mistake – perhaps due to pressure from law firm management to achieve the hallowed “efficiency gains” expected from investment in AI tools (or at least effectively marketed by legal tech companies).

It was recently reported that lawyers defending AI firm Anthropic from a copyright claim lodged by music publishers against its LLM tool Claude ended up having to apologise to the court for using the very same GenAI software in a submission, after it produced a flawed citation – a so-called AI hallucination. According to one database, there have been 120 instances of court cases where AI hallucinations have been spotted in various legal documents.

Driverless cars: a reprise?

As reported in this newsletter back in January 2016, Google co-founder Sergey Brin previously predicted that driverless cars would be available for consumers by 2017, and many commentators at the time sounded confident that there would be an autonomous vehicle revolution by 2020. Halfway through the new decade, although many cars now have driverless capabilities, no jurisdiction has allowed them to be used en masse on public roads, beyond limited testing and specific exemptions.

But it has now been reported that the Department for Transport issued this statement to the BBC: “We are working quickly and will implement self-driving vehicle legislation in the second half of 2027”. Whether this is just another throwaway milestone or a genuine plan to get driverless cars into gear remains to be seen.

Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.

Photo by Porapak Apichodilok on Pexels.