What’s New? March 2026

Online safety roundup

The social media ban for under-16s, which came into force in Australia at the tail end of 2025, appears to have opened the floodgates to similar regulations being planned across the world, including in the UK (although the initial Lords proposal was voted down by MPs). Following are a number of recent developments in the UK concerning the safety of children online.

A consultation on children’s social media use, and guidance issued to schools to tighten up mobile phone bans, with an expectation that schools are “phone-free by default” – a commitment backed up by Ofsted inspections. The consultation will also include “restrictions on children’s use of AI chatbots [see also Baroness Kidron’s amendment to the Crime and Policing Bill re chatbot regulation], as well as options to age restrict or limit children’s VPN use where it undermines safety protections and changing the age of digital consent.”

The publication of research commissioned by the Department for Science, Innovation and Technology (DSIT) on the “impact of smartphones and social media on children and young people”.

A proposed amendment (94A) to the Children’s Wellbeing and Schools Bill by the House of Lords [column 307 in Hansard] which would (i) raise the age limit for access to social media to 16, (ii) require social media companies to put in place highly effective age assurance, and (iii) direct the Chief Medical Officer to prepare and publish advice to parents and carers on the use of social media by children.

Ofcom has issued a call for evidence in relation to its statutory report “on content that is harmful to children” – the first of which is due in October 2026. It has also written to Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube, setting them a deadline of 30 April 2026 to explain how they will enforce their minimum age rules.

Ofcom has also started to issue fines and carry out investigations into adult entertainment platforms that have failed to comply with the age verification duties under the Online Safety Act, which came into effect in July 2025 – notably a £1.35 million fine for one provider called “8579 LLC” and another of £1 million for “AVS Group Ltd”. In addition to the seven-figure fines in respect of insufficient age verification, further fines have been levied for the failure of both companies to respond to Ofcom’s information requests. But dwarfing these amounts is a £14.47 million fine issued to Reddit, again for failure to implement sufficient age verification methods.

The government is also tabling an amendment to the Crime and Policing Bill to “require chatbots not currently in scope of the Online Safety Act to protect their users from illegal content” – essentially a way of tackling the Grok “undressing” deepfake AI controversy. Another amendment to this Bill refers to “measures around preservation of child social media data”, which would require social media companies to preserve the data contained within a child’s social media accounts within five days of their death.

AI roundup

SPUR – A UK consortium of legacy media outlets, comprising the Financial Times, the Guardian, the Daily Telegraph, the BBC and Sky News, has been formed, ostensibly to lobby for AI governance in the realm of journalistic content. The group, named the Standards for Publisher Usage Rights coalition (SPUR), has written an open letter calling for the establishment of “shared technical standards and responsible licensing frameworks that ensure AI developers can access high quality, reliable journalism in legitimate, responsible and convenient ways, while guaranteeing that publishers retain practical control of their content and receive fair value when it is used.”

Agentic AI – The Information Commissioner’s Office (ICO) has published a report on the concept of agentic AI. In particular it examines the prospect of a growing adoption of AI shopping agents, which are essentially AI assistants that are able to make automatic purchasing decisions on behalf of a user. Commenting, William Malcolm, Executive Director of Regulatory Risk and Innovation, says that “the public needs assurances their personal information is secure and well managed before placing their trust in agentic systems”. Meanwhile, the Competition and Markets Authority (CMA) has issued guidance on the interface of consumer law with AI agents (see this Lewis Silkin blog for an overview).

CJC AI disclosure – The Civil Justice Council (CJC) has launched a consultation looking at the use of AI in the preparation of court documents. In particular it asks whether there is a need for procedural rules (including Practice Directions) to govern the use of AI by legal representatives in preparing court documents.

UKJT AI consultation – The UK Jurisdiction Taskforce (UKJT) has published a consultation paper which seeks to provide some clarity on questions of legal liability for AI harms, specifically considering: “in what circumstances, and on what legal bases, English common law will impose liability for loss that results from the use of AI.” See this Bird & Bird blog for expert comment.

AI patents re-think – Back in 2024, in the case of Comptroller-General of Patents, Designs and Trade Marks v Emotional Perception AI Ltd [2024] EWCA Civ 825, the Court of Appeal held that an invention involving an artificial neural network was caught by the computer program “as such” exclusion under s.1(2) of the Patents Act 1977 (which essentially prevents computer programs from being patented unless they provide a new “technical solution” to a “technical problem”). The Supreme Court has now said that the UK Intellectual Property Office (UKIPO) should reconsider the patent application, potentially paving the way for AI creations to be patentable. See this Pinsent Masons blog for further discussion.

SRA to investigate solicitors’ use of AI – Judge Fiona Lindsley issued a fresh warning about solicitors using AI, after two immigration solicitors were found to have presented false case citations. This follows the widely publicised comments of Dame Victoria Sharp, who raised similar concerns in 2025. Aside from the hallucinated case citations, Judge Lindsley also warned that uploading client documents into AI tools “is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege”. For an in-depth discussion of issues related to legal privilege and AI under English law, see this post from Graeme Johnston.

Copyright and AI consultation – The government is due to publish a report on the use of copyright works in the development of AI systems by 18 March 2026, as well as an economic impact assessment. According to the FT, government plans to provide a copyright exception for AI companies training their LLMs are being “kicked down the road”. Separately, the Communications and Digital Committee in the House of Lords has urged the government to drop these plans altogether [see full report]. It’s also worth noting the piloting of a new “Creative Content Exchange”, part of potential government policy attempting to balance the rights of copyright holders with those of AI companies.

AI Omnibus – The EU published a “Digital Omnibus” proposal towards the end of 2025, with a view to simplifying some of the EU legislation dealing with various challenges of the digital age, including the EU’s Data Act, GDPR and AI Act. This blog from White & Case examines its scope and potential impact.

ChatGPT sued for “practising law” – A federal court in Chicago is considering an accusation that OpenAI violated Illinois’ unauthorized practice of law statute, specifically in relation to encouragement given by ChatGPT to a litigant to pursue their case.

Other developments

New data laws – The first provisions of the Data (Use and Access) Act 2025 came into effect on 5 February 2026, including enhanced data protection for children, streamlined processing of personal data by the security services, clarified rules on automated decision making and widened exemptions for cookie consent. For a broader discussion see this blog by Hogan Lovells.

Driverless cars – In 2025 Tesla was ordered to pay $243 million in damages over a 2019 fatal crash involving one of its vehicles, in which the autopilot function was found to be 33% responsible. Tesla has failed in its initial appeal, in which it argued that full blame rests with the human driver rather than its autopilot system, but is expected to appeal again.

Updated subscription rules – New regulations relating to consumer subscription contracts are due to come into force in autumn 2026, under the Digital Markets, Competition and Consumers Act 2024 (DMCCA). These will affect online businesses which charge recurring subscription fees.

Virtual theft – The Court of Appeal has ruled that gold pieces in an online computer game constitute “property” for the purposes of the Theft Act 1968. Notably, the judgment in R v Lakeman made reference to the Property (Digital Assets etc) Act 2025, which introduced a new category of personal property to cover digital assets. See this blog from Lewis Silkin for analysis of the case. Meanwhile, games with loot boxes are getting a minimum age rating of 16 across Europe.

Data protection – In a case between the ICO and DSG Retail Ltd (Currys) related to a data breach resulting from a cyberattack, the Court of Appeal held that data controllers and processors can be held liable for insufficient measures to protect data, regardless of whether individuals can be identified from exposed data sets. Elsewhere, in Germany, the Federal Court of Justice has ruled that cyberattacks which result in personal data ending up on the dark web can potentially lead to successful mass claims against data controllers and processors.

WhatsApp contracts – The High Court recently ruled that WhatsApp headers did not amount to a signature for the purposes of section 53(1) of the Law of Property Act 1925. However, in a separate recent case involving property, WhatsApp messages were held to form a valid contract, so the status of instant messages in contract formation is clearly still in a state of flux.

Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.

Photo by Rob Wingate on Unsplash.