What’s New? December 2025

Cybersecurity regulations

Possibly in response to the bailout of Jaguar Land Rover (JLR), which suffered a debilitating cyberattack earlier this year, the government has progressed its Cyber Security and Resilience Bill. It will require critical national infrastructure providers – such as hospitals, energy and water suppliers – to meet minimum standards in relation to cybersecurity policies and procedures. Notably, the Bill, introduced by Science, Innovation, and Technology Secretary Liz Kendall, also covers managed IT service providers and data centres which serve the designated providers, and builds upon the Cyber Governance Code of Practice.

A related, albeit completely separate, Private Members’ Bill sponsored by Conservative MP Bradley Thomas – the Cyber Extortion and Ransomware (Reporting) Bill – would require certain organisations to report extortion and ransomware cyberattacks. The Bill, which follows a recent consultation on this very topic, would “mandate any British company registered under the Companies Act 2006 that has an annual turnover above £25 million or is responsible for critical national infrastructure to inform the Government within 72 hours of becoming victim to a cyber-extortion or ransomware attack, with a further report being required if any payment is made by the company or a third party on its behalf, within 72 hours of the transaction taking place.”

Facial recognition and Digital ID

A consultation on the legal framework for using facial recognition in law enforcement has been launched by the Home Office. According to the BBC, only eight police forces across England and Wales have used facial recognition technology since 2017, and concerns were raised about additional live facial recognition (LFR) vans being rolled out earlier this year. This public consultation will feed into the construction of a legal framework to help “ensure law enforcement can properly harness the power of this technology whilst maintaining public confidence over the long term.” Commenting, Ruth Ehrlich, head of policy at Liberty, welcomed “the fact that there is going to be a debate on how facial recognition technology is regulated and used across police forces” but was disappointed “that this comes alongside a government commitment to ramp up its use before we’ve understood its dangers.”

Concerns about the prospect of the UK veering towards Orwellian state control reminiscent of the Chinese social credit system have also been raised in relation to the news that Keir Starmer is trying to reprise Tony Blair’s failed attempt to introduce digital identity cards. It is being touted as a way for citizens to have easier access to online public services and streamline identity checks, as well as reduce illegal immigration and working, but the Liberal Democrats have warned that it could simply “add to our tax bills and bureaucracy, whilst doing next to nothing to tackle channel crossings”.

Online safety

In a world first, Australia has implemented a full social media ban for under-16s, which came into effect on 10 December 2025. It is understood that over a million accounts across Facebook, Instagram, TikTok and Snapchat will be closed – but Elon Musk’s X is reportedly resisting the ban. Although concerns have been raised about the effectiveness of age verification checks, just as they were with the Online Safety Act (OSA), the ban will undoubtedly have a significant impact on society and industry, considering that children and young people tend to be the heaviest users of social media platforms. Silicon Valley will be watching closely in case other countries decide to implement similar bans in their jurisdictions once they have had a chance to evaluate the results of the groundbreaking Australian move. It may need to look particularly closely at the UK and the EU, since Ofcom, the European Commission, and Australia’s eSafety Commissioner recently issued a joint communication pledging to work together to advance child safety on digital platforms.

The Crime and Policing Bill, currently making its way through Parliament, is due to widen the ambit of priority offences in relation to violent pornography under the OSA. The Bill will also include an amendment which will allow AI developers and child protection organisations to test safeguards put in place by AI companies which aim to prevent the creation of illegal and extreme pornography – to ensure they actually work. Meanwhile, a Statutory Instrument is expected to make “cyberflashing” another priority offence under the OSA.

As we discussed in the June 2025 edition of What’s New, the company behind Wikipedia sought to challenge its designation under the OSA as a “category 1” platform due to concerns that some contributors to the online encyclopaedia could feel less secure. However, it lost its case at the High Court and has decided not to appeal.

Finally on the issue of online safety, it’s worth noting that the UK recently signed the UN Convention against Cybercrime. Also known as the Hanoi Convention, it’s a global treaty which aims to facilitate international cooperation in the enforcement of various cybercrime laws.

European Union

The European Commission has opened an antitrust investigation which will explore whether Google has broken EU competition law through its use of AI. In particular, it will consider how the search engine is using the content of web publishers in its “AI Overviews” (which appear at the top of most search results pages) – and crucially whether users are failing to click through to the actual sources of information, and therefore remaining within the Google ecosystem at the expense of third party web publishers. This comes in the wake of several reports about media organisations losing significant advertising income as a result of decreased traffic following the introduction of AI Overviews.

The European Data Protection Supervisor (EDPS) is also taking steps to mitigate some of the negative impacts of the AI surge, by publishing new guidance for risk management of AI systems. Its focus is on the management of personal data by European Union Institutions, Bodies, Offices and Agencies (EUIs) which are using AI tools.

The EU Data Act came into application on 12 September 2025. The aim of the legislation is to provide users of connected “smart” devices – which form part of the Internet of Things (IoT) – with more control over the data generated by these products. It is also among the laws targeted by the European Commission’s Digital Omnibus, a package of proposals designed to streamline rules on AI, cybersecurity and data.

Data protection

Blockchain – All the rage in the mid-2010s, blockchain is rarely spoken about today in legal tech circles, but its core technology is nevertheless being increasingly implemented in various applications related to the financial sector. The ICO recently ran a consultation on Distributed Ledger Technologies (DLT) – the broader category under which blockchain sits – to help prepare guidance on dealing with data protection issues arising from the technology.

No proof required – The Court of Appeal has ruled, in the case of Farley and others v Paymaster (1836) Limited, that individuals claiming a breach of the GDPR do not need to show proof of any actual disclosure of their personal information in order for their claims to succeed.

Schrems III? – The legal campaign of Max Schrems famously led to the invalidation of the first two EU-U.S. data transfer frameworks, Safe Harbor and Privacy Shield; his victories came to be known as Schrems I and Schrems II. Now French MP Philippe Latombe has taken up the mantle and challenged the latest incarnation of the arrangement, known as the EU-U.S. Data Privacy Framework (DPF). Although his initial case was dismissed by the General Court of the European Union, an appeal could now be heard by the European Court of Justice.

AI roundup

Getty v Stability AI – Getty Images has lost its core claims that Stable Diffusion unlawfully scraped millions of images from its websites and used them to train and develop its AI tools.

GEMA v OpenAI – A German court has found that the ability of Large Language Models (LLMs) to memorise copyright-protected works essentially breaches copyright law.

Inquiry on AI and copyright – A House of Lords Committee has opened an inquiry on AI and copyright, which comes in the wake of a failed attempt to include a clause in the Data (Use and Access) Bill to provide more protections to copyright holders from AI training models.

AI Growth Lab – The Technology Secretary has announced plans to create “AI Growth Labs” which would enable AI developers to create various AI products in “sandbox” environments which are exempt from certain regulations.

Ethics – The ethical aspects of AI development are generating discussion amongst legal experts, for example in terms of AI deployment in the workplace and vis-à-vis human rights.

Planning – Despite the efforts of government to circumvent nimbyism in its house building programme, new AI tools are actually making it easier for individuals to lodge objections in planning consultations.

LinkedIn – Both the Dutch privacy regulator and its Irish counterpart have raised concerns about the Microsoft-owned career networking platform using member data to train its AI.

Chatbots – Technology Secretary Liz Kendall has indicated that the OSA may be extended to mitigate the risks posed by AI chatbots.

Political deepfakes – After an AI-generated video falsely depicted Catherine Connolly, now Irish President, claiming to quit the presidential race, a complaint was lodged with the Electoral Commission.

Across the pond, Donald Trump has signed an executive order which essentially seeks to prevent states from enforcing their own AI regulations (except those which relate to child safety), purportedly to “ensure that the United States wins the AI race”.

Further reading

New cyber obligations for tech suppliers and data centres as UK ramps up cyber security scrutiny – Pinsent Masons

Ransomware: The changing UK enforcement environment – Clifford Chance

EU-US Data Transfers: Time to prepare for more trouble to come – noyb (Max Schrems’ organisation)

Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.

Photo by Karine Germain on Unsplash, rotated and cropped.