Laws of robotics

A new humanoid robot called Neo, designed to help with household chores and available to order for $20,000, has received a lot of press attention recently. Although delivery is expected as early as 2026, the current prototype is not fully automated: it relies on a human “teleoperator” (coincidentally called Turing) wearing a VR headset, and it takes five minutes to load a dishwasher with just three items. Early adopters will need to authorise a teleoperator to control Neo in their homes to perform the various tasks which have not yet been automated. Whether or not this latest attempt to bring a humanoid robot to market succeeds, it’s worth considering the current state of the laws covering robotics, an area which, according to Nvidia CEO Jensen Huang, is likely to take off (perhaps quite literally).

What is a robot?

The OED defines “robot” as “a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer”. The word derives from the Czech robota, essentially meaning “forced labour”.

Although robots are commonly thought of as androids (such as Neo), the real-world application of robotics has, thus far, largely been in factories, where robots take the form of mechanical arms and other equipment used to automate and speed up manufacturing. Robots are fairly common on the modern battlefield in the form of drones, and in hospitals robotic arms are sometimes used to assist surgeons and enhance precision. The only relatively common domestic robot so far is the automated vacuum cleaner, but the next huge leap for consumer robotics could well be driverless cars.

As robots become more prevalent in everyday life, the law will inevitably need to develop. It’s likely, though, that individual pieces of legislation will focus on specific types of robot, since a law covering all robots would have too wide an ambit.

The philosophical laws of robotics

Science fiction writer Isaac Asimov famously coined the “three laws of robotics” in 1942, to wit:

  1. “A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

These rules have been referenced in a vast array of science fiction novels and films. However, the fact that robots are now routinely used in warfare shows that these “laws” remain very much in the realm of science fiction – despite their sound rational and moral basis.

Since artificial intelligence (AI) has been making such huge strides over the last few years, legal and ethical debate has focused more on regulating AI than robotics. However, most modern robots use some form of AI, so any legislation in this area is likely to affect both fields. For example, the current debate about whether AI could be considered a “legal person” translates equally to robots which incorporate AI.

Legislation applying to robotics

Employment law

There are two sets of employment law regulations which cover the use of robotics in the workplace, albeit under the general category of “machinery”.

In essence, these regulations stipulate that any machinery should be safe and that relevant training must be provided for employees, in order to comply with the Health and Safety at Work etc. Act 1974. There is also a range of standards relating to the use of robots in the workplace (eg BS EN ISO 10218-1:2011, “Robots and robotic devices – Safety requirements for industrial robots”).

However, the Health and Safety Executive (HSE) has identified several areas of law which require development, noting that “the growing interaction between people and robotics & autonomous systems (RAS) is a crucial risk factor that requires further consideration in standards and guidance.” Meanwhile, the HSE is formulating rules to mitigate the risks of AI use in the workplace.

Product liability

Although the greatest danger currently posed by a robot vacuum cleaner may be a faulty battery, autonomous vehicles raise extremely serious safety concerns. For household robots which cause harm, however, UK consumers are still largely reliant on the Consumer Protection Act 1987. There are also various relevant standards, such as the BSI guide to the “Ethical design and application of robots and robotic systems” (BS 8611) and the ISO standard “Robots and robotic devices – Safety requirements for personal care robots” (ISO 13482).

The EU is currently implementing its new Product Liability Directive, which extends consumer protection to cover software and AI; this will likely also be relied upon in relation to household robots.

Data protection

The connected software which drives most consumer robots means they often fall into the category of the Internet of Things (IoT), with all the attendant issues surrounding personal data. Fears are sometimes raised about toy robots in particular commoditising children’s data – not without good foundation, according to a recent case. It may be necessary to expand proposed new laws on cyber defences to cover consumer robots, not only to strengthen data protection but also to avert the alarming prospect of autonomous vehicles being hacked.

Criminal law

The use of drones to smuggle contraband into prisons has received ample press attention of late; the government has even launched an “innovation challenge” seeking the help of inventors who can come up with a solution. Although criminal laws have not generally been adapted specifically to cater for drones and other robots, the Civil Aviation Authority (CAA) has drawn up a Drone Code, parts of which set out legal requirements for drone operators.

Case law

There have been several cases of humans being injured by robotic equipment in factories, going back to 1979, when Robert Williams was struck and killed by the arm of a robotic transfer vehicle whilst working at the Ford Motor Company’s Michigan Casting Center. Although a range of safety measures have been implemented since then, manufacturing plants and warehouses remain risky environments, and a South Korean man was killed by a robotic arm as recently as 2023.

Other cases involving drones and driverless cars have generally focused on the criminal or civil liability of various parties, including operators, safety drivers and the manufacturers of software and hardware, for example:

  • 2023 – After an Uber self-driving car killed a pedestrian in 2018, the safety driver pleaded guilty to endangerment and was sentenced to three years of supervised probation, but the company itself did not face any criminal charges.
  • 2024 – An amateur photographer pleaded guilty to recklessly or negligently causing or permitting an aircraft to endanger a person or property, in breach of the Air Navigation Order 2016, after his drone crashed into the stage of a Fatboy Slim concert in Brighton.
  • 2025 – A drug smuggler was jailed after being convicted of using a drone in an attempt to fly drugs into a prison.
  • 2025 – Tesla was found partly liable for a 2019 crash in which one of its cars, operating on Autopilot, killed a pedestrian and severely injured another person.

Although there are no reports of humanoid robots injuring humans, there have been some dangerous situations.

Are new regulations required?

Successive British governments have been discussing legislation for driverless cars, which we wrote about in this newsletter as far back as 2016. A regulatory framework was still being developed when we published an update in 2021, and the Automated Vehicles (AV) Act came into force in 2024, accompanied by an enthusiastic press release which claimed that “Self-driving vehicles could be on British roads by 2026.” At the tail end of 2025, however, it seems unlikely that autonomous vehicles without a safety driver will be permitted on British roads before the second half of 2027 at the earliest.

Arguably there is already enough regulation covering the operation of drones; the main challenge is enforcement, particularly in relation to prison smuggling. As for industrial robotics, this is probably the most mature area of the law: health and safety regulations have been developed over several decades in the wake of incidents involving robotic machinery in factories. However, as warehouses increasingly comprise a mixture of human workers and robots, further regulation will likely be necessary to avoid accidents.

Humanoid robots operating in residential settings would likely be covered initially by a combination of product liability and data protection laws. Human teleoperators may be held responsible for any incidents, much as safety drivers have been, but once AI takes over, full liability will likely be placed upon the software providers and hardware manufacturers. The law may need to be developed to cater for these various scenarios, not least to provide more clarity for insurance purposes.

The international perspective

The fragmentary approach to robotics law is fairly consistent around the world, with most countries relying on a combination of existing laws. However, some jurisdictions are certainly more risk averse than others when it comes to driverless cars and AI, and these attitudes will likely steer their development of any robotics regulation. For example, some US states already allow autonomous taxis to operate without safety drivers, and AI is generally less regulated in America than in the EU.

The EU’s Product Liability Directive will be worth following, as it may well be relied upon heavily as consumer robotics develops. China was one of the first countries to introduce AI regulations, and it also manufactures much of the world’s robotics equipment, so it will be interesting to see whether there’s a clash between the standards in its factories and the consumer laws in the jurisdictions of its customers.

Final thoughts

Most of the regulations currently applied to robotics were not specifically designed with robots in mind. As robots increase their presence in workplaces, on roads, in the skies and even in homes, governments around the world will need to start developing new laws – just as they are currently doing with AI – to grapple with this brave new world.

Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.

Photo, ICAPlants: Float glass unloading, CC BY-SA 3.0, via Wikimedia Commons.