Is NHS and Google’s data sharing a threat to patient confidentiality or a worthwhile risk?


The news that Google’s DeepMind had received a huge amount of personal data from the Royal Free London NHS Trust was met with alarm and concern by the public, and by the 1.6 million patients whose confidential information was shared without consent.

But why was this data handed over by the NHS Trust? The Royal Free has been working alongside DeepMind, Google’s artificial intelligence company, to develop an app named Streams, aimed at the prevention and treatment of Acute Kidney Injury (AKI). The Trust provided patient information so that DeepMind could test the app with real-life data.

AKI occurs when a sudden injury to the kidneys prevents the organs from working effectively. It is initially symptomless and is usually detected only through a rise in creatinine levels. With 13-18% of all patients admitted to hospital showing signs of AKI, rising to 30% of those in critical care, it is clearly a significant problem.

The severity of the condition and the sheer number of sufferers led to the Royal Free handing over confidential records to DeepMind, a decision that has prompted concern from the media, the public and the patients who may have been affected.

One of the primary concerns is the potential risk posed by private companies having access to confidential data. This is a deeply unpopular prospect with the public, and questions have been raised about what a company could do with this data and how it might affect these patients. A specialist from a medical negligence law firm explains that whilst DeepMind’s app could lead to more efficient diagnosis of AKI and improved patient outcomes, this does not justify the Royal Free providing the information without consent.

The seemingly increasing number of data breaches hitting newspaper headlines across the world is also a cause for worry. The prospect of a third party accessing this personal data is one of the major factors behind the negative response to the news that the Royal Free had shared confidential information with DeepMind. Although Google and its subsidiary companies will have data protection measures in place, breaches can occur and have occurred in the past. For example, the personal data of Google employees has previously been accessed, and user passwords have been hacked.

Of course, the NHS itself has recently been the victim of a major cyber-attack. The attack in early 2017 disrupted NHS systems and put the information of millions of patients across the UK at risk. With this incident still fresh in the minds of many, the public response to the Royal Free’s decision to share data with DeepMind has been even more vociferous.

The third major factor in the negative response from both the public and the media has been the concern regarding the relationship between Google and DeepMind. Although DeepMind has explicitly stated that this information has not been shared with Google, this message has not always been fully embraced by the media. As a result, many fear the repercussions of Google potentially having access to the intensely private information of over one million NHS patients. With the sheer amount of confidential data held on Google’s servers, questions have naturally arisen over what impact this may have on the long-term safety and interests of the patients who have been affected.

Understandably there have been significant concerns about this issue, but should the NHS Trust have handed over this data to DeepMind in the first place? And was it illegal to do so without the consent of the patients involved? According to Peter Wainman, an expert in technology and data law, because DeepMind merely acted as a processor of this data, any issues regarding data protection or patient confidentiality remain solely a matter for the Royal Free. More recently, the Information Commissioner’s Office (ICO), the independent authority that deals with information rights, ruled that the Royal Free failed to comply with the Data Protection Act when providing this information, and that patients had not been adequately informed that their data would be used in this way.

Lessons learnt?

Both the Royal Free and DeepMind made several errors of judgement during this process. Since the ICO ruling, however, both sides have been more transparent and have shown a willingness to change their approach to the project.

A future option could be to provide anonymised or “de-identified” data that cannot be traced back to patients, rather than actual patient records. This is the approach taken by Genomics England, the Department of Health-owned organisation that runs the 100,000 Genomes research project, where researchers receive only the raw data, not an individual’s identifying information. Importantly, the individuals in that genome research consented to allowing this unidentifiable data to be used.
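As a rough sketch of what de-identification can look like in practice (the field names, values and salt handling below are purely illustrative, not anything Genomics England or DeepMind actually uses), one common technique replaces a direct identifier with a salted one-way hash, so records can still be linked for research without revealing who the patient is:

```python
import hashlib

# Illustrative only: a secret salt held by the data controller makes the
# pseudonym irreversible for anyone without the salt.
SECRET_SALT = b"held-by-the-data-controller-only"

def pseudonymise(nhs_number: str) -> str:
    """Return an irreversible pseudonym for a patient identifier."""
    return hashlib.sha256(SECRET_SALT + nhs_number.encode()).hexdigest()

# A hypothetical raw record containing a direct identifier.
record = {"nhs_number": "9434765919", "creatinine_umol_l": 142}

# The de-identified record keeps the clinical value but drops the identifier.
de_identified = {
    "patient_ref": pseudonymise(record["nhs_number"]),
    "creatinine_umol_l": record["creatinine_umol_l"],
}
```

The clinical data a diagnostic app needs (here, the creatinine level) is preserved, while the identifier is replaced by a stable pseudonym, allowing repeat tests from the same patient to be matched without exposing their identity.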