Author archive

Alex Heshmaty

Alex Heshmaty is a legal copywriter and journalist with a particular interest in legal technology. He runs Legal Words, a legal copywriting agency based in Bristol.

Can technology improve our health and transform healthcare? A panoply of tech companies is working on a range of products and services which aim to answer both questions in the affirmative. The burgeoning industry which has been dubbed “medtech” has already led to some fascinating (and controversial) partnerships, perhaps most notably Google DeepMind being granted access to NHS patient data. It has been estimated that the medical device and technology sector could be worth around $500 billion to the global economy by 2021. But despite the potential for healthy growth, there are also many concerns associated with medtech, not least in terms of data protection. These are discussed below, including a section on how health data is being used in the fight against coronavirus.

The government recently indicated a willingness to diverge from EU regulations post-Brexit. Perhaps one of the more significant moves in this direction is the announcement by Universities and Science Minister Chris Skidmore that the UK will not implement the controversial EU Copyright Directive.

It was recently reported that the European Commission (EC) was considering a temporary ban on the use of facial recognition technology in public places. A draft white paper on artificial intelligence had reportedly stated that the “use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period (eg three to five years) during which a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed”.

“I read it on the internet” has become a phrase which often generates mockery and epitomises gullibility or naivety about the online world. In the 1950s, science fiction writer Theodore Sturgeon proclaimed that “ninety percent of everything is crud”, an observation which came to be known as Sturgeon’s Law. One can only speculate as to how Sturgeon might have adjusted the percentage had he lived in the age of cat memes and online trolls.

As part of an investigation by the European Commission into the effect of data collection practices by Facebook and Google upon competition, news publishers have been sent detailed questionnaires regarding data sharing agreements with Google. In particular, the questionnaires seek details from publishers on ways in which the search engine behemoth uses data collected from their websites to track user activity.

Social media companies have traditionally argued that they are merely internet platforms rather than publishers with the ensuing editorial responsibilities (despite the odd court case in which it has suited them to hold themselves out as publishers). But in the face of increasing public controversy about malicious content plaguing social media sites, the Silicon Valley giants are being forced to take action to minimise reputational damage. Facebook claimed that it had removed 1.5 million copies of the video of the New Zealand terrorist attack in the first 24 hours alone – which gives some idea of the scale of the challenge involved in content moderation.

Data misuse is often discussed alongside cybersecurity, within the overall context of data protection; but it is important to make the distinction between data which has been obtained legitimately but misused and data which has been collected illegally (eg without consent) or stolen (via computer hacking).

Data theft generally involves a cyberattack or harvesting of data by other means where data subjects are unaware of the collection or modification of their data; this type of cybercrime is largely covered by the Computer Misuse Act. Even where the data is provided knowingly and willingly, its collection may still be illegal if it breaches the Data Protection Act (DPA) or General Data Protection Regulation (GDPR).

The term “data misuse” is normally applied to personal data which was initially provided willingly and legitimately by customers to a company, but is later used (either by the company or a third party) for purposes which fall outside the scope of the original, legitimate reasons for collecting it. This is what we will be discussing in this article.

The rise of the gig economy and zero hours contracts – often facilitated by the internet and apps such as Uber, Deliveroo and Limber – has been the subject of vigorous debate over recent years. Governments across the world have been grappling with the implications for employment law and wider society, balancing the boost to the economy and reduction in unemployment with restrictive practices, insecurity of work, low wages and imbalance of power.

Much has been written about the problems surrounding the permanence of data once it has been uploaded to the internet – whether it’s a misjudged Twitter comment by a politician from 10 years ago, or a risqué photo from bacchanalian university days which emerges when someone is looking for a job. The difficulty of erasure touches on a broader philosophical principle – the right to be forgotten – but the term has most commonly been used to describe the persistence of search results in Google. The specific legal question often asked in this regard is: does Google need to delete search results upon request by individuals?

Back in 2006, Sheffield mathematician Clive Humby declared “data is the new oil” after reaping the benefits of helping to set up a supermarket loyalty card scheme. This was the same year that Facebook went mainstream, accelerating the pace of data harvesting and spawning an entire industry devoted to the collection, analysis and monetisation of large sets of personal data. Although many concerns were raised over the following years regarding the potential dangers of the big data revolution which ensued, arguably it wasn’t until the Cambridge Analytica scandal broke in 2018 that the public – and their parliamentary representatives – began to grasp the true gravity of the situation.

Governments around the world have grappled with the challenge of sufficiently taxing international companies – particularly peripatetic tech giants – which aggressively pursue policies of (perfectly legal) tax avoidance. One of the main reasons that so many Silicon Valley icons, including Google and Apple, base their European operations in Dublin is Ireland’s low rate of corporation tax compared with most other EU countries – together with the ability to further minimise their tax bills through tricks such as profit shifting. The G7 has been discussing how best to implement a fair method of international taxation – but, in the meantime, France has decided to go ahead and impose its own levy, to the consternation of the US.

In the wake of growing data protection concerns around the turn of the century, a framework dubbed “Safe Harbor” was agreed between the EU and the US in 2000, which essentially permitted transatlantic free-flow of personal data.

Towards the end of 2015, as a result of one of several legal challenges brought by prolific Austrian privacy campaigner Max Schrems, the European Court of Justice declared the Safe Harbor framework invalid on the grounds that it did not provide adequate safeguards for personal data.