Author archive

Alex Heshmaty

Alex Heshmaty is a legal copywriter and journalist with a particular interest in legal technology. He runs Legal Words, a legal copywriting agency based in Bristol.

Data misuse is often discussed alongside cybersecurity, within the overall context of data protection; but it is important to make the distinction between data which has been obtained legitimately but misused and data which has been collected illegally (eg without consent) or stolen (via computer hacking).

Data theft generally involves a cyberattack or harvesting of data by other means where data subjects are unaware of the collection or modification of their data; this type of cybercrime is largely covered by the Computer Misuse Act. Even where the data is provided knowingly and willingly, its collection may still be illegal if it breaches the Data Protection Act (DPA) or General Data Protection Regulation (GDPR).

The term “data misuse” is normally applied to personal data which was initially provided willingly and legitimately by customers to a company, but which is later used (either by the company or a third party) for purposes outside the scope of the legitimate reasons for the original data collection. This is what we will be discussing in this article.

The rise of the gig economy and zero hours contracts – often facilitated by the internet and apps such as Uber, Deliveroo and Limber – has been the subject of vigorous debate over recent years. Governments across the world have been grappling with the implications for employment law and wider society, balancing the boost to the economy and reduction in unemployment with restrictive practices, insecurity of work, low wages and imbalance of power.

Much has been written about the problems surrounding permanence of data once it has been uploaded to the internet – whether it’s a misjudged Twitter comment by a politician from 10 years ago, or a risqué photo from bacchanalian university days which emerges when someone is looking for a job. The difficulty of erasure impinges on a broader philosophical principle – the right to be forgotten – but this term has been most commonly used to describe the stickiness of search results within Google. The specific legal question often asked in this regard is: does Google need to delete search results upon request by individuals?

Back in 2006, Sheffield mathematician Clive Humby declared “data is the new oil” after reaping the benefits of helping to set up a supermarket loyalty card scheme. This was the same year that Facebook went mainstream, accelerating the pace of data harvesting and spawning an entire industry devoted to the collection, analysis and monetisation of large sets of personal data. Although many concerns were raised over the following years regarding the potential dangers of the big data revolution which ensued, arguably it wasn’t until the Cambridge Analytica scandal broke in 2018 that the public – and their parliamentary representatives – began to grasp the true gravity of the situation.

Governments around the world have grappled with the challenge of sufficiently taxing international companies – particularly peripatetic tech giants – which aggressively pursue policies of (perfectly legal) tax avoidance. One of the main reasons that so many Silicon Valley icons (including Google and Apple) base their European operations in Dublin is Ireland’s low rate of corporation tax compared with most other EU countries – along with the ability to minimise their tax bills further through tricks such as profit shifting. The G7 have been discussing the best way to implement a fair method of international taxation – but, in the meantime, France has decided to go ahead and impose its own levy, to the consternation of the US.

In the wake of growing data protection concerns around the turn of the century, a framework dubbed “Safe Harbor” was agreed between the EU and the US in 2000, which essentially permitted transatlantic free-flow of personal data.

Towards the end of 2015, as a result of one of several legal challenges brought by prolific Austrian privacy campaigner Max Schrems, the European Court of Justice declared the Safe Harbor framework invalid on the grounds that it did not provide adequate safeguards for personal data.

One of the key changes brought about by the General Data Protection Regulation (GDPR), which came into force on 25 May 2018, was a substantial increase in the maximum fines for data protection breaches, to the higher of €20 million or 4% of global annual turnover. Breaches which occurred before this date were subject to a maximum fine of £500,000 under the Data Protection Act 1998 – a limit invoked against Facebook for its part in the Cambridge Analytica scandal. Many commentators pointed out that half a million pounds was “chump change” for the likes of the tech giants. The same couldn’t be said of the £183 million fine which the Information Commissioner’s Office (ICO) proposed to levy on British Airways (BA) less than a year later.

Information overload is defined by Wikipedia as “the difficulty in understanding an issue and effectively making decisions when one has too much information about that issue” – although, ironically, it offers alternative definitions based on multiple sources!

Airbnb has been a phenomenal success since it was launched just over a decade ago, arguably creating more choice for travellers seeking accommodation while providing a user-friendly platform which allows homeowners to rent out a spare room easily. However, it has also faced mounting criticism from various quarters: city officials claim that investors snap up rental properties to add to their Airbnb portfolios, making it more difficult for local residents to find homes to rent; neighbours often complain that Airbnb properties are continuously let out to noisy tourists in residential areas; and hoteliers and regulators argue that Airbnb simply offers a way for unscrupulous businesses to act as hotels whilst avoiding the associated overheads and regulation.

Internet regulation has been very much in the public eye lately, particularly following the Cambridge Analytica scandal, and the government recently published its Online Harms White Paper which seeks to address some of the concerns surrounding the ‘Wild West Web’. One of the key issues regularly raised is the protection of children from exposure to online pornography.

The debate around workplace monitoring of employees has rumbled on for many years now; employers argue that they are entitled to analyse how their staff spend their working day, whilst employees claim it intrudes on their privacy. In 2017 the European Court of Human Rights held, in the case of Bărbulescu v Romania, that the actions of an employer in monitoring the instant messaging accounts of an employee breached Article 8 of the European Convention on Human Rights. But this hasn’t dissuaded some businesses from adopting ever more extreme forms of surveillance; employee microchipping has already happened in the UK and Amazon has filed patent applications for a warehouse productivity bracelet.

The Government published its Online Harms White Paper on 8 April 2019. The consultation, which is open until 1 July 2019, sets out proposals to reduce illegal and harmful online activity. The harms in scope include:

  • harassment and cyberbullying;
  • hate crime and incitement of violence;
  • terrorism, extremist and violent content;
  • revenge/extreme porn, child sexual exploitation and “sexting” by under-18s;
  • organised immigration crime and modern slavery;
  • encouraging or assisting suicide, self-harm and FGM;
  • coercive behaviour and intimidation;
  • sale of illegal goods (weapons, illegal drugs etc) and illegal uploading of content from prisons;
  • disinformation (fake news); and
  • underage exposure to pornography (this is separately being tackled by the heavily delayed age check scheme, now due to come into force on 15 July 2019).