Optimising images for search

The way we search for images is evolving, and Google's recent announcements have made image search a major topic in the search engine optimisation community.

Once upon a time we would search for images primarily to copy and paste an appropriate image into our presentations or documents. We were using image search as a source of stock photography.

But today, searchers are using image search for more than just stock images. We use it as part of our buying process, to help us learn something new, or to achieve a goal.

Our intention when using image search has changed.

Deepfakes are a form of digital impersonation in which the face and voice of a person can be superimposed onto video and audio recordings of another individual. Much has happened from technological, social and legal perspectives since deepfakes first surfaced in 2017.

2019 has seen deepfakes go from niche novelty to mainstream phenomenon. As any moviegoer knows, computer-generated special effects are nothing new. But deepfakes have captured the public’s imagination because they can be created with startling accuracy using only a few “source” images.

Data misuse is often discussed alongside cybersecurity, within the overall context of data protection; but it is important to make the distinction between data which has been obtained legitimately but misused and data which has been collected illegally (eg without consent) or stolen (via computer hacking).

Data theft generally involves a cyberattack or harvesting of data by other means where data subjects are unaware of the collection or modification of their data; this type of cybercrime is largely covered by the Computer Misuse Act. Even where the data is provided knowingly and willingly, its collection may still be illegal if it breaches the Data Protection Act (DPA) or General Data Protection Regulation (GDPR).

The term “data misuse” is normally applied to personal data which has been initially willingly and legitimately provided by customers to a company, but is later used (either by the company or a third party) for purposes which are outside the scope of legitimate reasons for the initial data collection. This is what we will be discussing in this article.

One wet Sunday afternoon I was playing with an interface to OpenAI’s machine learning model, GPT-2, which was trained to predict the next word in a sentence and which can now generate articles of synthetic text based on a sentence provided to it. I typed, “Can AI own the copyright in the work that it has generated?” After a little pause (for thought, I would like to say), the AI provided some text which did not make the writing of this article redundant but which was nevertheless grammatically correct and very readable. It ended with a flourish, saying, “and that is a philosophical, not a legal, question.”
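The mechanism the author describes is simple to state: the model repeatedly predicts a plausible next word given the words so far, appending each prediction and predicting again. GPT-2 itself is far too large to reproduce here, but a toy bigram model (the tiny corpus below is invented for illustration) demonstrates the same generate-one-word-at-a-time loop:

```python
import random
from collections import defaultdict

# A tiny stand-in for a real training corpus (illustrative only).
corpus = ("can ai own the copyright in the work that it has generated "
          "that is a philosophical not a legal question").split()

# Record, for each word, which words follow it (and how often, via repeats).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt_word, length=8, seed=0):
    """Repeatedly sample a next word, as a language model does at a larger scale."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no known continuation: stop early
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

GPT-2 replaces the bigram lookup with a neural network conditioned on the whole preceding text, which is what lets it produce readable paragraphs rather than word salad, but the generation loop is the same shape.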

Amusing, but increasingly and sometimes disconcertingly “human”, AI is now part of our daily lives. It finishes our sentences (for example, Gmail’s SmartCompose), it finds the information we need on a myriad of topics (think of Alexa and other voice assistants) and it curates the ads and news we view online [Note 1]. It is also doing things which we think of as being the exclusive reserve of humans; it is creating art works and writing music.

Not only that, but it is also very “clever”. DeepMind’s neural network, AlphaGo Zero, taught itself the complex game of Go and after three days beat its predecessor, AlphaGo, which had itself beaten the 18-time world champion. Other AI models are helping scientists discover new drugs and develop innovations in clean energy.

This raises interesting questions for intellectual property (“IP”). When AI writes an article, paints a picture or creates some music, who owns the resulting copyright in the work? When AI develops a new idea which might be patentable, who owns the invention?

The Law Society, in its report Technology, Access to Justice and the Rule of Law, published in September 2019, defines the “Access to Justice Sector” as “Comprised of all organisations supplying access to justice services. It includes law firms, Not for Profits, individual practitioner barristers and solicitors, in-house legal teams, government bodies, academics, LawTech businesses and associations.” This is not very helpful, but nevertheless true. Everyone involved in the legal sector is involved in delivering (access to) justice in some way, though we might debate the merits.

The rise of the gig economy and zero hours contracts – often facilitated by the internet and apps such as Uber, Deliveroo and Limber – has been the subject of vigorous debate over recent years. Governments across the world have been grappling with the implications for employment law and wider society, balancing the boost to the economy and reduction in unemployment with restrictive practices, insecurity of work, low wages and imbalance of power.

Much has been written about the problems surrounding permanence of data once it has been uploaded to the internet – whether it’s a misjudged Twitter comment by a politician from 10 years ago, or a risqué photo from bacchanalian university days which emerges when someone is looking for a job. The difficulty of erasure touches on a broader philosophical principle – the right to be forgotten – but this term has been most commonly used to describe the stickiness of search results within Google. The specific legal question often asked in this regard is: does Google need to delete search results upon request by individuals?

Seeking one-stop-shop provision is a growing trend amongst law firms in their quest for improvements in convenience, efficiency, support, cost and security. By having their main software and outsourcing service needs met by one primary supplier, legal practices gain all these benefits and more. To clarify…

  • Convenience: There’s one contract and one point of contact which saves time and hassle for your time-starved lawyers and business managers.
  • Efficiency: It’s all about integration. Your core applications are synchronised, including your Microsoft Office suite, thereby streamlining the individual user experience.
  • Support: You’re able to build strong relationships with your assigned team members, be it your account manager, cashier or payroll clerk. Belonging to the same company, there’s consistency in the level of customer service you receive.
  • Cost: Whichever combination of products you choose, there’s one sales consultant compiling your fees, giving you complete visibility and allowing total flexibility as costs reflect how busy you are.
  • Security: By carefully selecting a supplier with robust safety measures in place, you’re trusting your confidential data and documents to a single reliable source, which significantly lessens the potential for security breaches.

Live facial recognition technology or automatic facial recognition (AFR) adds another dimension to CCTV monitoring and other surveillance methods. Using biometrics (certain physical and physiological features), the technology can map facial features to identify particular individuals by matching these with a database of known faces. This technology has been in use for some years by certain public and government agencies, but with the advent of AI and machine learning, it has become more prevalent in the private sector.
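The matching step described above typically reduces each face to a numeric feature vector (an embedding) and compares it against a database of known vectors, accepting the closest match only if it clears a similarity threshold. A minimal sketch of that comparison, with invented three-dimensional embeddings and names standing in for the output of a real face-recognition model (real systems use vectors with hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical database of known faces: identity -> embedding.
# Values are invented for illustration.
known_faces = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def identify(probe, threshold=0.95):
    """Return the best-matching identity, or None if nothing clears the threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in known_faces.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(identify([0.88, 0.12, 0.31]))  # close to "alice"'s embedding -> alice
```

The choice of threshold is where many of the legal and policy questions bite: set it low and the system produces more false identifications; set it high and it misses genuine matches.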

Back in 2006, Sheffield mathematician Clive Humby declared “data is the new oil” after reaping the benefits of helping to set up a supermarket loyalty card scheme. This was the same year that Facebook went mainstream, accelerating the pace of data harvesting and spawning an entire industry devoted to the collection, analysis and monetisation of large sets of personal data. Although many concerns were raised over the following years regarding the potential dangers of the big data revolution which ensued, arguably it wasn’t until the Cambridge Analytica scandal broke in 2018 that the public – and their parliamentary representatives – began to grasp the true gravity of the situation.

“This project contains risks of abuse of dominant position, risks to sovereignty and risks for consumers and for companies” (Bruno Le Maire, French Finance Minister)

In June, Facebook announced to much public fanfare that it intends to roll out a new digital currency called Libra for use in 2020, allowing its users across the globe to make online financial transactions.

It has quickly become clear that Facebook faces a significant battle to ensure that the Libra project does not become mired in regulatory and political red tape and, more damagingly still, is not able to launch across key national/regional markets. At the last count, the roll-call of those who have signalled a desire to subject Libra to careful scrutiny (as well as Mr Le Maire whose quote above makes it quite clear what he thinks) includes leaders of the G7 nations, the US Congress, the Committee on Payments and Market Infrastructure comprising representatives of 26 central banks, the European Commission and the Swiss Financial Market Supervisory Authority.

Why is Facebook Libra attracting so much critical scrutiny and what are the key issues that regulators and politicians are likely to focus on now that the project is under the public gaze?

Juriosity.com was launched in partnership with the Bar Council of England and Wales in 2018. In its current form, the platform provides a directory of practising barristers and other legal professionals and a self-publishing platform enabling barristers (and other approved contributors) to publish short articles on legal developments, cases they have been involved in (or wish to comment on) and any other topics they believe will be of interest to their clients and potential clients.

Phase two of the development of Juriosity will add direct access functionality and a marketplace for the purchase and sale of precedents, contracts, guides and other collateral.