Getting to know deepfakes

Deepfakes are a form of digital impersonation in which one person’s face and voice can be superimposed onto video and audio recordings of another individual. Much has happened from technological, social and legal perspectives since deepfakes first surfaced in 2017.

Deepfakes are now mainstream

2019 has seen deepfakes go from niche novelty to mainstream phenomenon. As any moviegoer knows, computer-generated special effects are nothing new. But deepfakes have captured the public’s imagination because they can be created with startling accuracy using only a few “source” images.

Deepfakes were first used in 2017 as a tool to digitally superimpose the faces of famous actresses into pornographic videos. Over the last two years, however, the deepfake trend has spread beyond the world of celebrities and adult entertainment. Recent research from Dutch technology company DeepTrace reports that by the end of 2018 there were just under 8,000 deepfake videos available online. As of September 2019, that number had nearly doubled to more than 14,600.

Deepfakes are often used for comedic purposes, and many purport to show actors or other celebrities appearing in funny scenarios: Nicolas Cage and Tom Cruise have made frequent appearances in a wide range of absurd film clips. By September 2019, the most downloaded mobile app in China’s iOS store was ZAO, which allows users to swap their faces with film or TV characters using just one selfie as the source image.

Creation has become easier

The deepfake creation process has become easier. Deepfakes are created using deep learning, a subset of artificial intelligence that relies on artificial neural networks. The earliest users of the technology were the computer scientists and researchers who developed the underlying methods in 2012. Five years later, the code used to generate deepfakes began to appear on public repositories and platforms, including GitHub and Reddit.
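To make the mechanics slightly more concrete, the sketch below illustrates the shared-encoder, two-decoder autoencoder design popularised by early face-swap tools: a single encoder learns features common to both faces, while each decoder learns to reconstruct one specific person, so that encoding person A and decoding with person B’s decoder produces the swap. This is a simplified, hypothetical illustration in PyTorch; the layer sizes and placeholder input are assumptions made for readability, not the code of any tool mentioned in this article.

```python
# Minimal sketch of the shared-encoder/dual-decoder autoencoder behind
# early face-swap deepfakes. All shapes and sizes are illustrative
# assumptions, not any specific published implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for ONE identity from the shared latent code."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One encoder is trained on faces of BOTH people; each decoder learns to
# rebuild only its own person. Swapping happens at inference time: encode
# person A, then decode with person B's decoder.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_a = torch.rand(1, 3, 64, 64)      # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))   # face A rendered as person B
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```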

Today, only limited technical know-how is required to create a deepfake, which can be done through a variety of methods. In addition to mobile apps like ZAO, mentioned above, deepfakes can be made through graphical user interfaces or through service portals whereby users upload photos to an online platform. For those wishing to outsource the process entirely, creators can be hired through gig economy marketplaces such as Fiverr. Furthermore, hundreds of YouTube and blog post tutorials are available to help novices develop their skills.

The risks are becoming clearer

People have started to understand the risks that deepfakes pose. In May 2019, for example, a doctored video of American congresswoman Nancy Pelosi went viral on social media, appearing to show Rep Pelosi slurring her words in a drunken manner. Although not strictly a deepfake, because the methods used to distort the footage were more rudimentary, it was an example of audiovisual manipulation which, together with genuine deepfakes of politicians, attracted wide media coverage. Accordingly, it is now widely acknowledged that deepfakes can be used for nefarious purposes. These include the manipulation of civil discourse, interference with elections and national security, the fraudulent submission of documentary evidence, and the erosion of trust in journalism and public institutions.

Deepfakes can also be used to harm individuals as tools to intimidate, extort, humiliate or defame. Many deepfake creation communities and forums are hosted on deepfake pornography websites as well as on platforms including Reddit, 4chan, 8chan and Voat; 4chan and 8chan in particular are infamous for hosting extremist content and even promoting illegal activity. In one recent case, an energy executive was conned into handing over £200,000 to fraudsters who used a deepfaked recording of his boss’s voice.

This is concerning for non-celebrities too, because the social media profile that the average person establishes online provides easy access to photographs and videos, all of which may be used as source material for a deepfake.

Legislators are responding

New, specific laws have been proposed and passed to address deepfakes. Many existing laws can potentially be applied to problematic deepfakes, including those concerning fraud, privacy, defamation, stalking and electoral law. But are these legal instruments sufficient to address deepfake risks, or are new laws needed? Given the risks that deepfakes pose, some lawmakers assert that new, specific regulations are needed to curtail the proliferation of the technology. Two broad categories of law can potentially be used to prevent or mitigate the unwanted proliferation of deepfakes: sexual harassment and privacy laws on the one hand, and disinformation and electoral laws on the other.

Sexual harassment and privacy laws

Legislatures in the United States and the United Kingdom have for several years sought to address online sexual harassment, with numerous jurisdictions criminalising so-called “revenge porn”. In July 2019, the US state of Virginia enacted House Bill No 2678, the first law to address the dissemination or sale of “falsely created videographic or still images”. The law updates Virginia’s existing revenge porn provisions, making it a misdemeanour to share deepfake photos or videos of a sexual nature without the subject’s consent. Similarly, the state of New York has proposed an amendment to its existing civil privacy laws: A08155, currently pending in the State Senate, would recognise “digital replicas” as protectable aspects of an individual’s persona (or personality, as the concept is known in Europe).

In the United Kingdom, the Law Commission is now conducting a review of the existing criminal law with respect to taking, making and sharing intimate images without consent. This specifically includes potential revisions to the revenge pornography provisions under section 33 of the Criminal Justice and Courts Act 2015, voyeurism offences under section 67 of the Sexual Offences Act 2003, the Voyeurism (Offences) Act 2019, exposure under section 66 of the Sexual Offences Act 2003, as well as the common law offence of outraging public decency.

Media disinformation and electoral laws

The second broad category of potential legislation concerns media disinformation and electoral law. This is of particular importance in the United States, given that Americans are about to enter peak campaigning for their 2020 Presidential election cycle.

In September 2019, Texas became the first state to pass a law regarding political deepfakes. SB751 amended existing electoral law to create a misdemeanour offence of making deepfake videos “with intent to injure a candidate or influence the result of an election”, where the video is published and distributed within 30 days of an election. California quickly followed suit in October with the passage of AB-730, which criminalises the distribution of audio or video that gives a false, damaging impression of a politician’s words or actions within 60 days of an election.

The US Congress has also proposed a federal law, which is currently before various subcommittees for review. The aptly titled Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability (“DEEPFAKES Accountability”) Act of 2019 seeks to combat the spread of disinformation through restrictions on deepfake video-alteration technology. As currently drafted, it would establish (amongst other things) a right for victims of synthetic media to sue the creators and/or otherwise “vindicate their reputations” in court.

The platforms are starting to take action

In addition (or as an alternative) to legal safeguards, technology companies have started to take action to combat deepfake creation and proliferation. While it may be natural to assume that legislation could shield us from harmful deepfakes, the reality is far more complex.

Firstly, the laws mentioned above may not withstand judicial scrutiny from free speech and civil liberties perspectives. Secondly, while a problematic deepfake may be blocked in one jurisdiction, a lack of harmonisation at international level may allow it to spread easily to other parts of the world. Thirdly, and perhaps most importantly, because bad actors can easily remain anonymous online, the prospect of viable enforcement remains doubtful at best.

To fill the regulatory void, technology companies have stepped in during 2019 to take corrective action. Twitter, Reddit and even the world’s largest pornography website have officially banned deepfake videos from their platforms. Facebook and Microsoft have joined forces to launch the Deepfake Detection Challenge, with $10 million (£7.8 million) set aside for research and prizes. Likewise, Google has released a dataset of 3,000 deepfake videos in an effort to support researchers working on detection tools.
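To give a flavour of what that detection research involves: detection tools are typically machine-learning classifiers trained to label footage as real or fake. The sketch below is a deliberately simplified, hypothetical example in PyTorch; the tiny model, the 64x64 inputs and the random stand-in data are all illustrative assumptions, and real challenge entries are far more sophisticated.

```python
# Hypothetical, minimal frame-level deepfake detector: a small CNN that
# labels individual video frames as real (0) or fake (1). Every name and
# size here is an illustrative assumption, not any actual challenge entry.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 64 -> 32
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32 -> 16
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                    # global average pooling
    nn.Flatten(),
    nn.Linear(32, 1),                           # single real/fake logit
)

frames = torch.rand(8, 3, 64, 64)               # stand-in batch of face crops
labels = torch.randint(0, 2, (8, 1)).float()    # 0 = real, 1 = fake

loss = nn.BCEWithLogitsLoss()(detector(frames), labels)
loss.backward()                                 # one illustrative training step
print(f"loss: {loss.item():.3f}")
```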

It remains to be seen if the legal or technological progress made in 2019 will be enough to mitigate the risks associated with deepfakes. Until then, public awareness and internet literacy may be the most useful tools.

Kelsey Farish is a technology and intellectual property solicitor at DAC Beachcroft LLP in London. Email kfarish@dacbeachcroft.com. Twitter @KelseyFarish.

Image: Detecting deepfakes, CC BY-ND Siwei Lyu.
