Chatbots have been around since the 1960s, and programmers have been trying to pass the Turing test ever since, creating increasingly sophisticated natural language processing (NLP) software. A recent episode, in which a Google engineer was sacked after claiming that the company's conversational language model, LaMDA, was sentient, perhaps demonstrates the leaps and bounds NLP has made over the past few years. However, it is only with the public release of a new chatbot called ChatGPT that the potential of NLP has been taken seriously by the wider public.
What is ChatGPT and what are its implications for the legal sector?
ChatGPT is a conversational AI tool released by OpenAI, designed to answer questions posed in natural language and even hold a dialogue with users. It has been trained on a multitude of online data, from Wikipedia to Reddit, although its training data only extends to 2021. As well as answering general queries, and therefore posing a potential threat to Google, it can also write bespoke articles on any topic, which is sparking existential debates amongst academics and professional writers.
IP and technology lawyers should start thinking about the potential impact of burgeoning AI content generators such as ChatGPT on their clients, particularly with regard to potential copyright breaches facilitated by the software. At the same time, innovative legal minds may be considering the potential of harnessing ChatGPT as a form of legal technology to make efficiency gains.
Many questions have already been raised about the copyright infringement risks of using ChatGPT and other creative content generation tools. It is a topic which has been covered previously in this newsletter by Shireen Smith who noted that:
“From a copyright law perspective, the initial use of copyright works by the platform for machine learning is infringing unless the platform used licensed or out-of-copyright works or could rely on a copyright defence, such as use for research purposes.”
Since ChatGPT generates its responses using vast amounts of existing online data, it is likely that this includes copyright material. Although OpenAI may be able to rely on the research-purposes exception, users who generate responses in ChatGPT for commercial use could arguably be infringing copyright. But several questions make this a very grey area:
- Is it possible to identify the source/s used in a response?
- If ChatGPT generates original content rather than reproducing material verbatim, does its output infringe copyright at all?
- If there is a breach of copyright, is OpenAI or the user considered the infringing party?
We will need to await the outcome of early litigation in the area of generative AI – such as a copyright action brought by Getty Images against the developers of Stable Diffusion (which generates images) and a class action lawsuit against GitHub Copilot (which generates software code) – to find out how courts are likely to apply copyright law to this emerging technology.
Although copyright is the most obvious form of IP touched upon by ChatGPT, its potential to generate company names or slogans could also impinge upon trade marks.
Whenever the prospect of computers replacing lawyers is raised, technology vendors offer reassurances that their products will only increase the efficiency of law firms rather than make staff redundant. However, automation tools and voice recognition software are already leading to a reduction in legal secretaries and support staff. Given that ChatGPT appears able to pass legal exams, including the bar exam, could it actually start providing basic legal advice?
In terms of consumer rights law, at least in the realm of low-value claims, chatbots such as DoNotPay already routinely enable individuals to pursue legal claims. DoNotPay's founder, Joshua Browder, is even deploying generative AI to help customers negotiate contracts with corporate chatbots. This type of legal work is, however, highly process-driven, which is precisely why it is feasible to program AI to handle such basic requests from clients.
When it comes to bespoke legal advice, although ChatGPT will generally provide confident-sounding responses to legal queries, it is no more trustworthy than running a Google search and relying on the first result. It will often make assertions which are incorrect and sometimes nonsensical – a phenomenon known as AI hallucination. It simply cannot be relied upon to provide up-to-date or meaningful legal advice, at least in its current form. Having said that, if a legal publisher such as LexisNexis were to grant it access to its own resources, that could be a game changer.
So is there any meaningful way that lawyers can currently use ChatGPT? I put that question to the chatbot and it claimed to be able to help lawyers with document generation, legal research and automating client interaction. As discussed, its research ability is only as good as a Google search, and in terms of document generation it will often just use freely available legal document templates which may be out of date, only relate to certain jurisdictions or be of low quality. As for automating client interaction, chatbots are already used in the legal sector (eg Billy Bot) but they are essentially signposting tools.
Lawyers who need to prepare blogs or marketing materials for their websites could potentially use ChatGPT to create a skeleton outline – which may be helpful if they are experiencing writer's block. But they should meticulously check any factual claims the AI outputs.
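For firms that want to experiment with this kind of drafting assistance programmatically rather than through the web interface, OpenAI exposes the same models via an API. The sketch below is a minimal illustration only, assuming the `openai` Python package and an API key in the `OPENAI_API_KEY` environment variable; the `build_outline_messages` and `generate_outline` helpers are hypothetical names used here for illustration, and any output would still need the human fact-checking described above.

```python
import os
from typing import Dict, List


def build_outline_messages(topic: str) -> List[Dict[str, str]]:
    """Build a chat prompt asking for a skeleton blog outline.

    Hypothetical helper for illustration. The system message constrains
    the model to an outline only, but its reply must still be checked
    by a human before anything factual is published.
    """
    return [
        {
            "role": "system",
            "content": (
                "You are a drafting assistant for a law firm blog. "
                "Produce a skeleton outline only: headings and one-line "
                "bullet points, with no invented citations or case names."
            ),
        },
        {
            "role": "user",
            "content": f"Draft a skeleton outline for a blog post on: {topic}",
        },
    ]


def generate_outline(topic: str) -> str:
    """Send the prompt to the OpenAI API and return the model's reply.

    Requires `pip install openai` and OPENAI_API_KEY in the environment.
    """
    # Imported here so build_outline_messages works without the package.
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=build_outline_messages(topic),
    )
    return resp.choices[0].message.content


# Example usage (needs a valid API key):
# print(generate_outline("Copyright risks of generative AI content tools"))
```

The point of the wrapper is that the constraint ("outline only, no invented citations") lives in the prompt, not in the model – so the output remains a starting point for a human author, not a finished article.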
I asked ChatGPT whether it will replace lawyers, and here is what it said:
“No, ChatGPT is not likely to replace lawyers. While it can provide answers to simple legal questions and assist with legal research, ChatGPT is not a substitute for a trained and licensed attorney. The legal profession involves much more than just answering questions and researching laws. Lawyers play a crucial role in interpreting and applying the law, representing clients in court, negotiating settlements, and providing legal advice based on their client’s unique circumstances. These tasks require a level of expertise, judgment, and human interaction that goes beyond what can be provided by an AI model like ChatGPT.”
But even if ChatGPT, in its current form, may not be all that helpful for lawyers, other similar conversational AI tools are coming to market tailored for the legal sector. For example, a product called LawDroid Copilot is being touted as a virtual legal assistant for lawyers – and it’s worth noting that one of the testimonials states: “I was going to hire a paralegal, but after trying out LawDroid Copilot, I now have the help I need.” So although lawyers may be safe for now, AI is arguably already nipping at the heels of other legal professionals.
A threat to academia
Since ChatGPT can write essays on any topic within seconds (or even pass an MBA exam), teachers and lecturers have been grappling with the problem of students getting the AI to produce their homework or complete assignments and coursework.
Lawyers advising universities and schools will often need to tackle policies and contracts which reference plagiarism and cheating. The only directly relevant piece of UK legislation is the Skills and Post-16 Education Act 2022, which bans contract cheating (ie where a third party such as an "essay mill" produces work for a student for a fee, which the student then passes off as their own). However, where an academic employed by a university has plagiarised work, this may constitute a sackable offence and therefore engages employment law. Meanwhile, pupils who cheat at school may need to be excluded, in which case education law comes into play. Even outside the academic world, cheating in professional exams can result in substantial penalties.
Academic lawyers will first need to work with their clients to understand the application of ChatGPT and other AI generative technologies to the concepts of plagiarism and cheating, before updating policies and procedures.