ChatGPT: more questions

Following his recent article on ChatGPT’s implications for the legal world, Alex Heshmaty puts further questions to Dr Ilia Kolochenko.

Who owns the copyright of ChatGPT responses?

This rapidly evolving question remains largely unsettled across jurisdictions; in most cases, the answer is probably no one.

Is it possible for original copyright holders to prevent ChatGPT (or Bard) from using their content whilst “training”?

Yes, if your content is publicly accessible, you can amend your website’s terms of service to expressly prohibit collection and subsequent use of your content for any AI training purposes or related tasks. Just make sure that your terms of service will be enforceable in your jurisdiction, so talk to a lawyer.
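
As a technical complement to contractual terms, publishers can also signal an opt-out directly to crawlers. Below is a minimal robots.txt sketch; it assumes the crawlers honour the file (OpenAI’s GPTBot and Google’s Google-Extended are published examples, but the directives bind only well-behaved bots and are no substitute for enforceable terms):

```
# robots.txt – opt out of known AI training crawlers
# OpenAI's published crawler user agent
User-agent: GPTBot
Disallow: /

# Google's token for controlling AI training use of crawled content
User-agent: Google-Extended
Disallow: /
```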

What can a university do if it suspects students of using ChatGPT to generate assignments/Masters/PhD content?

In some disciplines, such as cybersecurity or law, lecturers can craft uncommon, highly specific assignments in such a manner that ChatGPT will either fail to provide anything meaningful or will visibly return incomplete and overly general answers. For disciplines where AI-generated content may fit, universities can introduce and promulgate a policy stating that use of AI-generated content amounts to plagiarism and may lead to revocation of a degree at any time. As of today, there are no tools that detect AI-generated content in an error-free manner, but they will likely emerge in the near future, so work submitted now may well be verified retrospectively.

What about a company which suspects a supplier has used ChatGPT to generate work?

Companies may expressly prohibit such practices in their agreements with suppliers and introduce contractual penalties (in those jurisdictions where these are permitted). In some jurisdictions, under certain circumstances, such practices may also be subject to criminal prosecution, for example under unfair competition laws.

What is the GPT API – and how can businesses minimise their potential liability (eg data protection) if they implement a GPT chatbot on their website?

The GPT API is OpenAI’s programming interface, which lets businesses embed GPT models in their own applications, such as a chatbot on a website. As of today, while AI is largely unregulated, well-drafted terms of service are probably a tenable safeguard. Once countries start implementing AI laws, however, thorough compliance will be required, as with the GDPR (i.e. you cannot contractually disclaim liability for unlawful data processing).
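
By way of illustration, here is a minimal sketch of the data-minimisation idea in Python, assuming the openai package’s pre-1.0 chat interface; the model name and regex patterns are illustrative assumptions, and a real deployment would need proper PII-detection tooling and legal review:

```python
# Minimal sketch of a website chatbot backend that redacts obvious PII
# before sending user input to the GPT API. The patterns are naive
# illustrations, not a complete data-protection solution.
import re
import openai

openai.api_key = "sk-..."  # set via an environment variable in practice

# Naive patterns for emails and phone-like numbers; real deployments
# would use a dedicated PII-detection library.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace recognisable PII with placeholders before it leaves the site."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def answer(user_message: str) -> str:
    """Send a redacted user message to the chat model and return the reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a customer support assistant."},
            {"role": "user", "content": redact(user_message)},
        ],
    )
    return response.choices[0].message["content"]
```

The point of the redaction step is that personal data never leaves the business’s own infrastructure, which narrows the data-protection exposure created by the third-party API call.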

Are there any specific cybersecurity threats they should be aware of?

Leakage of corporate data and trade secrets is probably the most serious risk as of today. Personal data disclosure is not the biggest issue in my opinion, as few companies will copy and paste large data sets of PII into chatbots. On the other hand, the PR team of an SEC-regulated company may, for example, naively paste in a highly confidential M&A press release which, if disclosed early, could lead to insider trading and other strongly undesirable outcomes.
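
A hedged sketch of the kind of guardrail a security team might place in front of outbound chatbot use: a simple keyword screen that blocks prompts carrying common confidentiality markers. The marker list and exception type are illustrative assumptions; real data-loss-prevention tooling is considerably more sophisticated:

```python
# Minimal sketch of a pre-submission guard that stops text carrying
# common confidentiality markers from being sent to an external chatbot.
# The marker list is an illustrative assumption, not a DLP product.

CONFIDENTIAL_MARKERS = (
    "confidential",
    "do not distribute",
    "material non-public information",
    "privileged",
)

class ConfidentialContentError(ValueError):
    """Raised when a prompt appears to contain protected material."""

def check_outbound_prompt(text: str) -> str:
    """Return the text unchanged, or raise if it carries a confidentiality marker."""
    lowered = text.lower()
    for marker in CONFIDENTIAL_MARKERS:
        if marker in lowered:
            raise ConfidentialContentError(
                f"Prompt blocked: contains marker {marker!r}"
            )
    return text

# Example: a naive copy-paste of an embargoed release is caught here,
# before any confidential text reaches the chatbot vendor.
check_outbound_prompt("Q3 results summary")                      # passes
# check_outbound_prompt("CONFIDENTIAL: draft M&A press release") # raises
```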

What should regulators be doing to reduce the risks posed by ChatGPT and other generative AI?

Regulation of training data is probably one of the most important factors. All major vendors must transparently, comprehensively and regularly disclose the sources of the data they use for AI training. This will prevent misappropriation of copyrighted or otherwise protected material, ensure transparency and thus a certain degree of predictability of AI, and permit authors of content to control the fruits of their intellectual labour. To be effective, the regulation should have retroactive effect, so that all currently existing AI models are subject to disclosure.

Do you think future versions of GPT could make lawyers redundant?

Not in the next decade. Moreover, as early as this year, lawyers will likely have even more work than before, stemming from litigation over flawed contracts and other legal documents created by AI. Replacing your lawyers with AI is a reliable recipe for litigation or even criminal prosecution. That said, lawyers will increasingly optimise and accelerate their work with AI-driven legal tools, which have in any case been available since 2014.

Dr Ilia Kolochenko is a Chief Architect & CEO at ImmuniWeb, a global application security company. He is also an Adjunct Professor of Cybersecurity Practice & Cyber Law at Capitol Technology University. Dr Kolochenko has an LLM in Information Technology Law and MSc in Criminal Justice (Cybersecurity & Cybercrime Investigations).

Alex Heshmaty is technology editor for the Newsletter. He runs Legal Words, a legal copywriting agency based in the Silicon Gorge. Email alex@legalwords.co.uk.
