Artificial intelligence (AI) is increasingly relevant to our working lives, our medical issues and our education system, providing opportunity and risk. In March 2023, world leaders in AI development warned governments that AI (in particular Artificial General Intelligence and Artificial Super Intelligence) could present an extinction level risk to humanity, comparable to nuclear war.

In April 2024, Elon Musk claimed that superhuman AI – smarter than anyone on Earth – could exist within a year. It is widely predicted that AI will wipe out millions of jobs while radically improving medical diagnosis and treatment. Against this backdrop, the issue of how AI affects authors’ and creators’ rights might be deemed rather trivial, but it is just one example of the many implications of AI with which regulators and legislators are scrambling to keep up.

While it is early days in terms of AI intellectual property litigation, a few cases in the UK and elsewhere are beginning to explore the relevant legal issues.

Can generative AI infringe traditional copyright works?

Generally speaking, copyright laws protect an author’s creative work from being reproduced (without permission) in whole or in part. The analysis of whether such a reproduction has taken place is considered from a qualitative perspective (not simply a line-by-line or image-by-image comparison). Generative AI models take a vast amount of creative material, analyse it and extrapolate new content from it, leaving potentially very few or no lines or images in common with the material they have “learned” from.

The legal liability of AI companies for these AI models can be analysed in two stages:

  1. at the input stage where copies may be made of original works in order to train the AI model, and
  2. at the output stage where AI-generated works may infringe the original copyright works used to train the model.

In the first stage, to the extent original copyright works have been used to train an AI model, it seems, on a straightforward interpretation of English copyright law, that copies may well have been made of those works, contrary to section 17(2) of the Copyright, Designs and Patents Act 1988 (the “CDPA 1988”), under which copying includes storing a copy in any medium by electronic means. There is no requirement that the copier knew the work was protected by copyright. The AI company may also have issued copies of a copyright work, or communicated it, to the public (section 16 of the CDPA 1988) by incorporating it into the model.

In the second (or output) stage, where the AI model is used to generate a work that may reproduce the whole or a substantial part of the copyright works at issue, this may also be contrary to the copying and communication to the public provisions of sections 16(2) and 17 of the CDPA 1988. Something may have been reproduced from the underlying original copyright works used to train the model, but proving that evidentially, and establishing that the scale of reproduction is sufficient to be actionable, may make enforcement in respect of the output stage harder.

AI and creative works

In the United States, an open letter from the Authors Guild to the CEOs of OpenAI, Alphabet, Meta, Stability AI, IBM, and Microsoft has called on them to obtain consent from, credit, and fairly compensate writers for the use of copyrighted materials in training AI. Signatories include authors Margaret Atwood, Jonathan Franzen and Celeste Ng. Comedian Sarah Silverman has commenced legal proceedings in California against OpenAI and Meta for allegedly using her books in training their ChatGPT and LLaMA AI systems without her permission. While it can be argued that anyone can read these writers’ books, learn their method or style and mimic them, it is the scale and depth of the learning by AI, and the subsequent commercialisation of that learning (potentially in competition with the original author), which is arguably inequitable.

The music industry is also complaining to governments and AI companies, insisting on consent, crediting and compensation for musicians whose musical copyright works are used to train AI. The International Confederation of Music Publishers (ICMP), which represents 90% of the world’s published music, has launched a new online resource where rightsholders can reserve their rights against unlicensed exploitation of their works.

In respect of artistic copyright works, similar litigation has been launched in the UK, although it has not yet come to trial. In May 2023, Getty Images asked the High Court to prevent Stability AI from selling its Stable Diffusion system in Britain, saying that the London-based company had illegally used the agency’s stock library to train the image creation tool by scraping a large number of original copyright works from the Getty Images websites. The case includes database rights, trade mark and passing off claims, as well as copyright claims. At the end of 2023, Getty Images successfully defended a summary judgment and/or strike out application by Stability AI, which was based largely on a jurisdiction point: Stability AI argued that the copyright works at issue were never downloaded onto a server in the UK, and that the claims of copyright and database rights infringement should therefore be summarily decided or struck out. The judge disagreed and concluded that the Claimant had a real prospect of success at trial on the point.

It is hard to see what defences might be available in the UK to the commercial AI companies in respect of the input stage. The defence under section 29A of the CDPA 1988, which permits the making of copies for text and data analysis for non-commercial research, is unlikely to apply because the use is commercial. Likewise, for the defence of transient copying under section 28A, the copies need to have no independent economic significance, which seems unlikely because the copies form the very basis of the economic significance of the model. None of the fair dealing defences would seem to apply to an AI model either, although this would depend on the facts of the particular case.

Ongoing litigation is focusing on the AI companies rather than users of AI models, but a user may also be potentially liable for secondary infringement under section 23 of the CDPA 1988 by possessing or distributing, in the course of business, an article which they had reason to believe was an infringing copy of a copyright work. Secondary infringement (albeit in respect of a group company of the creator of the model rather than a third-party user of it) was discussed in the interim decision in the Getty Images case, in particular whether sections 22 and 23 of the CDPA 1988 are limited to dealings in “articles” which are tangible things, or whether they may also encompass dealings in intangible things (such as making software available on a website). The judge refused to enter summary judgment on the secondary infringement claims, concluding that the claimant was not bound to fail on the point and that it would be better decided at trial.

Likewise, if a user uses the AI platform either online or via a downloaded copy to generate output, that output may potentially reproduce parts of underlying copyright works used to train the model, although this may be harder to prove or may not amount to sufficient reproduction to constitute an infringement. Rights holders are sensibly going after the AI companies first, but use of AI models is not without litigation risk to companies and individuals.

Governments are running to catch up with the implications of AI. In 2023, following the Vallance Review, the UK Intellectual Property Office (IPO) attempted to devise a voluntary code of conduct for AI companies, but it failed to find sufficient agreement amongst stakeholders. In February 2024, the UK Government announced that it had not been possible to draw up the code and that there would be a further period of discussion with stakeholders. The Government is also looking to what the EU does in this sphere. The EU AI Act (recently approved by the European Parliament) requires disclosure of the data sets used to train foundation models, which could go some way to helping copyright owners take action against AI companies for infringing their rights.

In practice, existing copyright laws should cover the majority of these situations, once we have some further judicial guidance from the various test cases that will pass through the courts.

If you have questions or concerns about AI and copyright infringement, please contact Lucy Harrold.

 


This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.