At the time of writing, 100 million people around the world have used ChatGPT and more than 15 billion images have been created using text-to-image algorithms since last year.
Worryingly, 68% of employees have not informed their boss that they are using artificial intelligence-generated content (AIGC) when undertaking tasks such as writing emails and marketing/sales content, scheduling meetings, creating images, and analysing data.
This lack of employer oversight is concerning because the law surrounding AIGC is, to put it generously, unfit for purpose, especially regarding intellectual property (IP). This article, part one of a two-part series, provides a snapshot of the latest position on whether AIGC can be protected under copyright law.
In the UK, copyright is governed by the Copyright, Designs and Patents Act 1988 (CDPA 1988). Copyright seeks to protect the expression of creative ideas, not the ideas themselves (which can be protected via confidentiality). It provides a vehicle for the authors of original works to protect their creativity and to stop others from using it, without permission, for their own advantage.
The following categories of works are protected under UK copyright law:

- primary works: original literary, dramatic, musical, and artistic works; and
- secondary works: sound recordings, films, broadcasts, and the typographical arrangement of published editions.
Both primary and secondary works are protected under the CDPA 1988, though primary works receive stronger protection because they require greater creativity and originality.
In the case of literary, dramatic, musical, or artistic works, the author or creator of the work is usually the first owner of any associated copyright. The exception is where such a work is created by an employee in the course of their employment, in which case the employer is the copyright owner unless there is an agreement to the contrary. Where two or more authors have created a work, they may have joint ownership of the copyright if their contributions are indivisible, or co-authorship where separate contributions can be identified.
Under the CDPA 1988, computer-generated works are defined as those “generated by computer in circumstances such that there is no human author of the work”. The law therefore suggests that content generated by an artificial intelligence (AI) can be protected by copyright (more on this below).
Let us imagine that one of your employees logs on to ChatGPT and inputs the following:
“1000 words on why triple glazing is better than double glazing”
ChatGPT provides the employee with a 1000-word output. They lightly edit the piece, for example by adding a call to action, and then publish it on the organisation’s website as a blog post.
Who owns the copyright? There are five possibilities:

1. the employee;
2. the employer;
3. OpenAI;
4. ChatGPT itself; or
5. the creators of the original content used to train ChatGPT.
We can discount possibility one under the CDPA 1988, as the AIGC was made in the course of employment. Possibility four can also be dismissed because the CDPA 1988 does not recognise a non-human as the author or owner of a work, and, given the Government’s response to the 2021 AI consultation, this stance is unlikely to change in the near future. Possibility three cannot apply because, under OpenAI’s terms and conditions, “Subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output.”
This leaves possibilities two and five. The latter is currently being fought out in various lawsuits on both sides of the Atlantic.
Therefore, we are left with possibility two – the employer. The next challenge is to establish whether the AI-created article can fulfil the CDPA 1988 requirements of originality, authorship, ownership, and duration of copyright.
It is arguable that the current level of sophistication of AIGC does not allow for originality. Everything ‘created’ by AIGC already exists: the developers simply scraped pre-existing content from the internet (without permission, hence the lawsuits) and trained their models on this enormous body of pre-existing data. The employee cannot be the true ‘author’ of the article (thereby allowing them to pass ownership to their employer) because they did not create it. We have already established that ChatGPT cannot be the author or owner of the work, and OpenAI has assigned its rights to the person who inputs the request into ChatGPT. The duration of copyright also creates problems, because in many cases the length of protection is tied to the lifespan of the author – and, as you may have guessed, machines cannot die.
The answer to the question of who owns the copyright in an AIGC work is, under current copyright law, no one, because the current legislation does not cover AIGC. If the above paragraph reads as confusing and contradictory, that is because it reflects the current state of the law.
At present, AIGC lacks protection under the provisions of the CDPA 1988. Interestingly, United States District Court Judge Beryl A. Howell recently ruled that AI-generated artwork cannot be copyrighted under current US law. In her decision, Judge Howell wrote that copyright has never been granted to work that was “absent any guiding human hand”, adding that “human authorship is a bedrock requirement of copyright.”
Although AIGC does not benefit from copyright protection under the CDPA 1988 as it stands, this does not mean that the law cannot be amended to change the status quo. The Act is already contradictory, given that “the legal concept of originality is defined with reference to human authors and characteristics like personality, judgment, and skill”, yet originality must somehow be applied to computer-generated works.
Amending the Act to extend authorship to non-human authors would not only give end consumers some form of IP protection; it would also encourage investment in AI technology, because innovators would be able to rely on IP law to protect their creative efforts.
In part two of this series on AIGC and copyright, we will examine the risks of copyright infringement, both when training AI models and when using the outputs of AI tools.
To discuss any of the points raised in this article, please contact Marcus Rebuck or fill in the form below.
In an age where technology intertwines seamlessly with our daily lives, safeguarding personal data has become a paramount concern. Recently, Zoom, a prominent player in the virtual communication realm, found itself at the heart of a controversy that shed light on the delicate balance between AI advancement and customer data privacy. The company’s policy changes related to AI training on customer data sent shockwaves through the tech community, prompting a swift reversal and a renewed commitment to protecting user information.
In March 2023, Zoom introduced amendments to its terms and conditions which seemingly granted the company extensive latitude to use customer data for training artificial intelligence (AI) models. The amendments went largely unnoticed until early August; once they came to public attention, they set off a storm of public concern and scrutiny, with reports from various media outlets questioning the potential ramifications of the policy shift for user privacy and the ethics of data usage.
The uproar sparked by the policy changes compelled Zoom to respond swiftly and decisively. On 7 August 2023, the company published a blog post outlining its stance, which it subsequently edited on 11 August 2023. In the post, Zoom clarified that it had no intention of exercising the sweeping rights granted by the revised terms. The company went further, asserting its commitment to customer data privacy and its respect for user concerns.
The company’s subsequent policy update explicitly stated that AI models would not be trained using customer video, audio, or chats without first obtaining consent from the customers themselves. This commitment to obtaining explicit permission before utilising personal data for AI training marked a significant step towards safeguarding user information.
Zoom’s experience serves as a poignant reminder of the growing tension between technological advancement and individual privacy rights. The incident has broader implications for the tech industry as a whole. It highlights the importance of transparent communication, robust privacy policies, and a proactive approach to addressing user concerns in the face of evolving technologies.
Find out more from Ann-Maree Blake and our Data Protection and Privacy service.
Love or hate the idea (and many people fall into the latter category), AI language and text-to-image models have arrived. Now anyone can create prose, programs, and pictures in mere seconds simply by entering a few instructions on a website. You may be thinking “wonderful, no more dull report and contract writing”. However, there are serious concerns about the accuracy of the information ChatGPT produces. In addition, lawsuits by artists, engineers, and other creatives against the developers of AI language and art models are mounting. There are also potential legal issues for users of ChatGPT, such as copyright infringement and defamation.
Before exploring these legal challenges, it is useful to explain what AI language and art models are. For ease of reference, I will refer to the most well-known, ChatGPT, but the basic principles apply to most other chatbots, such as Meta’s Llama and Google’s Bard.
ChatGPT, which stands for “Chat Generative Pre-trained Transformer”, was created by OpenAI and launched in November 2022. It is regarded by many as the most significant technological development since the launch of the Apple iPhone in 2007. It can produce human-like responses to a vast range of questions and is often (but not always) accurate.
ChatGPT works by predicting the next word in a sequence of words. It is underpinned by an enormous language model, which OpenAI built by feeding it some 300 billion words systematically scraped from the internet in the form of books, articles, websites, and blog posts. ChatGPT used this data to learn how to predict the next word, and eventually became sufficiently trained to produce human-like responses to tasks given to it via the front-end chat interface.
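For readers curious about the mechanics, the sketch below illustrates the “predict the next word” idea in miniature. It is a deliberately simplified toy written in Python for illustration only: the sample text, function names, and frequency-counting approach are all invented for this example, and ChatGPT itself relies on a neural network with billions of learned parameters rather than simple word counts.

```python
# Toy illustration of next-word prediction (not how ChatGPT is built).
# This "bigram" model simply counts, in a tiny sample text, which word
# most often follows each other word, then predicts the most frequent one.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Record how often each word is followed by each possible next word."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent follower of `word` seen during training."""
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# Invented sample "training data" for the example in this article.
training_text = (
    "triple glazing keeps heat in and triple glazing reduces noise "
    "while double glazing keeps costs down"
)
model = train(training_text)
print(predict_next(model, "triple"))   # -> "glazing"
print(predict_next(model, "glazing"))  # -> "keeps"
```

In this toy, “triple” is always followed by “glazing” in the sample text, so that becomes the prediction; ChatGPT applies the same underlying idea at vastly greater scale and sophistication, which is why it needs such enormous quantities of training data.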
Preston Gralla provided a brilliant analogy for how AI language and text-to-image models operate in a recent article:
“To do its work, AI needs to constantly ingest data, lots of it. Think of it as the monster plant Audrey II in Little Shop of Horrors, constantly crying out ‘Feed me!’”
OpenAI and the other developers of AI text- and image-generating models did not seek permission to use third-party words and art to feed their creations. This fact forms the basis of several class actions currently underway around the world.
Legal claims against ChatGPT and other language and image-generating models fall into several categories:

- copyright infringement, arising from training models on scraped content without permission;
- breaches of data protection law, including the GDPR and the Data Protection Act 2018; and
- defamation, where a model generates false and damaging statements about a real person.
Although ChatGPT and its offshoots may seem like a productivity dream come true, caution must be taken when using them to produce written text and images for business purposes. There may be issues concerning copyright and breaches of the GDPR and the Data Protection Act 2018. In addition, as demonstrated by the defamation lawsuit brought by the mayor of Hepburn Shire, there may be serious legal consequences for organisations if ChatGPT makes mistakes or demonstrates bias, both of which it can do. To avoid potential claims, businesses and individuals should undertake a risk assessment before utilising ChatGPT for particular projects and establish robust due diligence checks on the accuracy and impartiality of the content it produces.
ChatGPT represents an exciting and unknown future for businesses and people alike. To discuss any of the points raised in this article, including undertaking risk assessments, please contact Ann-Maree Blake.