The UK’s data protection landscape is continuing to evolve. The Data (Use and Access) Bill (the “DUA Bill”) received Royal Assent last month and has been enacted as the Data (Use and Access) Act 2025 (the “DUA Act”). The DUA Act aims to complement existing UK data protection laws by enhancing transparency, promoting responsible data sharing, and reinforcing the protection of individual rights.
This article explores some of the key changes being introduced by the DUA Act and outlines practical steps organisations can take to prepare for the new legislation and ensure ongoing compliance.
One of the most hotly contested issues that stalled the DUA Bill’s progress was the treatment of artificial intelligence (“AI”). The emergence of AI models raised significant concerns, particularly around copyrighted materials being used by developers to train their Large Language Models.
The House of Lords pushed for amendments to the DUA Bill to include stricter provisions on the use of copyrighted content, advocating for mandatory transparency requirements. Some of the UK’s leading music artists, including Sir Elton John, Sir Paul McCartney and Dua Lipa, spoke out in support of these changes. (Dua Lipa’s high-profile involvement even led to the legislation being jokingly referred to as the “DUA Lipa Bill”). These artists warned that, without such safeguards, tech companies could exploit intellectual property and be given free rein to use content without having to compensate the creators.
However, the House of Lords’ efforts were unsuccessful. The Government ultimately resisted the proposed changes, arguing that the DUA Act was not the appropriate legislative vehicle to address such complex and evolving issues. Eventually, a compromise was reached: a government report on AI and copyright, exploring possible changes and enforcement measures, is due to be published later this year.
The DUA Act establishes a more robust legal framework for data access and sharing. It updates and reforms existing UK data protection and e-privacy laws and includes broader data policy initiatives aimed at encouraging the use of data in the public interest, while maintaining safeguards for individuals’ rights to privacy.
While some critics argue the DUA Act simply reinforces and codifies existing legislation, the cumulative effect of the changes could be significant from a compliance and operational perspective.
The new Information Commission will be issuing guidance on the DUA Act, but this is not expected imminently and may not arrive until next year. As we await further details and secondary legislation, organisations should take this opportunity to proactively review and assess their existing documentation and policies to ensure a smooth transition.
The DUA Act is part of a broader trend towards a more flexible and accountability-driven approach to how data is being governed in the UK. While some key aspects, such as AI and copyright, are subject to secondary legislation and further guidance to be published, the direction the Government is taking towards modernisation is clear.
Organisations that begin reviewing and assessing their processes and provisions now will be better placed to ensure legal compliance and avoid regulatory risk in the future.
Our experienced Data Protection & Privacy team is available to provide further advice or answer any questions you may have about the DUA Act. Please do not hesitate to get in touch.
It is because of AI’s potential to change the world – for both good and bad – that many feel it needs to be regulated. In May of this year, Google CEO Sundar Pichai said that “AI is too important not to regulate, and too important not to regulate well.”
The explosion of AI, and the hype around it, has led to an increasing degree of regulatory scrutiny. Around the world, we are seeing different approaches being taken to the regulation of AI, but also some commonalities between them.
Watch Ann-Maree Blake’s keynote presentation at the Consulegis conference in Cardiff.
Both the European Union and the United Kingdom have stepped up to the AI regulation plate with enthusiasm but have taken different approaches:
The EU has put forth a broad and prescriptive proposal in the AI Act which aims to regulate AI by adopting a risk-based approach that increases the compliance obligations depending on the specific use case. The UK, in turn, has decided to abstain from new legislation for the time being, relying instead on existing regulations and regulators with an AI-specific overlay.
In the EU, the thinking is about “empowering people with a new generation of technologies”, whereas in the UK it is about “driving growth and unlocking innovation.”
The EU is looking to put in place what is probably the most ambitious framework in the world in terms of AI regulation.
It is very much aiming to be a global leader in AI regulation, in much the same way as it has with data protection via the GDPR.
What the EU AI Act does is look at the risk of AI systems and deal with those risks in a practical way, categorising AI systems into four levels of risk: unacceptable, high, limited, and minimal or no risk.
If it is passed, it will require companies to assess AI system risks before those systems are put into use.
Companies will be required to obtain permits for high-risk AI systems and to provide transparency and accountability for those systems.
The UK is not proposing to create umbrella legislation. Instead, its approach, set out in a white paper published in March 2023, articulates the ambition of making the UK “the best place in the world to build, test and use AI technology”.
Broadly speaking, the approach in the White Paper, which establishes a pro-innovation approach to AI regulation, rests on two main elements: firstly, AI principles that existing regulators (such as the ICO and the Financial Conduct Authority (FCA)) will be asked to implement; and secondly, a set of new ‘central functions’ to support this work.
The regulation of AI is a complex area, and whether there will eventually be a true global standard for AI regulation remains to be seen.
However, companies should not wait until there is a clear regulatory framework in place in their jurisdiction; there are key steps they can take now.
To discuss any of the points raised in this article, please contact Ann-Maree Blake or fill in the form below.
At the time of writing, 100 million people around the world have used ChatGPT, and more than 15 billion images have been created using text-to-image algorithms since last year.
Worryingly, 68% of employees have not informed their boss that they are using artificial intelligence-generated content (AIGC) when undertaking certain tasks such as writing emails and marketing/sales content, scheduling meetings, creating images, and analysing data.
The reason this lack of employer oversight is concerning is that the law surrounding AIGC is, to put it generously, unfit for purpose, especially regarding intellectual property (IP). This article, part one in a two-part series, will provide a snapshot of the latest information around the issue of whether AIGC can be protected under copyright law.
Copyright law is governed by the Copyright, Designs and Patents Act (CDPA) 1988. Copyright seeks to protect the form of creative ideas, not the ideas themselves (these can be protected via confidentiality). Copyright provides a vehicle for the authors of original work to protect their creativity and stop others from using it without permission for their own advantage.
The following categories of works are protected under UK copyright law: original literary, dramatic, musical, and artistic works (primary works), along with sound recordings, films, broadcasts, and the typographical arrangement of published editions (secondary works).
Both primary and secondary works are protected under the CDPA 1988, though primary works receive stronger protection because they require more significant amounts of creativity and originality.
In the case of literary, dramatic, musical, or artistic works, the author or creator of the work is usually the first owner of any associated copyright. The exception to this is if any of the aforementioned works are created by an employee in the course of their employment. In this case, the employer is the copyright owner unless there is an agreement to the contrary. Where two or more authors have created a work, they may have joint ownership of the copyright if their contributions are indivisible, or co-authorship where separate contributions can be identified.
Under the CDPA 1988, computer-generated works are defined as those “generated by computer in circumstances such that there is no human author of the work”. Therefore, the law suggests content generated by an artificial intelligence (AI) can be protected by copyright (more on this below).
Let us imagine that one of your employees logs onto ChatGPT and inputs the following:
“1000 words on why triple glazing is better than double glazing”
ChatGPT provides the employee with a 1000-word output. They lightly edit the piece, for example, by adding a call to action, and then publish it on the organisation’s website as a blog.
Who owns the copyright? There are five possibilities: (1) the employee, (2) the employer, (3) OpenAI, (4) ChatGPT itself, or (5) the owners of the original works used to train the model.
We can discount possibility one under the CDPA 1988 as the AIGC was made in the course of employment. Possibility four can also be dismissed because the CDPA 1988 does not recognise a non-human as the author or owner of a work; and given the Government’s response to the 2021 AI consultation, this stance is unlikely to change in the near future. Possibility three cannot apply because, under OpenAI’s terms and conditions, “Subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output.”
This leaves possibilities two and five. The latter is currently being fought out in various lawsuits on both sides of the Atlantic.
Therefore, we are left with possibility two – the employer. The next challenge is to establish whether the AI-created article can fulfil the CDPA 1988 requirements of originality, authorship, ownership, and duration of the copyright.
It is arguable that the current level of sophistication of AIGC does not allow for originality. Everything ‘created’ by AIGC is already in existence. The developers simply scraped pre-existing content from the internet (without permission, hence the lawsuits) and trained their models on the enormous streams of pre-existing data. The employee cannot be the true ‘author’ of the article (thereby allowing them to pass on ownership to their employer) because they did not create it. We have already established that ChatGPT cannot be the author/owner of the work, and OpenAI has assigned its rights to the person who inputs the request into ChatGPT. The issue of duration of the copyright also creates problems: in many cases, the length of the copyright protection is attached to the lifespan of the author. And as you may have guessed, machines cannot die.
The answer to the question of who owns the copyright in an AIGC work under current copyright law is… no one, because the current legislation does not cover AIGC. If the above paragraph seems confusing and contradictory, that is because it reflects the current state of the law.
At present, AIGC lacks protection under the provisions of the CDPA 1988. Interestingly, United States District Court Judge Beryl A. Howell recently ruled that AI-generated artwork cannot be copyrighted under current US law. In her decision, Judge Howell wrote that copyright has never been granted to work that was “absent any guiding human hand,” adding that “human authorship is a bedrock requirement of copyright.”
Although AIGC does not benefit from copyright protection under the current CDPA 1988, this does not mean that the law cannot be amended to change the status quo. The Act is already contradictory, given that “the legal concept of originality is defined with reference to human authors and characteristics like personality, judgment, and skill”, yet it nonetheless extends protection to computer-generated works.
By amending the Act to extend authorship to non-human authors, not only could end-consumers rely on some form of IP protection, but it would also encourage investment in AI technology because innovators would be able to rely on IP law to protect their creative efforts.
In part two of this series on AIGC and copyright we will examine the risks of copyright infringement, both when training AI models and using the outputs of AI tools.
To discuss any of the points raised in this article, please contact Marcus Rebuck or fill in the form below.