The UK copyright law position on AI-generated creative works is simultaneously more settled and more contested than the public debate suggests. The settled part is this: British copyright law has contained a provision for computer-generated works since the Copyright, Designs and Patents Act 1988. The contested part is everything that the provision means when applied to outputs generated by large language models, image synthesis systems, and music generation tools that the Act’s drafters could not have imagined.
The practical stakes are high. The UK creative industries contributed over £124 billion to the UK economy in 2022, according to the Department for Culture, Media and Sport’s Creative Industries Economic Estimates. Writers, visual artists, musicians, and filmmakers are confronting AI systems trained on their work without consent, capable of generating outputs that compete with their own commercial work. The legal framework that governs who owns those AI outputs, and whether training on copyrighted material without a licence is lawful in the UK, is the most consequential unresolved question in British intellectual property law in 2026.
What the CDPA 1988 Actually Says
Section 9(3) of the Copyright, Designs and Patents Act 1988 provides that the author of a computer-generated work, where there is no human author, is ‘the person by whom the arrangements necessary for the creation of the work are undertaken.’ Section 178 of the Act defines a computer-generated work as one ‘generated by computer in circumstances such that there is no human author of the work.’
This provision, rare among major copyright jurisdictions in its explicit statutory recognition of computer-generated authorship, gives copyright protection to the output of a computer program even when no human author is identifiable. The UK copyright law position therefore differs from the US position, where the Copyright Office has consistently refused to register works with no human authorship, and from the EU position, where copyright requires human intellectual creation.
The problem is that the 1988 provision was written for a world of algorithmic tools and expert systems, not for generative AI. When a designer uses Photoshop’s clone stamp tool, the person making the arrangements is clearly the human using the software. When a writer prompts GPT-4 with a brief and receives 3,000 words, the question of who makes ‘the arrangements necessary for the creation’ is genuinely ambiguous. The Intellectual Property Office acknowledged this ambiguity in its 2022 consultation on AI and intellectual property, but has not yet resolved it.
Who Owns an AI-Generated Work in Practice
In the absence of definitive case law or statutory clarification, the practical position for UK creators and businesses using generative AI tools is determined by a combination of the CDPA’s general framework and the terms of service of the AI platforms they use.
Where a human makes sufficient intellectual choices in the creation process (selecting sources, refining prompts, editing outputs, and making directorial decisions about the final work), the human contributor is likely to be the author, and the work is likely to attract conventional copyright protection. The more automated the generation process, and the less the human contributes intellectually to the specific expression of the output, the weaker the copyright claim.
The commercial AI platform’s terms of service are often the most practically significant document. OpenAI’s terms assign ownership of outputs to the user, subject to the user’s compliance with usage policies. Adobe’s Firefly generates images trained on licensed content and assigns commercial rights to users. Midjourney’s terms are structured differently for paid and free users, with different licensing conditions for commercial use. These contractual arrangements create ownership rights that exist independently of the copyright analysis, but they do not resolve what happens when an AI output closely resembles a specific human work included in the training data.
The Training Data Question
The most commercially significant question in UK AI copyright is not who owns AI outputs, but whether training AI on copyrighted material without the rightsholder’s consent is lawful. This is the question that underlies litigation in multiple jurisdictions, including the case brought by Getty Images against Stability AI in the UK High Court in 2023.
The UK position on this question has changed significantly since 2022. The previous government’s Intellectual Property Office proposed a broad text and data mining (TDM) exception in 2022 that would have permitted AI training on any lawfully accessed material without consent or payment. The proposal was withdrawn in 2023 following intense opposition from the creative industries, including the Publishers Association, the Authors’ Licensing and Collecting Society, and the British Phonographic Industry, all of whom argued that a broad TDM exception would effectively allow AI companies to build commercial products on the creative work of UK artists without compensation.
The current position is that the existing TDM exception in UK law, contained in section 29A of the CDPA as inserted by the Copyright and Rights in Performances (Research, Education, Libraries and Archives) Regulations 2014, permits TDM only for non-commercial research purposes. Commercial AI training on copyrighted material remains a legally contested area in the UK. The government’s AI Action Plan, published in January 2025, acknowledged the need to resolve this question and committed to further consultation, but had not legislated a solution as of the beginning of 2026.


What the Creative Industries Are Asking For
The creative industries’ position on AI and copyright has coalesced around three core asks, articulated through the Creators’ Rights Alliance, the Authors’ Licensing and Collecting Society, and the Copyright Licensing Agency.
The first ask is transparency: mandatory disclosure by AI developers of the copyrighted works used in training datasets. This would allow rightsholders to know whether their work has been used, and to quantify any commercial loss. The second ask is consent: an opt-in or opt-out mechanism that gives rightsholders control over whether their works can be used for AI training. The third ask is remuneration: where AI systems are trained on copyrighted works and commercially deployed, rightsholders should receive payment, either directly or through a collective licensing mechanism.
The AI industry’s counter-position is that existing copyright law already provides the necessary framework, that the TDM question should be resolved by preserving and potentially extending the non-commercial research exception, and that mandatory licensing would impose costs that would constrain AI development in the UK, pushing activity to jurisdictions with more permissive regimes.
Where the Legal Framework Stands in 2026
In 2026, UK copyright law’s application to AI-generated works remains in a period of active uncertainty. The Getty Images case is working its way through the UK courts and may produce significant guidance on the TDM question. The Intellectual Property Office’s consultation process is ongoing. The AI Action Plan’s commitment to resolve the training data question through legislation or licensing frameworks has not yet translated into a Bill.
For UK creators, the practical implications are clear: AI platforms that generate content similar to existing works may infringe copyright where the training data includes those works and where the output reproduces their expression in identifiable ways. The threshold for infringement is whether a substantial part of the original work has been copied, not mechanical reproduction; an AI image that captures the distinctive style of a named illustrator’s work without copying specific pixels may or may not infringe, depending on whether a substantial part of the original expression has been taken and whether style per se is protected (which it generally is not under UK law).
For UK businesses using AI generation tools commercially, the risk of downstream copyright liability depends on the AI platform’s terms and the nature of the output. Images generated by a platform that has licensed its training data (as Adobe’s Firefly claims to do) carry less risk than those from platforms with opaque training provenance.
Fun fact: The Copyright, Designs and Patents Act 1988 contains a provision specifically addressing computer-generated works that predates the World Wide Web. Section 9(3) of the Act, which identifies the author of a computer-generated work as “the person by whom the arrangements necessary for the creation of the work are undertaken”, was written when “computer-generated” meant statistical modelling and early rule-based expert systems, not the large language models and diffusion models that now create commercial content at scale.
What to Watch
The UK copyright law position on AI-generated works will be substantially clarified by two near-term developments: the outcome of the Getty Images case in the UK High Court, and the government’s legislative response to its consultation on AI and intellectual property. If the court finds in Getty’s favour on the TDM question, the pressure on the government to legislate a licensing framework will increase substantially, and the UK’s position on AI and creative industries will become a live political issue in a way that it has not yet been.
The cultural stakes are not merely economic. The question of whether UK creators can sustain livelihoods in a market where AI can replicate the surface characteristics of their work at near-zero marginal cost is a question about the future of British creative culture, not just about intellectual property law. That intersection of cultural and economic policy will be a defining debate for the rest of the decade.