On 4 November 2025, the High Court handed down its much-awaited judgment in Getty Images (US) Inc & Ors v Stability AI Limited [2025] EWHC 2863 (Ch), largely dismissing Getty Images’ claims for IP infringement while making some limited findings of trade mark infringement.
Analysis of the judge’s reasoning makes clear the need for a modern legislative structure in the UK that supports both AI development and fair compensation and control for creators. The current regime provides adequate support for neither. While this is hardly surprising, given its origins long predating any realistic prospect of usable artificial intelligence, Joanna Smith J’s judgment in Getty Images v Stability AI makes plain the need for an update that balances all the interests in play.
What was addressed in the judgment?
The judgment ended up not addressing as much as had been contemplated right up until trial. This is not unusual in an intellectual property dispute, as facts emerge and parties re-evaluate the strength of aspects of their pleaded case. But it means that the legal points determined were less comprehensive than they might have been in a dispute testing the interface between copyright law and the training and operation of AI models.
Stability’s case was always that the training of its deep learning AI model (known as ‘Stable Diffusion’ or ‘the model’) took place outside the UK. Getty Images eventually acknowledged that there was no evidence to the contrary and abandoned its claim for copyright infringement addressing the training and development process. The High Court’s judgment therefore did not address whether, in the course of the training and development of Stable Diffusion, (non-UK) copyright in Getty Images’ material had been infringed.
It was, however, common ground that for training purposes it would have been necessary to download and store the images concerned.
The copyright issue that remained to be determined was whether, by making Stable Diffusion available to users, Stability AI was importing or dealing with an ‘infringing copy’ of a Getty Images copyright work, thereby attracting liability for secondary infringement of copyright pursuant to the Copyright, Designs and Patents Act 1988 (CDPA) section 22 or 23.
Getty Images also pursued claims of passing off and trade mark infringement (the full suite): double identity infringement pursuant to the Trade Marks Act 1994 (TMA) section 10(1); likelihood of confusion infringement pursuant to TMA s.10(2); and infringement by use of a sign harming a mark with a reputation pursuant to TMA s.10(3). The passing off and trade mark infringement claims were brought on the basis of the reproduction (or approximate reproduction), in some images generated by Stable Diffusion, of Getty Images’ watermark (the Getty Images watermark is overlaid on images on Getty Images’ websites until the image has been properly licensed, when a version of the image without the watermark is made available to the user).
Unusually for a trade mark dispute, the case involved extensive expert evidence and even experiments on the training and operation of Stable Diffusion, which informed the judge’s analysis of how well-established principles of trade mark law should be applied on the facts of the case. Wider legal points in issue included whether Stability AI was the legal entity responsible for some acts complained of and (obiter) issues of copyright subsistence, ownership and licensing, and whether additional damages were merited.
In this commentary, we focus on the judge’s determination of the ‘infringing copy’ issue before outlining at a high level the conclusions on the trade mark infringement claims. This is the reasoning from which the inadequacy of the present copyright regime in an AI-era is apparent.
Was Stability AI’s Stable Diffusion an ‘infringing copy’?
The copyright legislation groups different ways of infringing copyright into ‘primary’ and ‘secondary’ types of infringement. Primary types of infringement broadly speaking involve acts of copying within the jurisdiction. The judgment addressed no claim for primary infringement of copyright.
Secondary types of infringement involve importation of (CDPA s.22) or possession of or dealings with (CDPA s.23) an ‘infringing copy’. These were the provisions that Getty Images alleged were infringed. Both s.22 and s.23 set out the acts which infringe in respect of “an article which is, and which [the defendant] knows or has reason to believe is, an infringing copy of the work”. CDPA s.27 sets out the meaning of ‘infringing copy’. It states:
- “(2) An article is an infringing copy if its making constituted an infringement of the copyright work in question.
- (3) An article is also an infringing copy if:
- (a) it has been or is proposed to be imported into the United Kingdom, and
- (b) its making in the United Kingdom would have constituted an infringement of the copyright in the work in question, or a breach of an exclusive licence agreement relating to that work.”
The meaning of the statutory provisions, specifically whether they applied to AI model weights, had not needed to be considered by a court in the UK before. Stability AI argued that Stable Diffusion was neither an ‘article’ nor an ‘infringing copy’ for these purposes, whereas Getty Images argued that since the development of the model involved unauthorised copying, the resulting model was an ‘infringing copy’ by virtue of s.27(2) or (3).
Joanna Smith J began by noting the general approach to statutory construction, as summarised in Al-Thani v Al-Thani [2025] UKPC 35, drawing in particular on R (Quintavalle) v Secretary of State for Health [2003] UKHL 13. Statutory interpretation involves an objective assessment of the meaning which a reasonable legislature as a body would be seeking to convey in using the words being considered. Words and passages in a statute derive their meaning from their context. A phrase or passage must be read in the context of the section as a whole and in the wider context of a relevant group of sections. Other provisions in a statute and the statute as a whole may provide the relevant context. The statute as a whole should be read in the historical context of the situation which led to the statute’s enactment. The court’s task, within the permissible bounds of interpretation, is to give effect to Parliament’s purpose.
Getty Images drew attention to the further well-established principle, explained in News Corp v HMRC [2023] UKSC 7, that, in general, a provision is ‘always speaking’. This means that a statute should be interpreted taking into account changes that have occurred since it was enacted, which may include, for example, technological developments, changes in scientific understanding, changes in social attitudes and changes in the law.
Applying these principles, the judge concluded that an electronic copy stored in an intangible medium (such as in a cloud) is capable of being ‘an article’. The words ‘an article’ do not require a tangible form.
However, to be an ‘infringing copy’ the article must, at least at some point, contain or store an infringing copy of the copyright work. It is not enough that copying occurred in relation to or coinciding with the making of the article complained of.
Understanding the meaning of the statutory provisions in this way, Stability AI’s Stable Diffusion, while capable of being an ‘article’, was not an ‘infringing copy’. This was because, while the model weights making up the AI were altered during training by exposure to copyright works, the model weights were not themselves an infringing copy of Getty Images’ copyright works and did not store (and never had stored) an infringing copy; they were purely the product of the patterns and features which they had learnt over time during the training process.
This meant the importation of Stability AI’s Stable Diffusion into the UK, for example through download, was not an act of secondary infringement of copyright.
Did the trade mark claims make up for Getty’s lack of success in its copyright claim?
The judge explained that it was necessary to differentiate between outputs produced by different versions of the Stable Diffusion model because the different versions had not all been trained on the same dataset and different filters had been applied to the training data. It was common ground that Getty Images needed, as a threshold for its infringement claims, to establish, on the balance of probabilities, that each version of the model had generated at least one output with a watermark containing a Getty mark and at least one output with a watermark containing an iStock mark, in each case for at least one UK-based user (iStock was a business acquired by, and a brand within, the Getty Images business).
The expert evidence in the case explained that the likelihood of a watermark appearing depended on at least the frequency with which the watermark appeared in the training data and also the user-specified prompt. In order for a watermark to be produced, it was likely that the model needed to be trained on a diverse set of images/captions each containing a watermark. Certain prompts would generate a watermark with high frequency, while other prompts were unlikely to generate a watermark.
Getty Images’ evidence showed that the Stable Diffusion models could be manipulated to produce watermarks using prompts taken verbatim from Getty Images’ metadata text, but there was no evidence that this had happened in real life. Getty Images’ evidence did not attempt to establish the likelihood of a watermark appearing in response to any given prompt, or of the real-world use of prompts.
In view of the evidence before the court, the judge concluded that the threshold had been met in respect of the Getty Images watermark for Stable Diffusion versions 1.x, 1.2, 1.3, 1.4, 2.0 and 2.1, and in respect of the iStock watermark for versions 1.x, 1.2, 1.3 and 1.4. The threshold was not met for models SD XL and v1.6 for either watermark: the evidence indicated that the issue had been resolved in these versions of the model by the application of appropriate filters.
Many questions of trade mark law are assessed from the perspective of the ‘average consumer’. In this case the parties agreed that there were at least three categories of average consumer, two of which had a relatively high degree of technical competence and understanding. The first would download the codebase and model weights from the GitHub and Hugging Face portals respectively and run the inference offline. The second would run the inference on Stability’s computing infrastructure using an API found online at platform.stability.ai (the Developer Platform). The third would be less technically competent and would use the Stability DreamStudio service to run the inference on Stability’s computing infrastructure through a normal browser and a user account.
After explaining the context in which the average consumer would be operating the Stable Diffusion models, the judge explained and applied the principles governing the assessment of trade mark infringement pursuant to TMA s.10(1), s.10(2) and s.10(3).
All of these statutory torts require use of the sign complained of in the course of trade in relation to goods and services. The judge agreed with Getty that the facts in this case involved active behaviour and control on the part of Stability. This was because Stability was the entity that trained the Stable Diffusion model, it was the entity that could have filtered out the watermarked images (the ‘signs’) in order to ensure that the model did not produce outputs bearing watermarks, it made the model available to consumers, and it was the entity making the communication that bore the relevant signs. While users had some degree of control, they did not have complete control over the generation of the signs: it was Stability that was responsible for the model weights and the model weights controlled the functionality of the network; this went beyond creating the technical conditions necessary for use of the sign. Nor would the average consumer regard the output of the model as solely their responsibility. Further, a significant proportion of average consumers would perceive that the production of the Getty Images watermark was in some way connected to Getty Images, perhaps because the Stable Diffusion model had been trained on images licensed for use by Getty Images. The ‘use of a sign in the course of trade in relation to goods and services’ component of each type of infringement was therefore established.
Double identity (s.10(1)) infringement was established in respect of the iStock watermarks generated by users of v1.x (in so far as the models were accessed via DreamStudio and/or the Developer Platform). This finding was based specifically on the example watermarks shown on the ‘Dreaming’ image and the ‘Spaceships’ image, the latter having been generated by Stable Diffusion model v1.2. Double identity infringement was not established in respect of the Getty Images watermarks, for example, because many of the generated signs were such distorted representations of the Getty marks that the identity aspect of the s.10(1) test was not met.
Likelihood of confusion (s.10(2)) infringement was established in respect of the iStock watermark generated by users of v1.x (in so far as the models were accessed via DreamStudio and/or the Developer Platform). This finding was based specifically on the example watermarks shown on the ‘Dreaming’ and ‘Spaceships’ images, the latter having been generated by model v2.1. Likelihood of confusion infringement was also established in respect of Getty Images watermarks generated by users of v2.x; this finding was based specifically on the example watermark on the ‘First Japanese Temple Garden Image’ generated on model v2.1.
The examples for which likelihood of confusion infringement was found involved a very high degree of similarity between the relevant registered trade mark and the watermark sign generated, identity or a high degree of similarity between the goods and services of the registration and use, and an assumption on the part of the average consumer that a generated image bearing a watermark had been supplied by Getty Images, that Stable Diffusion had been trained on Getty Images content under licence, or that there was some other economic link. The judge explained that the analysis was highly fact sensitive. It was impossible to conclude that, for every watermark generated by the same version of Stable Diffusion, a similar analysis would apply such that infringement would automatically follow.
Infringement by use of a sign harming a mark with a reputation pursuant to TMA s.10(3) was not established (for either watermark on any version). The evidence failed to meet the burden of showing that one of the three types of injury required by the provision (detriment to distinctive character, detriment to repute or unfair advantage) had occurred or was seriously likely.
The findings of trade mark infringement were therefore specific and limited. On the evidence in the case, it was impossible to know how many (or on what scale) watermarks would be generated in real life that would fall into a similar category. This was despite the experts in the case agreeing that memorisation of a watermark likely requires multiple exposures to the same watermark during training (regardless of the underlying image). Further, Stability AI had no direct tortious liability for acts arising by reason of the release of v1.x via the GitHub and Hugging Face pages; and no infringement was found in relation to the later SD XL and v1.6 models.
So where does the reasoning in Getty Images v Stability AI leave AI developers and creative industry stakeholders?
The judge’s reasoning demonstrates that trade mark law is not suited to plugging the gap in the copyright legislation that exists in relation to AI.
The current legislation on its face provides copyright owners with a legal mechanism to prohibit or be compensated for use of their works in training AI where the training takes place in the UK. However, where training takes place outside the UK, Joanna Smith J’s reasoning suggests that there is little that copyright owners can do in the UK to prevent the importation into or use within the jurisdiction of AI trained outside the jurisdiction on copyright protected material without the consent of the copyright owner, or to be compensated for the use of the AI within the UK.
It is possible that the Court of Appeal would take a different view to the judge on any appeal. There are also a couple of caveats to the points made in the paragraph immediately above. First, if someone used AI to produce a work in the UK that substantively copied a work on which the AI was trained, a copyright infringement claim could be brought. There might be complexities, for example in establishing what the AI was trained on, but the legislation would enable such a claim. Second, the Getty Images judgment did not consider a case based on CDPA section 296ZA, which provides redress where effective technological measures have been applied to a copyright work and a person does something which circumvents them. If a content creator’s work is password protected and governed by conditions of use, scraping of the content in training and reproduction of it in use might provide a route to liability that could be explored in respect of AI trained outside the jurisdiction and imported into it. The point was not run by Getty Images in its claim against Stability AI, though, and the option is not straightforward.
The reality is that a case brought under trade mark law, based on memorisation and generation of marks on which the AI was trained, will need complex technical evidence for infringement to be established on any meaningful scale and it may be challenging to obtain meaningful compensation. Trade mark law is not an adequate mechanism for addressing the concerns of copyright owners, but it was never designed to be.
The current UK legislation (at least on the judge’s construction of it) therefore achieves the worst of two worlds:
- It tells AI developers that it is a safer path for them to train their AI outside the jurisdiction than within it. This does not encourage investment in the AI sector in the UK.
- UK copyright owners have no recourse through the courts against AI developers in respect of the use of their works in the training of AI which ultimately becomes used in the UK (unless the AI itself contains or once contained a copy of the work).
What next?
Change is needed, in order to support both AI development within the UK and fair compensation and control for creative sector rights holders.
The Government’s AI/copyright consultation, launched in December 2024, sought to strike a balance: ideally enabling rights holders to reserve their rights and ringfence material from AI training unless licensed; excepting from copyright infringement the data mining of unreserved, lawfully accessible works; requiring greater transparency from AI developers about the material used to train models; enabling compensation for rights holders; and establishing a legally level playing field for AI used within the UK wherever it was trained. The aims were bold and would need imaginative technical solutions to become workable.
The proposal was met with loud concern and draft legislation has not emerged. The High Court’s judgment in Getty Images v Stability AI [2025] EWHC 2863 (Ch) should now be a wake-up call on the need for collaboration, compromise, and facilitation of a mechanism enabling all interested parties to thrive.
Our experienced and award-winning IP and trade mark lawyers are here to help. To discuss this topic further, please get in touch.
Ailsa Carter
Professional Development Lawyer
ailsa.carter@brownejacobson.com
+44 (0)330 045 1451