
UK: Legal issues with deepfakes

31 July 2023

This article was first published by OneTrust DataGuidance.

Deepfakes are fake videos or images. There is a blurred line between old photoshopped images and deepfakes; if there is a distinction, it may be that deepfakes are created using artificial intelligence (AI). The term is often used to describe images where the likeness of one person is imposed on a film or photograph of another person. It is also used more generally to describe images of real people created by AI. Finally, some images of totally fake people are termed 'deepfakes.'

In this Insight article, Giles Parsons, Loren Hogetts, and Richard Nicholas, from Browne Jacobson LLP, lay out some of the current key issues as of April 2023.


Copyright

To create a deepfake, one must start with an image of a person. There will be copyright in that image, which will most likely belong to the person who took it. In this regard, there is ongoing litigation about the materials AI has been trained on, and the authors think that training an AI on a copyright work will usually amount to copyright infringement unless there is either a licence in place or a statutory defence. There is no statutory defence yet in the UK, so training AI on someone else's photographs or video without a licence is likely to be copyright infringement. Of course, the subject of a photograph or film will often be different from the copyright owner.

Personal data

A photograph of someone contains personal data: a photograph of a person is information relating to an identified or identifiable natural person. A photograph of someone can obviously be used to identify them. And so a deepfake of someone which identifies them contains that person's personal data.

Under the General Data Protection Regulation (GDPR) (and the UK General Data Protection Regulation (UK GDPR)), using personal data to create a deepfake is likely to constitute processing that the individual has not consented to. Consent is not always necessary if one can show a 'legitimate interest'; however, a legitimate interest requires consideration of the 'rights and freedoms' of the individual concerned.

Could there ever be a situation where a legitimate interest would trump a person's rights and freedoms in the case of a 'faked' photograph? And would a person's right to object carry more weight than for the original photograph (of which the individual might at least have been aware, and about whose context they might have made assumptions)? The journalism exemption in the Data Protection Act 2018 may give some scope for use where there is a public interest; however, the Information Commissioner's Office's (ICO) guidance on the exemption focuses on 'accuracy' and 'fairness', both of which may be tested by deepfakes.


Defamation

If pictures tell a thousand words, deepfakes say things which are not true.

A statement is defamatory if it tends to lower the claimant in the estimation of right-thinking people generally, and if the imputation would have a substantially adverse effect on the way that people would treat the claimant. Some deepfakes will meet that threshold.

False endorsement

The common law tort of passing off has adapted in the last 30 years to encompass false endorsement.

Private information

Misuse of private information is a tort. In the recent case of FGX v Gaunt, the claimant sued for infringement of privacy and misuse of private information after the defendant had taken covert recordings of the claimant whilst naked and uploaded them to the internet. Judgment was entered as no defence was filed, and at a subsequent hearing damages were assessed at £97,000. Misuse of private photographs or videos to create a deepfake could similarly give rise to a claim. The Copyright, Designs and Patents Act 1988 also contains, at Section 85, a moral right in private photographs: if someone commissions a photograph for private and domestic purposes, they have the right not to have copies of the work issued to the public. This right could also be engaged by deepfakes.

Online Safety Bill

At the time of writing, the Online Safety Bill is before Parliament. The latest version of the Bill includes proposed amendments to the Sexual Offences Act 2003 to criminalise pornographic deepfakes. It also includes a 'false communications offence', which applies where a person sends a communication they know to be false, intending to cause non-trivial psychological or physical harm.

Draft AI Act

The latest draft of the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (the draft AI Act) states that, 'Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ("deep fake"), shall disclose that the content has been artificially generated or manipulated.'

At the moment, creating a deepfake is very easy; many of the images created on Midjourney, for example, use existing images as part of the prompt. The authors do not know how effective this provision will be: users outside the EU would not be subject to it, but any images they created would still be visible within the EU, for example on social media. And if a watermark was, for example, only applied in a corner of an image, it could be cropped out.
