
Deepfakes and the Law: Why Britain needs stronger protections against technology-facilitated abuse

As the UK moves to outlaw sexually explicit deepfakes, Professor Julia Hörnle examines the urgent need for stronger legal protections.

Image: A woman looking into a screen with dots mapping her face.

As Britain prepares to criminalise the creation of sexually explicit deepfakes, the government’s announcement marks an important step forward in addressing online violence against women and girls (VAWG). However, this measure alone will not fully protect victims from the broader harm posed by technology-facilitated abuse. A stronger, more comprehensive legal framework is urgently needed to address this rising threat.

On 7 January, the government unveiled plans to outlaw the non-consensual creation of sexually explicit deepfakes in its forthcoming Crime and Policing Bill. The legislation will also criminalise the non-consensual taking of intimate images and the installation of equipment to capture them, expanding on existing offences such as voyeurism and upskirting. These measures reflect the government’s commitment to implementing the Law Commission’s recommendations and tackling intimate image-based abuse. But as online sexual abuse grows more prevalent, so too must the ambition of our laws.

Technology as an Enabler of Misogyny

The digital age has amplified misogynistic behaviour. Social media platforms and their algorithms have been shown to promote extreme misogynistic content, often targeting impressionable young men. This process, sometimes described as “misogynist radicalisation”, parallels the methods used to recruit individuals into terrorism. Artificial intelligence tools that facilitate the creation of sexually explicit deepfakes are another weapon in this arsenal of technology-facilitated abuse, enabling new forms of harm against women and girls.

Statistics highlight the scale of the problem. The National Police Chiefs’ Council reported in 2024 that VAWG-related crimes had risen by 37% between 2018 and 2023, with one in 12 women affected. Technology was used in 40% of these offences. Child sexual exploitation and abuse (CSEA) cases have also surged, with more than half of such offences committed by underage boys against underage victims. Deepfake tools exacerbate these issues, allowing perpetrators to turn everyday images into sexually explicit material without consent, thereby invading victims’ private lives and causing immense psychological harm.

The Case for Criminalising Deepfake Creation and Tools

While the Online Safety Act 2023 introduced offences for sharing, or threatening to share, intimate images (including deepfakes), it does not currently criminalise the creation of such images without consent. This gap leaves victims exposed to harm even if the content is never shared publicly. The mere existence of sexually explicit deepfakes can damage a person’s mental health, professional reputation, and ability to participate in online spaces.

Moreover, it is crucial to address the availability of AI tools used to create deepfakes. Most perpetrators lack the technical expertise to create such content without these readily accessible technologies. Criminalising the development, distribution, and promotion of these tools would significantly curtail the ability to produce deepfakes and prevent harm at its source.

Expanding Protections for Non-Consensual Intimate Images

Current laws on the non-consensual taking of intimate images are limited to narrowly defined offences such as voyeurism and upskirting, and they do not address the broader issue of possessing such images. For victims, the permanent deletion of intimate images is critical to ending the ongoing violation of their privacy. A possession offence would empower victims by ensuring that offenders cannot retain the images, and with them a continuing hold over those they have targeted.

A Call to Action

Deepfakes are not just a disturbing by-product of technological advancement; they are a malign expression of extreme misogyny and part of a continuum leading to real-world violence. Britain’s new laws represent progress, but they must go further. Criminalising the creation and possession of sexually explicit deepfakes and the tools used to produce them would provide much-needed protections for victims. Without such measures, we risk failing those most vulnerable to the harms of technology-facilitated abuse.

Policymakers must act decisively to close these gaps and ensure the law keeps pace with the evolving challenges of the digital age. Only then can we effectively combat the epidemic of violence against women and girls and create a safer, more equitable society.
