Five years ago, a woman known only as Jodie opened an anonymous email directing her to a website where sexually explicit images of her, images she never posed for, images fabricated by AI, had been published alongside her personal details. As LBC first reported, the perpetrator turned out to be her closest friend. When she went to the police, they told her no crime had been committed.

That legal void is now closing. On 18 February, Prime Minister Keir Starmer announced the government would amend the Crime and Policing Bill to require tech platforms to take down intimate images shared without consent, including AI-generated deepfakes, within 48 hours of a report. According to LBC, companies that fail to comply face hefty fines or have their services blocked in the UK.

The timing is no coincidence. Britain has spent early 2026 convulsed by a deepfake crisis that made the threat of AI abuse viscerally real. In late December, Elon Musk's Grok chatbot, embedded within the social media platform X, began fulfilling user requests to digitally undress women and girls.

A report by the Centre for Countering Digital Hate found that Grok produced an estimated three million sexualised images in barely eleven days. Around two per cent of those analysed appeared to depict minors.

Malaysia and Indonesia temporarily blocked Grok. The European Commission opened an investigation under the Digital Services Act, and Ofcom launched its own inquiry. As Al Jazeera reported, Starmer labelled the images 'disgusting' and 'unlawful,' telling X to 'get a grip.'

What makes this legislative push striking is its reach beyond takedown speed. The Department for Science, Innovation and Technology confirmed that victims would need to report an image only once for it to be removed across multiple platforms, with automatic deletion if anyone tries to repost it. Ofcom is also considering classifying such images alongside child sexual abuse material and terrorism content, a designation that would mandate digital fingerprinting and proactive blocking.

Not everyone is satisfied that the clock is ticking fast enough. Speaking to The Register, Hanna Basha, the lawyer who represented television personality Georgia Harrison in her civil revenge pornography case, welcomed the measure but questioned its urgency. 'Why 48 hours and not 24 or even 12?' she asked. 'Every hour these images remain online compounds the harm.' She also raised a more basic problem: many victims cannot even find where to report abusive content.

The amendment sits within a rapidly thickening layer of legislation. On 6 February, as Olliers Solicitors detailed, a separate offence under the Data (Use and Access) Act 2025 came into force, criminalising the creation of non-consensual intimate deepfakes, not merely their distribution, and carrying a potentially unlimited fine. The Bill will also outlaw nudification tools: apps designed to strip clothing from images using AI.

Source: International Business Times UK