The Internet Watch Foundation (IWF) has reported a deeply concerning rise in AI-generated images of child sexual abuse. These computer-created images – some so realistic they’re almost indistinguishable from photographs – are being shared online in growing numbers, fuelling a disturbing new frontier in abuse.
At Safeline, we believe all forms of child sexual abuse – whether real or digitally created – are harmful. AI should never be used to simulate the abuse of children.
The IWF's 2024 figures show:
245 reports of AI-generated CSA imagery that broke UK law.
7,644 illegal images and several videos shared online.
A 380% increase in reports compared to 2023.
39% of cases assessed as Category A – the most severe level, involving sadism or penetration.
The majority of victims depicted were girls.
This content is no longer hidden away. It’s appearing on open platforms, where people – including children – could stumble across it. Many of these images are so lifelike that even experts struggle to tell whether they depict real victims.
“This is not victimless. AI-generated abuse images reinforce harmful sexual fantasies and normalise exploitation. They may not show real children, but they promote real-world harm.”
In response to this growing threat, the UK government has confirmed that it will tighten legislation to criminalise:
The creation, possession, or sharing of AI tools used to generate CSA material.
Instruction manuals that teach people how to use AI to create abuse content.
These measures build on the Online Safety Act 2023, which holds tech platforms legally accountable for harmful content shared on their services. To support smaller platforms in meeting these responsibilities, the IWF has also launched Image Intercept – a free tool that blocks known child abuse imagery by matching it against a secure database of 2.8 million hashes (digital fingerprints) of criminal images.
This is a positive step – but more must be done. Tech moves fast. So must our safeguarding. The law is evolving, but survivors deserve protection that’s proactive, not reactive.
We call on all online platforms, AI developers, and tech companies to:
Build robust safety systems that prevent AI tools being misused to simulate abuse,
Proactively detect and remove illegal and harmful content,
Commit to trauma-informed policies that put survivor wellbeing first.
This is a critical moment to take a stand. AI should never be used to excuse, obscure, or escalate child sexual abuse.
Stay up to date with Safeline’s work by following us on Facebook, Instagram, and Twitter!