UK Technology Companies and Child Safety Agencies to Test AI's Ability to Generate Exploitation Content
Tech firms and child protection agencies will receive permission to evaluate whether AI systems can generate child exploitation images under new British laws.
Substantial Rise in AI-Generated Harmful Material
The announcement coincided with revelations from a safety monitoring body showing that cases of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, the authorities will allow designated AI companies and child safety organizations to examine AI models – the underlying systems for chatbots and visual AI tools – and ensure they have adequate protective measures to stop them from producing depictions of child exploitation.
"Fundamentally about preventing exploitation before it occurs," stated the minister, Kanishka Narayan, adding: "Experts, under strict protocols, can now detect the risk in AI models early."
Tackling Regulatory Obstacles
The amendments have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot generate such images as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.
The law aims to prevent that problem by helping to halt the creation of such material at its source.
Legal Structure
The changes are being introduced by the government as modifications to the crime and policing bill, which is also implementing a prohibition on possessing, producing or distributing AI models designed to generate exploitative content.
Real-World Impact
This week, the minister visited the London base of a children's helpline and heard a simulated call to counsellors involving a report of AI-based exploitation. The interaction portrayed an adolescent requesting help after facing extortion using an explicit AI-generated image of themselves.
"When I learn about children facing blackmail online, it fills me with intense frustration and causes justified concern amongst families," he said.
Concerning Statistics
A leading internet monitoring foundation reported that cases of AI-generated abuse content – such as webpages that may include numerous images – had significantly increased so far this year.
Cases of the most severe material – the gravest form of abuse – rose from 2,621 visual files to 3,086.
- Girls were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
- Depictions of infants to toddlers increased from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "represent a crucial step to ensure AI products are safe before they are launched," commented the chief executive of the online safety foundation.
"AI tools have made it possible for victims to be targeted all over again with just a few clicks, giving offenders the capability to make potentially limitless quantities of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies victims' suffering, and renders young people, especially girls, less safe both online and offline."
Support Interaction Data
The children's helpline also released details of counselling interactions where AI has been referenced. AI-related risks discussed in the sessions included:
- Using AI to evaluate body size and appearance
- AI assistants discouraging children from consulting trusted adults about abuse
- Being bullied online with AI-generated content
- Online extortion using AI-faked pictures
Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and related terms were discussed, four times as many as in the same period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.