British Technology Companies and Child Protection Agencies to Examine AI's Ability to Generate Abuse Content
Technology companies and child safety organizations will be granted authority to evaluate whether AI systems can produce child exploitation material under new British legislation.
Significant Increase in AI-Generated Illegal Content
The announcement follows revelations from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, the authorities will allow approved AI companies and child safety organizations to inspect AI models – the foundational systems behind conversational AI and visual AI tools – to ensure they have sufficient safeguards to prevent them from creating images of child sexual abuse.
The measures are "ultimately about stopping abuse before it occurs," declared Kanishka Narayan, who added: "Experts, under rigorous protocols, can now detect the danger in AI systems early."
Addressing Regulatory Challenges
The amendments address a legal obstacle: because creating and possessing CSAM is illegal, AI developers and other parties could not generate such content even as part of a testing regime. Previously, officials had to wait until AI-generated CSAM appeared online before acting on it.
The legislation aims to avert that problem by halting the production of such material at its source.
Legal Framework
The authorities are introducing the changes as amendments to criminal justice legislation, which also establishes a ban on possessing, producing or distributing AI systems designed to create exploitative content.
Real-World Impact
This week, the minister toured the London base of Childline and listened to a simulated call to advisors featuring a report of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about young people facing extortion online, it is a source of intense frustration in me and justified anger amongst families," he stated.
Alarming Statistics
A prominent online safety organization reported that instances of AI-generated abuse material – such as web pages that may each contain multiple files – have risen significantly so far this year.
Instances of the most severe category of material – the most serious form of exploitation – increased from 2,621 visual files to 3,086.
- Girls were overwhelmingly victimized, accounting for 94% of illegal AI images in 2025
- Depictions of infants and toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The law change could "constitute a crucial step to ensure AI tools are safe before they are launched," commented the head of the internet monitoring foundation.
"AI tools have made it so that victims can be targeted all over again with just a few clicks, giving criminals the capability to create potentially limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which further commodifies survivors' suffering, and makes young people, especially girls, more vulnerable both online and offline."
Counseling Interaction Information
Childline also released details of counselling sessions in which AI was mentioned. AI-related harms raised in the conversations include:
- Using AI to rate body size, physique and appearance
- AI assistants dissuading children from talking to trusted adults about abuse
- Being bullied online with AI-generated content
- Digital blackmail using AI-faked pictures
Between April and September this year, Childline delivered 367 support sessions in which AI, conversational AI and related topics were mentioned – four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.