British Technology Companies and Child Safety Agencies to Test AI's Ability to Create Abuse Content
Tech firms and child protection agencies will be granted permission to assess whether artificial intelligence tools can produce child abuse images under new UK legislation.
Substantial Increase in AI-Generated Harmful Content
The declaration coincided with findings from a safety monitoring body showing that reports of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, the government will allow designated AI companies and child safety groups to inspect AI models – the underlying technology for conversational AI and visual AI tools – and verify they have sufficient protective measures to prevent them from producing depictions of child sexual abuse.
The measure is "ultimately about stopping exploitation before it occurs," declared the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now identify the danger in AI systems early."
Addressing Legal Obstacles
The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties could not generate such content as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was published online before dealing with it.
This legislation is designed to avert that problem by helping to stop the creation of those images at their origin.
Legislative Framework
The amendments are being introduced by the government as revisions to the criminal justice legislation, which is also implementing a prohibition on possessing, producing or distributing AI systems designed to generate exploitative content.
Real-World Consequences
Recently, the minister toured the London base of a children's helpline and heard a mock-up call to advisors featuring a report of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about young people experiencing extortion online, it is a source of intense frustration to me and of rightful concern amongst parents," he stated.
Concerning Data
A leading internet monitoring organization reported that cases of AI-generated exploitation material – such as webpages that may include numerous images – had significantly increased so far this year.
- Instances of category A material – the most serious form of abuse – rose from 2,621 images or videos to 3,086
- Girls were overwhelmingly targeted, making up 94% of prohibited AI images in 2025
- Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025
Sector Reaction
The law change could "represent a crucial step to guarantee AI tools are safe before they are released," stated the chief executive of the internet monitoring foundation.
"AI tools have made it so that victims can be victimised repeatedly with just a few clicks, giving offenders the capability to create potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which further exploits victims' trauma, and renders children, especially girls, less safe both online and offline."
Counseling Interaction Information
Childline also published details of support sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Using AI to evaluate weight, body and looks
- AI assistants dissuading young people from consulting trusted guardians about abuse
- Being bullied online with AI-generated material
- Online blackmail using AI-manipulated images
Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and related topics were mentioned, significantly more than in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy applications.