UK Technology Firms and Child Safety Officials to Test AI's Ability to Generate Abuse Content
Technology companies and child safety agencies will be granted authority to assess whether AI systems can generate child abuse material under new British legislation.
Significant Rise in AI-Generated Illegal Material
The announcement came as a child protection watchdog revealed that cases of AI-generated child sexual abuse material have increased dramatically in the last twelve months, from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, the authorities will allow designated AI developers and child protection organizations to examine AI models – the technology underlying conversational AI and visual AI tools – and verify that they have adequate protective measures in place to prevent them from producing depictions of child sexual abuse.
"This is ultimately about stopping abuse before it occurs," declared Kanishka Narayan, noting: "Experts, under strict protocols, can now identify the risk in AI systems early."
Addressing Legal Challenges
The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others cannot create such images as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.
This legislation is aimed at averting that problem by helping to stop the production of those materials at source.
Legislative Structure
The amendments are being introduced by the authorities as modifications to the criminal justice legislation, which also establishes a ban on possessing, producing or distributing AI models developed to create exploitative content.
Real-World Consequences
Recently, the official toured the London base of Childline and heard a simulated call to advisors involving a report of AI-based exploitation. The interaction depicted an adolescent requesting help after facing extortion using a sexualised AI-generated image of themselves.
"When I learn about young people experiencing extortion online, it fills me with extreme anger, and it is a source of justified anger amongst parents," he said.
Concerning Data
A prominent internet monitoring organization stated that instances of AI-generated abuse material – such as webpages that may include multiple files – had more than doubled so far this year.
Cases involving the most severe material – the most serious form of exploitation – rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
- Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a vital step to ensure AI products are secure before they are released," stated the chief executive of the online safety organization.
"AI tools have made it so victims can be victimised repeatedly with just a few clicks, giving criminals the ability to make possibly limitless amounts of sophisticated, lifelike exploitative content," she added. "Content which further exploits survivors' trauma, and renders children, especially female children, more vulnerable online and offline."
Support Session Data
Childline also released details of counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:
- Employing AI to evaluate weight, physique and appearance
- Chatbots discouraging children from consulting safe adults about harm
- Being bullied online with AI-generated content
- Online blackmail using AI-manipulated images
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.