British Technology Firms and Child Protection Officials to Examine AI's Capability to Generate Abuse Images
Technology companies and child protection organizations will be granted authority to evaluate whether AI tools can produce child abuse material under recently introduced British laws.
Substantial Rise in AI-Generated Harmful Material
The announcement came as a safety watchdog reported that cases of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the changes, authorities will permit designated AI companies and child protection organizations to examine AI models – the foundational systems behind conversational AI and visual AI tools – and verify that they have sufficient safeguards to prevent them from producing images of child exploitation.
"Ultimately about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now identify the danger in AI models promptly."
Tackling Legal Obstacles
The changes have been made because producing and possessing CSAM is illegal, which meant AI developers and others could not generate such content as part of an evaluation regime. Previously, authorities could act only after AI-generated CSAM had been published online.

The new law is designed to prevent that problem by making it possible to halt the production of such material at its source.
Legislative Framework
The government is introducing the amendments to the criminal justice legislation, which also establishes a ban on owning, creating or sharing AI models developed to generate child sexual abuse material.
Practical Impact
This week, the official visited the London headquarters of a children's helpline and heard a simulated call to counsellors involving an account of AI-based abuse. The interaction portrayed an adolescent seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I learn about young people experiencing extortion online, it is a source of intense frustration in me and justified concern amongst parents," he said.
Concerning Statistics
A prominent online safety organization reported that cases of AI-generated exploitation material – each case a webpage that may contain multiple images – had more than doubled so far this year.
Instances of the most severe content – the gravest category of exploitation imagery – rose from 2,621 image files to 3,086.
- Girls were predominantly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of infants and toddlers increased from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "constitute a vital step to guarantee AI products are safe before they are launched," commented the chief executive of the internet monitoring organization.
"Artificial intelligence systems have enabled so victims can be victimised all over again with just a simple actions, giving criminals the ability to create potentially limitless amounts of advanced, lifelike child sexual abuse material," she added. "Content which additionally exploits victims' trauma, and makes young people, especially girls, less safe on and off line."
Support Interaction Information
Childline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions included:
- Using AI to evaluate body size, physique and appearance
- Chatbots dissuading young people from talking to trusted guardians about abuse
- Facing harassment online with AI-generated material
- Online extortion using AI-faked images
Between April and September this year, the helpline conducted 367 counselling sessions in which AI, conversational AI and associated terms were mentioned, four times as many as in the same period last year.
Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.