British Tech Firms and Child Safety Agencies to Examine AI's Capability to Create Exploitation Images
Tech firms and child protection organizations will receive authority to assess whether AI systems can produce child abuse images under new UK laws.
Significant Increase in AI-Generated Harmful Material
The declaration coincided with findings from a safety monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, designated AI companies and child safety organizations will be authorized to inspect AI models – the foundational systems behind chatbots and image-generation tools – and check that they have sufficient safeguards to prevent them from producing depictions of child sexual abuse.
The change is "ultimately about stopping exploitation before it occurs," declared Kanishka Narayan, adding: "Experts, under strict protocols, can now identify the risk in AI models early."
Addressing Legal Challenges
The amendments have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
This legislation is designed to avert that problem by enabling authorized parties to halt the production of such images at their origin.
Legal Structure
The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also establishing a ban on possessing, producing or distributing AI models developed to create child sexual abuse material.
Practical Consequences
This week, the minister visited the London headquarters of a children's helpline and listened to a simulated call to advisers involving an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about children facing blackmail online, it causes extreme anger in me and rightful concern amongst parents," he said.
Concerning Statistics
A leading internet monitoring organization reported that cases of AI-generated abuse content – such as webpages that may include numerous images – had more than doubled so far this year.
- Instances of category A content – the most serious form of abuse – increased from 2,621 visual files to 3,086
- Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of infants to toddlers increased from five in 2024 to 92 in 2025
Industry Response
The law change could "constitute a vital step to guarantee AI products are secure before they are released," commented the head of the online safety foundation.
"Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few simple actions, giving criminals the capability to create possibly endless quantities of sophisticated, photorealistic exploitative content," she added. "Material which further commodifies survivors' suffering, and makes young people, especially female children, more vulnerable both online and offline."
Support Session Data
The children's helpline also published data from counselling sessions in which AI was mentioned. AI-related risks raised in those conversations include:
- Using AI to rate body size and appearance
- Chatbots discouraging young people from consulting trusted adults about harm
- Being bullied online with AI-generated content
- Online extortion using AI-manipulated images
Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and related terms were mentioned, significantly more than in the same period last year.
Half of the AI references in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.