UK Technology Companies and Child Safety Agencies to Examine AI's Capability to Generate Exploitation Content
Tech firms and child safety organizations will be granted authority to assess whether artificial intelligence tools can produce child exploitation images under new British laws.
Substantial Rise in AI-Generated Illegal Material
The announcement came alongside revelations from a child protection monitoring body showing that reports of AI-generated child sexual abuse material have risen dramatically in the past twelve months, growing from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the government will permit designated AI developers and child protection groups to inspect AI systems – the underlying technology for chatbots and visual AI tools – to ensure they have adequate safeguards preventing them from producing depictions of child sexual abuse.
"Ultimately, this is about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Experts, under strict conditions, can now detect the danger in AI systems early."
Tackling Legal Challenges
The amendments address a legal barrier: because creating and possessing CSAM is against the law, AI developers and other parties could not generate such content even as part of a testing regime. Previously, officials had to wait until AI-generated CSAM appeared online before acting on it.
The new law aims to avert that problem by helping to stop the creation of such images at their source.
Legal Structure
The government is introducing the changes as amendments to criminal justice legislation, which also establishes a ban on owning, creating or sharing AI models developed to generate child sexual abuse material.
Real-World Consequences
This week, the official visited the London headquarters of Childline and listened to a mock-up call to counsellors featuring an account of AI-based abuse. The interaction portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I hear about young people facing blackmail online, it is a source of extreme frustration to me and of justified concern amongst parents," he stated.
Alarming Data
A leading internet monitoring foundation stated that instances of AI-generated abuse material – such as webpages that may include numerous images – had more than doubled so far this year.
Instances of the most severe category of material – the most serious form of exploitation – increased from 2,621 visual files to 3,086.
- Girls were overwhelmingly victimized, accounting for 94% of illegal AI images in 2025
- Depictions of children aged from infancy to two years old increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a crucial step to guarantee AI products are safe before they are launched," stated the head of the online safety foundation.
"AI tools have made it so survivors can be targeted all over again with just a few simple actions, giving offenders the ability to make potentially endless quantities of sophisticated, photorealistic exploitative content," she added. "Material which further exploits survivors' suffering and renders young people, especially girls, more vulnerable both on and offline."
Support Interaction Information
Childline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the conversations include:
- Using AI to assess weight, physique and appearance
- AI assistants discouraging young people from consulting safe adults about abuse
- Facing harassment online with AI-generated material
- Digital extortion using AI-manipulated pictures
Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and related topics were discussed – significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.