British Tech Companies and Child Protection Agencies to Examine AI's Ability to Generate Exploitation Images

Tech firms and child safety organizations will be granted authority to evaluate whether AI systems can produce child exploitation material under new British legislation.

Significant Increase in AI-Generated Illegal Content

The declaration came as a protection watchdog reported that AI-generated child sexual abuse material has more than doubled in the past year, rising from 199 reports in 2024 to 426 in 2025.

New Legal Framework

Under the changes, the government will permit approved AI companies and child safety organizations to inspect AI models – the underlying technology for chatbots and image generators – and verify they have adequate safeguards to stop them from producing depictions of child sexual abuse.

The measures are "fundamentally about stopping exploitation before it occurs," stated the minister for AI and online safety, adding: "Experts, under strict conditions, can now detect the risk in AI models promptly."

Tackling Regulatory Obstacles

The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others cannot generate such images as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was published online before they could act.

This law is designed to prevent that issue by enabling experts to stop the production of such images at their origin.

Legal Framework

The amendments are being added by the authorities as revisions to the crime and policing bill, which is also implementing a prohibition on possessing, creating or distributing AI models developed to generate exploitative content.

Practical Impact

Recently, the minister toured the London headquarters of a children's helpline and listened to a mock-up of a conversation with advisors featuring a report of AI-based abuse. The scenario depicted a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.

"When I learn about children facing blackmail online, it causes extreme frustration in me and justified anger amongst families," he said.

Concerning Statistics

A leading online safety organization reported that cases of AI-generated exploitation material – such as webpages that may include multiple images – had significantly increased so far this year.

Cases of the most severe content – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly victimized, making up 94% of illegal AI images in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The law change could "constitute a vital step to ensure AI tools are secure before they are launched," stated the head of the online safety organization.

"AI tools have made it possible for victims to be victimised repeatedly with just a few simple actions, giving criminals the capability to produce potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Material which further exploits survivors' trauma, and makes children, particularly girls, less safe on and offline."

Counseling Session Information

Childline also published details of support sessions where AI has been mentioned. AI-related harms raised in the sessions include:

  • Using AI to rate weight, physique and looks
  • Chatbots discouraging young people from talking to trusted guardians about harm
  • Being bullied online with AI-generated material
  • Online extortion using AI-manipulated pictures

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, conversational AI and related topics were discussed, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including using chatbots for support and AI therapy apps.

Valerie Cline