British Technology Companies and Child Protection Agencies to Examine AI's Capability to Create Exploitation Content

Tech firms and child safety organizations will be granted authority to assess whether AI tools can generate child abuse images under new British laws.

Significant Increase in AI-Generated Illegal Content

The announcement came as figures from a safety watchdog showed that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the changes, approved AI developers and child safety organizations will be permitted to examine AI models – the foundational systems behind chatbots and image-generation tools – and verify they have sufficient safeguards to stop them producing depictions of child sexual abuse.

The measures are "fundamentally about preventing abuse before it occurs," said the minister for AI and online safety, adding: "Under rigorous conditions, experts can now identify risks in AI models promptly."

Tackling Legal Challenges

The amendments have been introduced because producing and possessing CSAM is illegal, meaning that AI developers and other parties cannot generate such content as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before dealing with it.

This legislation is aimed at averting that issue by helping to stop the creation of those images at source.

Legal Framework

The government is introducing the changes as amendments to the crime and policing bill, which will also implement a prohibition on possessing, producing or distributing AI systems designed to create exploitative content.

Practical Consequences

This week, the minister toured the London base of Childline and listened to a simulated call to advisors featuring a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.

"When I hear about young people experiencing blackmail online, it is a source of intense frustration to me and of rightful anger amongst parents," he said.

Alarming Statistics

A leading online safety organization stated that instances of AI-generated exploitation content – such as webpages that may contain numerous images – had significantly increased so far this year.

Instances of the most severe category of content – the most serious form of exploitation – rose from 2,621 image files to 3,086.

  • Girls were predominantly targeted, making up 94% of illegal AI depictions in 2025
  • Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a vital step to ensure AI products are safe before they are released," stated the chief executive of the online safety organization.

"AI tools have made it so victims can be targeted repeatedly with just a few simple actions, giving criminals the capability to make potentially endless amounts of sophisticated, photorealistic exploitative content," she added. "Material which further commodifies survivors' suffering, and makes children, particularly girls, less safe both on and offline."

Support Interaction Information

The children's helpline also published details of counselling interactions where AI has been mentioned. AI-related risks mentioned in the conversations include:

  • Using AI to evaluate body size and appearance
  • AI assistants discouraging young people from talking to safe adults about harm
  • Being bullied online with AI-generated content
  • Online extortion using AI-faked images

Between April and September this year, the helpline delivered 367 counselling sessions where AI, chatbots and associated topics were discussed, significantly more than in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Brandy Wright

Brandy Wright is a tech journalist with over a decade of experience covering consumer electronics and emerging technologies.