Britain's first line of defence against child abuse images has warned that it risks being overwhelmed by a flood of fake content generated by artificial intelligence.
The Internet Watch Foundation (IWF), which monitors cases of child sexual abuse material and works to block illegal content, said real victims would be put at increased risk if its staff were deluged by industrialised production of fake images.
Police and politicians have warned that lifelike pictures created by paedophiles using AI image generation programs are a growing threat.
The IWF is responsible for tracking down thousands of illegal images of child abuse each week, alerting police when it suspects that a child could be in danger and notifying overseas counterparts when an image is hosted abroad.
Dan Sexton, the organisation's chief technical officer, said: "Our focus is on protecting children. If a child can be identified and safeguarded, that is always a priority for analysts.
"If AI imagery of child sexual abuse does become indistinguishable from real imagery, there is a danger that IWF analysts could waste precious time attempting to identify and help law enforcement protect children that do not exist, to the detriment of real victims.
"AI generated imagery is an emerging technology which we are keeping a very close eye on. We know criminals will, and do, abuse any technology they can to distribute and make imagery of child sexual abuse.
"Regardless of how it is created we would still want to find and remove it from the internet. Far from being a victimless crime, this imagery serves to normalise and ingrain the sexual abuse of children in the minds of offenders."
Use of AI image generation tools has exploded in recent months, letting users create photo-realistic images in seconds with just a few written instructions.
While the most prominent programs have introduced restrictions on generating illegal or pornographic material, users have shared guides on bypassing these controls or turned to "open source" alternatives without such restrictions.
ActiveFence, a company that monitors online forums, said it had identified 68 sets of AI-created child abuse images on one website in the first four months of this year, against 25 in the last four months of 2022.
There have also been fears that illegal content may have featured in the datasets of billions of photos used to "train" the software. Fake child abuse images are illegal to own and distribute in Britain.
One content moderation executive said that AI-generated images created a huge problem for investigators because they would not be recognised by the software used to tell whether an illegal image has been reported before.
This would mean a large number of fake images could be treated as brand new, leading to false concerns that a child is in danger.
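To illustrate the detection gap the executive describes, here is a minimal sketch of hash-based image matching in Python. It uses a toy "average hash" rather than a production system such as Microsoft's PhotoDNA, whose algorithm is not public; the function names and threshold value are hypothetical, chosen only to show the principle.

```python
# Minimal, illustrative sketch of hash-based image matching.
# Production systems use robust perceptual hashes (e.g. PhotoDNA);
# this toy "average hash" only demonstrates the principle.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to an 8x8 greyscale grid, then set one bit per
    pixel depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# An image "matches" a previously reported one if the fingerprints
# are close enough. A freshly generated AI image matches nothing in
# the database, so it is treated as brand new: the failure mode the
# article describes.
MATCH_THRESHOLD = 5  # illustrative value only

def is_known(candidate: int, known_hashes: list[int]) -> bool:
    return any(hamming_distance(candidate, h) <= MATCH_THRESHOLD
               for h in known_hashes)
```

Because such fingerprints are derived from an image's pixels, every novel AI-generated picture produces a fingerprint absent from any database of previously reported material, which is why each one demands fresh human review.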
A spokesman for the National Crime Agency said: "We constantly review the impact that new technologies, such as synthetic media (including that generated using AI), can have on the child sexual abuse threat, and work closely with partners across law enforcement, the wider government and the private sector, to ensure our capabilities can continue to detect and investigate synthetic media as the technology develops."
Baroness Kidron, the child safety campaigner and crossbench peer, has pushed for AI-generated abuse to be included in the Government's Online Safety Bill, which will introduce huge fines for companies that fail to tackle harmful content.
She said she introduced an amendment after police told her that they had seen an explosion in AI-generated abuse images this year.
AI companies said this week that policymakers should focus on an "extinction" threat on the scale of pandemics and nuclear war, a warning that critics said distracts from issues arising today.
Rishi Sunak is due to discuss AI with Joe Biden in Washington next week.