
AI-generated images of child sexual abuse are flooding the Internet
WASHINGTON: A new flood of child sexual abuse material created by artificial intelligence is hitting a tipping point of realism, threatening to overwhelm authorities. Over the past two years, new AI technologies have made it easier for criminals to create explicit images and videos of children. Now, researchers at organisations including the Internet Watch Foundation and the National Center for Missing & Exploited Children are warning of a surge of new material this year that is nearly indistinguishable from actual abuse.
New data released July 10 by the Internet Watch Foundation, a British nonprofit that investigates and collects reports of child sexual abuse imagery, identified 1,286 AI-generated videos of child sexual abuse worldwide so far this year, compared with just two in the first half of 2024.
The videos have become smoother and more detailed, the organisation's analysts said, because of improvements in the technology and because groups on hard-to-reach parts of the Internet, known as the dark web, are collaborating to produce them.
The rise of lifelike videos adds to an explosion of AI-produced child sexual abuse material, or CSAM. In the United States, the National Center for Missing & Exploited Children said it had received 485,000 reports of AI-generated CSAM, including stills and videos, in the first half of the year, compared with 67,000 for all of 2024.
'It's a canary in the coal mine,' said Derek Ray-Hill, interim CEO of the Internet Watch Foundation. The AI-generated content can contain images of real children alongside fake images, he said, adding, 'There is an absolute tsunami we are seeing.'
The deluge of AI material threatens to make law enforcement's job even harder. Although AI-generated images still account for a tiny fraction of the child sexual abuse material found online, where reports number in the millions, police have been inundated with requests to investigate them, drawing resources away from the pursuit of those directly abusing children.
Law enforcement authorities say federal laws against child sexual abuse material and obscenity cover AI-generated images, including content that is wholly created by the technology and does not contain real images of children.
Beyond federal statutes, state legislators have also raced to criminalise AI-generated depictions of child sexual abuse, enacting more than three dozen state laws in recent years.
But courts are only just beginning to grapple with the legal implications, legal experts said.
The new technology stems from generative AI, which exploded onto the scene with OpenAI's introduction of ChatGPT in 2022. Soon after, companies introduced AI image and video generators, prompting law enforcement and child safety groups to warn about safety issues.
Much of the new AI content includes real imagery of child sexual abuse that is reused in new videos and still images. Some of the material uses photos of children scraped from school websites and social media. The images are typically shared among users in forums, via messaging on social media and on other online platforms.
In December 2023, researchers at the Stanford Internet Observatory found hundreds of examples of child sexual abuse material in a dataset used in an early version of the image generator Stable Diffusion. Stability AI, which runs Stable Diffusion, said it was not involved in the data training of the model studied by Stanford. It said an outside company had developed that version before Stability AI took over exclusive development of the image generator.
Only in recent months have AI tools become good enough to trick the human eye with an image or video, avoiding some of the previous giveaways like too many fingers on a hand, blurry backgrounds or jerky transitions between video frames.
Last month, the Internet Watch Foundation found individuals on an underground web forum praising the latest technology and remarking on how realistic a new cache of AI-generated child sexual abuse videos was. They pointed out that the videos ran smoothly, contained detailed backgrounds with paintings on walls and furniture, and depicted multiple individuals engaged in violent and illegal acts against minors.
About 35 tech companies now report AI-generated images of child sexual abuse to the National Center for Missing & Exploited Children, said John Shehan, a senior official with the group, although some are uneven in their approach. The companies filing the most reports typically are more proactive in finding and reporting images of child sexual abuse, he said.
Amazon, which offers AI tools via its cloud computing service, reported 380,000 incidents of AI-generated child sexual abuse material in the first half of the year, all of which it took down. OpenAI reported 75,000 cases. Stability AI reported fewer than 30.
Stability AI said it had introduced safeguards to enhance its safety standards and 'is deeply committed to preventing the misuse of our technology, particularly in the creation and dissemination of harmful content, including CSAM.'
Amazon and OpenAI, when asked to comment, pointed to reports they posted online that explained their efforts to detect and report child sexual abuse material.
Some criminal networks are using AI to create sexually explicit images of minors and then blackmail the children, said a Department of Justice official, who requested anonymity to discuss private investigations. Other children use apps that digitally remove clothing from images of real people, creating what is known as a deepfake nude.
Although sexual abuse images containing real children are clearly illegal, the law is still evolving on materials generated fully by artificial intelligence, some legal scholars said.
In March, a Wisconsin man who was accused by the Justice Department of illegally creating, distributing and possessing fully synthetic images of child sexual abuse successfully challenged one of the charges against him on First Amendment grounds. Judge James Peterson of US District Court for the Western District of Wisconsin said that 'the First Amendment generally protects the right to possess obscene material in the home' so long as it isn't 'actual child pornography.'
But the trial will move forward on the other charges, which relate to the production and distribution of 13,000 images created with an image generator. The man tried to share images with a minor on Instagram, which reported him, according to federal prosecutors.
'The Department of Justice views all forms of AI-generated CSAM as a serious and emerging threat,' said Matt Galeotti, head of the Justice Department's criminal division. – ©2025 The New York Times Company
This article originally appeared in The New York Times.