Latest news with Derek Ray-Hill


STV News
5 days ago
AI-generated child sex abuse videos 'now as lifelike as real footage'
AI-generated videos of child sexual abuse have skyrocketed in numbers and are now 'indistinguishable' from real footage, a charity has warned. The Internet Watch Foundation (IWF), which finds and helps remove abuse imagery online, said criminals were creating more realistic and more extreme sexual abuse content – and could soon be able to make and share feature-length films of the material. Highly realistic videos of abuse are no longer confined to short, glitch-filled clips that were previously common with the technology, with perpetrators now using AI to produce videos that often include the likenesses of real children on a large scale. Some 1,286 individual AI-generated child sexual abuse videos were discovered in the first half of this year, according to new IWF data published on Friday. Only two such videos were discovered over the same period last year. All of the confirmed videos so far in 2025 have been so convincing that they had to be treated under UK law exactly as if they were genuine footage, the IWF said. More than 1,000 of the videos were assessed as Category A imagery, the most extreme – which can include depictions of rape, sexual torture and bestiality. The data also showed that AI-generated child sexual abuse imagery was discovered on 210 separate webpages in the first half of this year, compared to 42 webpages in 2024, while confirmed reports of the images to the charity had risen by 400%. Each webpage can contain multiple images or videos. The figures come after the IWF previously said 291,273 reports of child sexual abuse imagery were reported last year. The charity has called on the Government to ensure the safe development and use of AI models by introducing binding regulation that ensures the technology's design is unable to be abused. Derek Ray-Hill, interim chief executive of the IWF, said: 'We must do all we can to prevent a flood of synthetic and partially synthetic content joining the already record quantities of child sexual abuse we are battling online. 'I am dismayed to see the technology continues to develop at pace, and that it continues to be abused in new and unsettling ways. 'Just as we saw with still images, AI videos of child sexual abuse have now reached the point they can be indistinguishable from genuine films. 'The children being depicted are often real and recognisable, the harm this material does is real, and the threat it poses threatens to escalate even further.' Mr Ray-Hill said the Government 'must get a grip' on the issue as it was currently 'just too easy' for criminals to produce the videos, and that feature-length AI-generated child sexual abuse films of real children were inevitable. He added: 'The Prime Minister only recently pledged that the Government will ensure tech can create a better future for children. Any delays only set back efforts to safeguard children and deliver on the Government's pledge to halve violence against girls. 'Our analysts tell us nearly all this AI abuse imagery features girls. It is clear this is yet another way girls are being targeted and endangered online.' An anonymous senior analyst at the IWF said AI child sexual abuse imagery creators had video quality that was 'leaps and bounds ahead' of what was available last year. 'The first AI child sexual abuse videos we saw were deepfakes – a known victim's face put onto an actor in an existing adult pornographic video. It wasn't sophisticated but could still be pretty convincing,' he said. 
'The first fully synthetic child sexual abuse video we saw at the beginning of last year was just a series of jerky images put together, nothing convincing.

'But now they have really turned a corner. The quality is alarmingly high, and the categories of offence depicted are becoming more extreme as the tools improve in their ability to generate video showing two or more people.

'The videos also include sets showing known victims in new scenarios.'

The IWF has advised the public to report images and videos of child sexual abuse to the charity anonymously and only once, including the exact URL where the content is located.

Safeguarding minister Jess Phillips said: 'These statistics are utterly horrific. Those who commit these crimes are just as disgusting as those who pose a threat to children in real life.

'AI-generated child sexual abuse material is a serious crime, which is why we have introduced two new laws to crack down on this vile material.

'Soon, perpetrators who own the tools that generate the material or manuals teaching them to manipulate legitimate AI tools will face longer jail sentences and we will continue to work with regulators to protect more children.'


The Star
5 days ago
AI-generated images of child sexual abuse are flooding the Internet
WASHINGTON: A new flood of child sexual abuse material created by artificial intelligence is hitting a tipping point of realism, threatening to overwhelm authorities.

Over the past two years, new AI technologies have made it easier for criminals to create explicit images and videos of children. Now, researchers at organisations including the Internet Watch Foundation and the National Center for Missing & Exploited Children are warning of a surge of new material this year that is nearly indistinguishable from actual abuse.

New data released July 10 by the Internet Watch Foundation, a British nonprofit that investigates and collects reports of child sexual abuse imagery, identified 1,286 AI-generated videos of child sexual abuse so far this year globally, compared with just two in the first half of 2024.

The videos have become smoother and more detailed, the organisation's analysts said, because of improvements in the technology and because groups on hard-to-reach parts of the Internet known as the dark web are collaborating to produce them.

The rise of lifelike videos adds to an explosion of AI-produced child sexual abuse material, or CSAM. In the United States, the National Center for Missing & Exploited Children said it had received 485,000 reports of AI-generated CSAM, including stills and videos, in the first half of the year, compared with 67,000 for all of 2024.

'It's a canary in the coal mine,' said Derek Ray-Hill, interim CEO of the Internet Watch Foundation. The AI-generated content can contain images of real children alongside fake images, he said, adding, 'There is an absolute tsunami we are seeing.'

The deluge of AI material threatens to make law enforcement's job even harder. While still a tiny fraction of the total amount of child sexual abuse material found online – reports of which number in the millions – the police have been inundated with requests to investigate AI-generated images, taking away from their pursuit of those engaging in child abuse.

Law enforcement authorities say federal laws against child sexual abuse material and obscenity cover AI-generated images, including content that is wholly created by the technology and does not contain real images of children. Beyond federal statutes, state legislators have also raced to criminalise AI-generated depictions of child sexual abuse, enacting more than three dozen state laws in recent years. But courts are only just beginning to grapple with the legal implications, legal experts said.

The new technology stems from generative AI, which exploded onto the scene with OpenAI's introduction of ChatGPT in 2022. Soon after, companies introduced AI image and video generators, prompting law enforcement and child safety groups to warn about safety issues.

Much of the new AI content includes real imagery of child sexual abuse that is reused in new videos and still images. Some of the material uses photos of children scraped from school websites and social media. Images are typically shared among users in forums, via messaging on social media and other online platforms.

In December 2023, researchers at the Stanford Internet Observatory found hundreds of examples of child sexual abuse material in a dataset used in an early version of the image generator Stable Diffusion. Stability AI, which runs Stable Diffusion, said it was not involved in the data training of the model studied by Stanford. It said an outside company had developed that version before Stability AI took over exclusive development of the image generator.
Only in recent months have AI tools become good enough to trick the human eye with an image or video, avoiding some of the previous giveaways such as too many fingers on a hand, blurry backgrounds or jerky transitions between video frames.

The Internet Watch Foundation found examples last month of individuals in an underground web forum praising the latest technology, remarking on how realistic a new cache of AI-generated child sexual abuse videos was. They pointed out how the videos ran smoothly, contained detailed backgrounds with paintings on walls and furniture, and depicted multiple individuals engaged in violent and illegal acts against minors.

About 35 tech companies now report AI-generated images of child sexual abuse to the National Center for Missing & Exploited Children, said John Shehan, a senior official with the group, although some are uneven in their approach. The companies filing the most reports typically are more proactive in finding and reporting images of child sexual abuse, he said.

Amazon, which offers AI tools via its cloud computing service, reported 380,000 incidents of AI-generated child sexual abuse material in the first half of the year, which it took down. OpenAI reported 75,000 cases. Stability AI reported fewer than 30.

Stability AI said it had introduced safeguards to enhance its safety standards and 'is deeply committed to preventing the misuse of our technology, particularly in the creation and dissemination of harmful content, including CSAM.' Amazon and OpenAI, when asked to comment, pointed to reports they had posted online explaining their efforts to detect and report child sexual abuse material.

Some criminal networks are using AI to create sexually explicit images of minors and then blackmail the children, said a Department of Justice official, who requested anonymity to discuss private investigations. Other children use apps that take images of real people and disrobe them, creating what is known as a deepfake nude.

Although sexual abuse images containing real children are clearly illegal, the law is still evolving on material generated fully by artificial intelligence, some legal scholars said.

In March, a Wisconsin man who was accused by the Justice Department of illegally creating, distributing and possessing fully synthetic images of child sexual abuse successfully challenged one of the charges against him on First Amendment grounds. Judge James Peterson of US District Court for the Western District of Wisconsin said that 'the First Amendment generally protects the right to possess obscene material in the home' so long as it isn't 'actual child pornography.' But the trial will move forward on the other charges, which relate to the production and distribution of 13,000 images created with an image generator. The man tried to share images with a minor on Instagram, which reported him, according to federal prosecutors.

'The Department of Justice views all forms of AI-generated CSAM as a serious and emerging threat,' said Matt Galeotti, head of the Justice Department's criminal division. – ©2025 The New York Times Company

This article originally appeared in The New York Times.


Los Angeles Times
6 days ago
AI-generated child abuse webpages surge 400%, alarming watchdog
Reports of child sexual abuse imagery created using artificial intelligence tools have surged 400% in the first half of 2025, according to new data from the UK-based nonprofit Internet Watch Foundation.

The organization, which monitors child sexual abuse material online, recorded 210 webpages containing AI-generated material in the first six months of 2025, up from 42 in the same period the year before, according to a report published this week. On those pages were 1,286 videos, up from just two in 2024. The majority of this content was so realistic it had to be treated under UK law as if it were actual footage, the IWF said.

Roughly 78% of the videos—1,006 in total—were classified as 'Category A,' the most severe level, which can include depictions of rape, sexual torture, and bestiality, the IWF said. Most of the videos involved girls and in some cases used the likenesses of real children.

The growing prevalence of AI-generated child abuse material has alarmed law enforcement worldwide. As generative AI tools become more accessible and sophisticated, the quality of the pictures and videos is improving, making the material harder than ever to detect using traditional techniques. While early videos were short and glitchy, the IWF now sees longer, more realistic productions featuring complex scenes and varied settings. Authorities say the content is often used for harassment and extortion.

'Just as we saw with still images, AI videos of child sexual abuse have now reached the point they can be indistinguishable from genuine films,' said Derek Ray-Hill, interim chief executive of the IWF. 'The children being depicted are often real and recognizable, the harm this material does is real, and the threat it poses threatens to escalate even further.'

Law enforcement agencies are starting to take action. In a coordinated operation earlier this year, Europol arrested 25 individuals in connection with distributing such material. More than 250 suspects were identified across 19 countries, Bloomberg reported.

The IWF called for the UK to develop a regulatory framework to ensure AI models have controls to block the production of this type of material. In February, the UK became the first country to criminalize the creation and distribution of AI tools intended to generate child abuse content. The law bans possession of AI models optimized to produce such material, as well as manuals that instruct offenders on how to do so.

In the US, the National Center for Missing & Exploited Children—an IWF counterpart—said it received over 7,000 reports related to AI-generated child sexual abuse content in 2024.

While most commercial AI tools include safeguards against generating abusive content, some open-source or custom models lack these protections, making them vulnerable to misuse.

Urbano writes for Bloomberg.

Engadget
6 days ago
Reports indicate a massive uptick in AI-generated CSAM throughout the internet
AI-generated child sexual abuse material (CSAM) has been flooding the internet, according to a report by The New York Times. Researchers at organizations like the Internet Watch Foundation and the National Center for Missing & Exploited Children are warning that this new AI-created CSAM is nearly indistinguishable from the real thing.

Let's go over some numbers. The Internet Watch Foundation, a nonprofit that investigates and collects reports of CSAM, has identified 1,286 AI-generated videos so far this year. This is compared with just two videos identified in the first half of 2024. That's an exponential increase.

"🔎 Developments in artificial intelligence (AI) come with a range of benefits, including supporting learning and innovation. There is, however, growing concern for how AI can also be misused to create and share child sexual abuse material (CSAM), referred to as AI-CSAM. In…" — Internet Watch Foundation (IWF) (@IWFhotline) July 8, 2025

The National Center for Missing & Exploited Children reaffirms those statistics. It told the NYT that it has received 485,000 reports of AI-generated CSAM, including still images and videos, in the first half of 2025. This is compared to 67,000 for all of 2024. That's another massive uptick.

'It's a canary in the coal mine,' said Derek Ray-Hill, interim chief executive of the Internet Watch Foundation. 'There is an absolute tsunami we are seeing.'

This technology is constantly improving, so the videos and images have become more realistic. The Internet Watch Foundation found an internet forum in which users were praising how realistic the new videos were. Reporting suggests that this content is distributed through the dark web, making it harder for law enforcement agencies to identify the offenders.

It's worth remembering how AI image generators work. They are trained using real images and videos. The New York Times says that much of this new glut of AI-generated content includes real CSAM that has been repurposed by the algorithm. Some of the material even uses real photos of children scraped from school websites and social media.

The issue dates back to the early days of this technology. In 2023, researchers at the Stanford Internet Observatory found hundreds of examples of CSAM in a data set used in an early version of the image generator Stable Diffusion. Stability AI says it has introduced safeguards to improve safety standards and "is deeply committed to preventing the misuse of our technology, particularly in the creation and dissemination of harmful content, including CSAM."

Findings like these have led companies to start reporting AI-generated CSAM to the National Center for Missing & Exploited Children. Amazon reported a whopping 380,000 instances of AI-generated CSAM in the first half of this year, all of which it took down. OpenAI reported 75,000 cases.

"NCMEC Applauds the California State Legislature for Passing AB 1831 and looks forward to it being signed into law. NCMEC supports AB 1831 because it addresses gaps in California's legal remedies for child victims of Generative AI CSAM.
We are heartened to see states move…" — National Center for Missing & Exploited Children (@NCMEC) September 4, 2024

Courts have been slow to catch up with this tech. The DOJ made its first known arrest last year of a man suspected of possessing and distributing AI-generated CSAM. A UK man recently got 18 months in jail for using AI to generate the foul images, which he sold.

'The Department of Justice views all forms of AI-generated CSAM as a serious and emerging threat,' Matt Galeotti, head of the Justice Department's criminal division, told the NYT.

It's worth noting that despite the alarming uptick in occurrences, AI-generated content still represents a mere fraction of all CSAM identified by authorities and watchdog organizations. For instance, the Internet Watch Foundation confirmed 291,273 reports of CSAM in 2024, while, as previously noted, just two AI-generated videos were identified in the first half of that year.

Straits Times
6 days ago
AI-generated child abuse web pages surge 400 per cent, alarming watchdog
AI tools are increasingly being used to generate child sexual abuse videos using the likeness of real children.

Reports of child sexual abuse imagery created using artificial intelligence tools have surged 400 per cent in the first half of 2025, according to new data from the Britain-based non-profit Internet Watch Foundation (IWF).

The organisation, which monitors child sexual abuse material online, recorded 210 web pages containing AI-generated material in the first six months of 2025, up from 42 in the same period the year before, according to a report published this week. On those pages were 1,286 videos, up from just two in 2024. The majority of this content was so realistic it had to be treated under British law as if it were actual footage, the IWF said.

Roughly 78 per cent of the videos – 1,006 in total – were classified as 'Category A', the most severe level, which can include depictions of rape, sexual torture and bestiality, the IWF said. Most of the videos involved girls and, in some cases, used the likenesses of real children.

The growing prevalence of AI-generated child abuse material has alarmed law enforcement worldwide. As generative AI tools become more accessible and sophisticated, the quality of the pictures and videos is improving, making the material harder than ever to detect using traditional techniques. While early videos were short and glitchy, the IWF now sees longer, more realistic productions featuring complex scenes and varied settings. The authorities say the content is often used for harassment and extortion.

'Just as we saw with still images, AI videos of child sexual abuse have now reached the point they can be indistinguishable from genuine films,' said Mr Derek Ray-Hill, interim chief executive of the IWF. 'The children being depicted are often real and recognisable. The harm this material does is real, and the threat it poses threatens to escalate even further,' he said.

Taking action

Law enforcement agencies are starting to take action. In a coordinated operation earlier in 2025, Europol arrested 25 individuals in connection with distributing such material. More than 250 suspects were identified across 19 countries, Bloomberg reported.

The IWF called for Britain to develop a regulatory framework to ensure AI models have controls to block the production of this type of material. In February, Britain became the first country to criminalise the creation and distribution of AI tools intended to generate child abuse content. The law bans possession of AI models optimised to produce such material, as well as manuals that instruct offenders on how to do so.

In the United States, the National Center for Missing & Exploited Children – an IWF counterpart – said it received more than 7,000 reports related to AI-generated child sexual abuse content in 2024.

While most commercial AI tools include safeguards against generating abusive content, some open source or custom models lack these protections, making them vulnerable to misuse. BLOOMBERG