Latest news with #ACCCE

Sydney Morning Herald
09-06-2025
Police searched a man's laptop for malware. What they found is becoming all too common
When police searched the computer of 29-year-old IT worker Aaron Pennesi in March, they were looking for the malware he used to steal personal information from his colleagues at The Forest High School on Sydney's northern beaches. That wasn't all they found.

In an all-too-common turn of events, police stumbled upon child sexual abuse material on a laptop seized for another reason. But something was different about this content. The scenes depicted weren't real. Instead, Pennesi had used a popular AI-generation website to create the child abuse material using search prompts that are too grotesque to publish.

In an even more severe case, a Melbourne man was sentenced to 13 months in prison in July last year for offences including using an artificial-intelligence program to produce child abuse images. Police found the man had used an AI image-generation program and inputted text and images to create 793 realistic images.

As cases involving the commercial generation of AI child abuse material that is completely original and sometimes indistinguishable from the real thing become increasingly common, one expert says the phenomenon has opened a 'vortex of doom' in law enforcement's efforts to stamp out the content online.

Naive misconceptions

As the tug of war over the future of AI oscillates in the court of public opinion, one of the more terrifying realities that suggests it could do more harm than good is the ease with which it enables offenders to produce and possess child sexual abuse material. The widespread adoption of image-generation models has been a boon for paedophiles seeking to access or profit from the content online.

Interpol's immediate past director of cybercrime, Craig Jones, says the use of AI in child sexual abuse material online has 'skyrocketed' in the past 12 to 18 months. 'Anybody is able to use an online tool [to access child sexual abuse content], and with the advent of AI, those tools are a lot stronger. It allows offenders to do more,' Jones said.

The AFP-led Australian Centre to Counter Child Exploitation, or ACCCE, received 63,547 reports of online child exploitation from July 2024 to April 2025. That's a 30 per cent increase on the previous financial year, with two months remaining. 'We're seeing quite a significant increase in what's occurring online,' AFP Acting Commander Ben Moses says, noting that those statistics don't differentiate between synthetic and real child abuse content.

That's in line with the legal treatment of the issue; possessing or creating the content in either form is punishable under the same offences. But a common misconception is that AI-generated material shouldn't be taken as seriously or is not as harmful as the traditional type because no child is abused in the creation of the material.

Moses says that while identifying real victims will always be the ACCCE's priority, AI-generated content is being weaponised against real children. 'It can still be very harmful and horrific. [It] can include the ability … to generate abuse in relation to people they know. For those victims, it has significant consequences.'

In 2024, a British man was jailed for 18 years for turning photographs of real children, some younger than 13, into images to sell to other paedophiles online. The sentencing judge called the images 'chilling'.
In another British example, a BBC report in 2024 found evidence that an adults-only VR sex simulator game was being used to create child models for use in explicit sex scenes, and that models had been based on photos taken of real girls in public places.

'The other aspect of it, and what may not be well known, is cases where innocent images of children have been edited to appear sexually explicit, and those photos are then used to blackmail children into providing other intimate content,' Moses says. He says this new 'abhorrent' form of sextortion, and how it opens up new ways for offenders to victimise minors, is of great concern to the ACCCE.

Professor Michael Salter, the director of Childlight UNSW, the Australasian branch of the Global Child Safety Institute, calls the misconception that AI-generated abuse material is less harmful 'really naive'. 'The forensic evidence says that it is a serious risk to children.'

Salter says the demand for synthetic material primarily comes from serious offenders and that, generally, they also possess actual child sexual abuse content. 'It's also important to understand that a lot of the material that they're creating is extremely egregious because they can create whatever they want,' he said. 'The sort of material they're creating is extremely violent, it's extremely sadistic, and it can include imagery of actual children they want to abuse.'

Tech-savvy paedophiles

AI child sexual abuse material first crossed Salter's desk around five years ago. In that time, he's witnessed how offenders adapt to new technologies. As AI advanced, so did the opportunities for paedophiles. He explains that AI was first used to sharpen older material and later to create new images of existing victims. It has now proliferated into offenders training their own engines or using commercially available image-generation sites to create brand-new material. This can include deepfake videos featuring real people.

But Salter says what is more common is still-image generation that is frighteningly readily available. 'We have commercial image generation sites that you can go to right now, and you don't even have to look for child sexual abuse material because the generation of [it] is so popular that these sites often have trending pages, and I've seen sections where the keyword is 'pre-teen', or 'tween', or 'very young'.'

In a 2024 report, the Internet Watch Foundation (IWF) found a 380 per cent increase in reported cases of AI-generated child sexual abuse content online, noting that the material was becoming 'significantly more realistic' and that perpetrators were finding 'more success generating complex 'hardcore' scenarios' involving penetrative sexual activity, bestiality or sadism. 'One user shared an anonymous webpage containing links to fine-tuned models for 128 different named victims of child sexual abuse,' the IWF's July 2024 report on AI child sexual abuse material noted.

The IWF found evidence that AI models that depict known child abuse victims and famous children were being created and shared online. In some of the most perverse cases, this could include the re-victimisation of 'popular' real-life child abuse victims, with AI models allowing perpetrators to generate new images of an abused minor.
The report acknowledged that the usage of these fine-tuned models, known as LoRAs, was likely to go much deeper than the IWF could assess, thanks to end-to-end encrypted peer-to-peer networks that were essentially inaccessible. Moreover, Australia's eSafety Commission warns that child sexual abuse material produced by AI is 'highly scalable'. '[It requires] little effort to reproduce en masse once a model is capable of generating illegal imagery,' a spokesperson said.

Commercial interests

The rapid escalation of the amount of content available online is partially attributed to how AI has enabled the commercialisation of child sexual abuse material. 'Offenders who are quite adept at creating material are essentially taking orders to produce content, and this material is increasingly realistic,' Salter says.

Jones says that in the span of his career, he's seen the provision of child sexual abuse content go from physical photocopies being shared in small groups to it being available online in a couple of clicks. 'Unfortunately, there is a particular audience for child sexual abuse material, and what AI can do is generate that content, so [commercialisation] is serving a demand that is out there.'

In one of the biggest stings involving an AI child abuse enterprise, Danish police, in conjunction with Europol, uncovered a subscription service that commercialised access to the content. The global operation saw two Australian men charged, and 23 others apprehended around the world. 'There were over 237 subscribers to that one matter,' Moses says of Operation Cumberland. 'When we talk about proliferation and people profiting from this type of activity, this is of great concern to us.'

Swamped by the growing sea of content, officers now face the difficulty of identifying which images depict real children being abused, as opposed to an AI-generated child who doesn't exist. 'It also means that police have to spend quite a lot of time looking at material to determine whether it's real or not, which is quite a serious trauma risk for police as well,' Salter says. Moses from the ACCCE agrees that it's 'very difficult work' for officers. 'Whilst it is very confronting material, it doesn't compare to the trauma that child victims endure, and there's very much a focus on identifying victims.'

The influx of AI-generated content has complicated the ACCCE's mission in many ways, Moses says, including by robbing crucial resources from its primary goal of rescuing children who are being abused. 'It takes a lot of time to identify real victims, and the concern for us … is that the [AI-generated content] is becoming increasingly harder [to detect], and it takes time away from our people who are trying to identify real victims.'

Law enforcement 'overwhelmed'

While prosecutions for offences involving fake abuse material have increased, the rate hasn't kept up with the pace of the increase in the amount of content found online. Salter says resourcing is one of the biggest challenges facing law enforcement. 'Law enforcement is so overwhelmed with really egregious online sexual exploitation cases … their primary priority is to prevent and rescue the abuse of actual kids.' He says it's a struggle he's heard across all jurisdictions. 'They're really struggling in terms of people power, in terms of access to the technology that they need to conduct these investigations and to store that amount of material,' Salter says. 'There needs to be a huge uplift right across the law enforcement space.'
Additionally, AI-generated child sexual abuse content requires a whole reset of the way the content is detected. Older machine-detection methods involved scanning for verified abuse content, meaning material had to have already been assessed by a human as illegal before it could be detected. 'The obvious challenge we see with AI-generated material is that it's all new, and so it's very unlikely, through current detection technologies, that we can proactively screen it,' Salter says.

Unregulated threat let loose

It's a global issue that crosses jurisdictions and exists on the internet's severely under-regulated new frontier. But that hasn't deterred Australia's eSafety Commissioner, Julie Inman Grant, from introducing world-first industry standards to hold tech companies to account for the content they platform. The standards came into force in December 2024 and require storage services such as Apple's iCloud and Google Drive, messaging services, and online marketplaces that offer generative AI models to prevent their products from being misused to store or distribute child sexual abuse material and pro-terror content.

'We have engaged with both AI purveyors and the platforms and libraries that host them to ensure they are aware of their obligations under the standards,' an eSafety Commission spokesperson said. 'We believe the standards are a significant step in regulating unlawful and seriously harmful content and align with our broader efforts to ensure that AI tools, such as those used to create deepfakes, are held to the highest safety standards.' The recent passage of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 also expanded the available criminal offences relating to non-consensual, sexually explicit AI-generated material.

While international companies can face multimillion-dollar penalties for breaches of the eSafety Commission's standards in Australia, major tech players such as Meta are increasingly adopting end-to-end encryption, which means even the companies themselves can't see what content they're hosting, let alone law enforcement.

Interpol works at the forefront of these issues, often acting as a bridge between authorities and the private sector. Jones observes that while interventions such as Australia's new standards play an important role in setting high standards for tech companies, encryption and other privacy policies make it 'very hard for law enforcement to get those data sets'. International co-operation is crucial for successfully prosecuting commercial child sexual abuse content cases, and Jones says that in best-practice examples, when a global chain is identified, the tech industry is brought in as part of the investigation.

'I'm seeing more of an involvement in the tech sector around supporting law enforcement. But that's sometimes at odds with encryption and things like that,' Jones says. 'I think the tech industry has a duty of care to the communities that they serve. So I don't think it's good enough to say, 'Oh, well, it's encrypted. We don't know what's there.' '

Salter takes a more pessimistic view of the tech industry's actions, arguing that most companies are moving away from, not towards, proactively monitoring the presence of child sexual abuse content. 'The emergence of AI has been something of a vortex of doom in the online child protection space,' Salter says.
Online child protection efforts were already overwhelmed, he says, before the tech sector 'created a new threat to children' and 'released [it] into the wild with no child protection safeguards'. 'And that's very typical behaviour.'


The Advertiser
03-06-2025
Blackmail scam aimed at teens leads to arrests by Australian police, FBI
Almost two dozen alleged online sextortion perpetrators have been arrested amid an international probe into the blackmail of teenagers in Australia, the United States and Canada. The US Federal Bureau of Investigation (FBI) partnered with the Australian Federal Police (AFP) and other global agencies to arrest 22 sextortion suspects in Nigeria. Two of the alleged offenders were linked to the suicide of a 16-year-old boy in NSW in 2023, police said.

Police believe the boy took his own life after engaging with the scammers online, who threatened to share personal photos with his family and friends if he did not pay $500. "The network's scheme, which coerced victims into sharing sexually explicit images before threatening to distribute those images unless payment was made, had devastating consequences," police said. More than 20 teen suicides in the US have been linked to sextortion scams since 2021.

The joint operation, Operation Artemis, included two AFP investigators deployed in Nigeria to trace online activity, link digital evidence to suspects, and help in the identification of perpetrators and victims. Investigators from the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) also offered expert analysis on data seized by foreign law enforcement, police said. In Australia, the ACCCE received a total of 58,503 reports of online child exploitation, including 1554 sextortion-related reports, in the 2023 to 2024 financial year.

AFP acting commander Ben Moses said the global operation sent a clear message to scammers targeting children online. "Law enforcement is united and determined to find you - no matter where you hide," he said. "These crimes are calculated and devastating, often pushing vulnerable young people into extreme distress. Thanks to the coordinated action of our partners, we achieved meaningful results including an immediate and significant reduction in sextortion reports across Australia."

Help is available: The AFP-led ThinkUKnow program has developed an online blackmail and sexual extortion response kit, aimed at young people aged 13 to 17, which is available from the ThinkUKnow and ACCCE websites. The ACCCE has also created a dedicated sextortion help page with resources and information on how to report sextortion. Members of the public who have information about people involved in online child sexual exploitation are urged to contact the ACCCE. If you know abuse is happening right now, or a child is at risk, call police immediately on 000. If you, or someone you know, is impacted by child sexual abuse and online exploitation, support services are available.


Canada Standard
02-06-2025
Global Sextortion Sting Nabs 22 in Nigeria, Including Suspects Linked to Aussie Teen's Suicide
SYDNEY, NSW, Australia - A major international operation targeting online sextortion has resulted in the arrest of 22 suspects in Nigeria, including two individuals connected to the death of a 16-year-old Australian boy in 2023.

The operation, codenamed Artemis, was led by the U.S. Federal Bureau of Investigation (FBI) in collaboration with the Australian Federal Police (AFP), the Royal Canadian Mounted Police, and Nigeria's Economic and Financial Crimes Commission. The joint effort dismantled a criminal network accused of extorting thousands of teenagers worldwide by coercing them into sending explicit images and then demanding money under threats of releasing the material.

The consequences of these crimes have been devastating. In the U.S., more than 20 teenage suicides since 2021 have been linked to sextortion. While many victims were in North America, the AFP confirmed the scheme also affected Australian children, with over 1,500 sextortion-related reports made to the Australian Centre to Counter Child Exploitation (ACCCE) in the past financial year.

AFP investigators deployed to Nigeria played a crucial role in tracking digital evidence, identifying suspects, and assisting international partners. Following the arrests in early 2023, Australian authorities noted a sharp decline in sextortion reports, though they warn the threat remains.

"This operation sends a clear message: law enforcement will find you, no matter where you are," AFP Acting Commander Ben Moses, who heads the ACCCE, said on Monday. "These crimes prey on vulnerable young people, pushing them into extreme distress. Thanks to global cooperation, we've seen real results, including a significant drop in cases."

Despite the progress, authorities urge vigilance, as offenders continue targeting minors online. The AFP works with state police to support victims, including mental health referrals and efforts to remove harmful content.

To combat the issue, the AFP's ThinkUKnow program offers an online safety kit for teens, while the ACCCE provides a dedicated sextortion help page with reporting tools. The public is urged to report any information on child exploitation to the ACCCE or call 000 in emergencies. Support services are available for victims of online abuse.


7NEWS
02-06-2025
Police arrest 22 men over sextortion after NSW teen dies in bedroom
Trigger warning: This article contains descriptions of child abuse.

A dedicated operation targeting sextortion has resulted in the arrest of 22 suspects in Nigeria. Two of those men are allegedly linked to the death of a 16-year-old NSW boy, who was found dead in his bedroom in 2023.

The teen boy had believed he was talking to a woman of European background on social media when he began to receive increasingly sexualised images. He was initially lured into the conversation with non-sexual 'banter', before he started receiving bikini shots. 'As the conversation progressed, there were further images exchanged,' NSW Police detective superintendent Matthew Craft said at the time.

The teen was coerced into sending back an image to his perpetrator, which was then used to threaten him: the boy was told to pay $500 or the image would be sent to his friends and family. The boy died by suicide in his bedroom within seven hours of receiving the threatening messages. Police discovered the threats on his phone during the investigation into his death, and described the messages as 'horrific'.

'These crimes are calculated and devastating, often pushing vulnerable young people into extreme distress,' the AFP said on Monday.

The crime impacts thousands of teens globally. The Australian Centre to Counter Child Exploitation (ACCCE) received 1554 sextortion-related reports in the 2024 financial year alone. More than 20 teenage suicides in the US have been linked to sextortion-related cases since 2021, the AFP said. 'While many victims were based in North America, the ripple effects of the offending extended to Australia and other nations,' the AFP said.

Two AFP officers were deployed to Nigeria as part of Operation Artemis, which is led by the US Federal Bureau of Investigation in partnership with the AFP, the Royal Canadian Mounted Police, and Nigeria's Economic and Financial Crimes Commission.

The two men linked to the 16-year-old NSW boy's sextortion case were found and arrested in a slum village. They were charged over the sextortion, but not the teen's death. Another 20 Nigerian nationals were arrested as part of Operation Artemis.

'Since the successful conclusion of the arrest phase of Operation Artemis in early 2023, the AFP observed an immediate reduction in sextortion-related reports,' the AFP said. 'The targeting of Australian children by offenders online remains ongoing, however.'

The AFP-led ThinkUKnow program has an online blackmail and sexual extortion response kit aimed at young people aged 13 to 17, available from the ThinkUKnow and ACCCE websites.