Police searched a man's laptop for malware. What they found is becoming all too common

When police searched the computer of 29-year-old IT worker Aaron Pennesi in March, they were looking for the malware he used to steal personal information from his colleagues at The Forest High School on Sydney's northern beaches.
That wasn't all they found. In an all-too-common turn of events, police stumbled upon child sexual abuse material on a laptop seized for another reason. But something was different about this content.
The scenes depicted weren't real.
Instead, Pennesi had used a popular AI-generation website to create the child abuse material, using prompts too grotesque to publish.
In an even more severe case, a Melbourne man was sentenced to 13 months in prison in July last year for offences including using an artificial-intelligence program to produce child abuse images.
Police found the man had fed text and images into an AI image-generation program to create 793 realistic images.
Cases involving the commercial generation of AI child abuse material, content that is entirely original and sometimes indistinguishable from the real thing, are becoming increasingly common. One expert says the phenomenon has opened a 'vortex of doom' in law enforcement's efforts to stamp out such content online.
Naive misconceptions
As the debate over the future of AI plays out in the court of public opinion, one of the more terrifying signs that it could do more harm than good is the ease with which it enables offenders to produce and possess child sexual abuse material.
The widespread adoption of image-generation models has been a boon for paedophiles seeking to access or profit from the content online.
Interpol's immediate past director of cybercrime, Craig Jones, says the use of AI in child sexual abuse material online has 'skyrocketed' in the past 12 to 18 months.
'Anybody is able to use an online tool [to access child sexual abuse content], and with the advent of AI, those tools are a lot stronger. It allows offenders to do more,' Jones said.
The AFP-led Australian Centre to Counter Child Exploitation, or ACCCE, received 63,547 reports of online child exploitation from July 2024 to April 2025. That's a 30 per cent increase on the previous financial year, with two months remaining.
'We're seeing quite a significant increase in what's occurring online,' AFP Acting Commander Ben Moses says, noting that those statistics don't differentiate between synthetic and real child abuse content.
That's in line with the legal treatment of the issue; possessing or creating the content in either form is punishable under the same offences.
But a common misconception is that AI-generated material is less serious, or less harmful, than the traditional type because no child is abused in its creation.
Moses says that while identifying real victims will always be the ACCCE's priority, AI-generated content is being weaponised against real children.
'It can still be very harmful and horrific. [It] can include the ability … to generate abuse in relation to people they know. For those victims, it has significant consequences.'
In 2024, a British man was jailed for 18 years for turning photographs of real children, some younger than 13, into abuse images that he sold to other paedophiles online. The sentencing judge called the images 'chilling'.
In another British example, a BBC report in 2024 found evidence that an adults-only VR sex simulator game was being used to create child models for use in explicit sex scenes, and that some of the models had been based on photos taken of real girls in public places.
'The other aspect of it, and what may not be well known, is cases where innocent images of children have been edited to appear sexually explicit, and those photos are then used to blackmail children into providing other intimate content,' Moses says.
Moses says this new 'abhorrent' form of sextortion, and how it opens up new ways for offenders to victimise minors, is of great concern to the ACCCE.
Professor Michael Salter, the director of Childlight UNSW, the Australasian branch of the Global Child Safety Institute, calls the misconception that AI-generated abuse material is less harmful 'really naive'.
'The forensic evidence says that it is a serious risk to children.'
'The emergence of AI has been something of a vortex of doom in the online child protection space.'
Professor Michael Salter
Salter says the demand for synthetic material primarily comes from serious offenders and that, generally, they also possess actual child sexual abuse content.
'It's also important to understand that a lot of the material that they're creating is extremely egregious because they can create whatever they want,' he said.
'The sort of material they're creating is extremely violent, it's extremely sadistic, and it can include imagery of actual children they want to abuse.'
Tech-savvy paedophiles
AI child sexual abuse material first crossed Michael Salter's desk around five years ago. Since then, he's watched offenders adapt to new technologies. As AI advanced, so did the opportunities for paedophiles.
He explains that AI was first used to sharpen older material and later to create new images of existing victims. Offenders have since progressed to training their own models or using commercially available image-generation sites to create entirely new material.
This can include deepfake videos featuring real people. But Salter says still-image generation, which is frighteningly easy to access, is more common.
'We have commercial image generation sites that you can go to right now, and you don't even have to look for child sexual abuse material because the generation of [it] is so popular that these sites often have trending pages, and I've seen sections where the keyword is 'pre-teen', or 'tween', or 'very young'.'
In a 2024 report, the Internet Watch Foundation (IWF) found a 380 per cent increase in reported cases of AI-generated child sexual abuse content online, noting that the material was becoming 'significantly more realistic' and that perpetrators were finding 'more success generating complex 'hardcore' scenarios' involving penetrative sexual activity, bestiality or sadism.
'One user shared an anonymous webpage containing links to fine-tuned models for 128 different named victims of child sexual abuse.'
Internet Watch Foundation's July 2024 AI child sexual abuse material report
The IWF found evidence that AI models that depict known child abuse victims and famous children were being created and shared online. In some of the most perverse cases, this could include the re-victimisation of 'popular' real-life child abuse victims, with AI models allowing perpetrators to generate new images of an abused minor.
The report acknowledged that the use of these fine-tuned models, known as LoRAs, was likely to go much deeper than the IWF could assess, because of end-to-end encrypted peer-to-peer networks that were essentially inaccessible.
Moreover, Australia's eSafety commissioner warns that child sexual abuse material produced by AI is 'highly scalable'.
'[It requires] little effort to reproduce en masse once a model is capable of generating illegal imagery,' a spokesperson said.
Commercial interests
The rapid growth in the amount of content available online is partly attributed to the way AI has enabled the commercialisation of child sexual abuse material.
'Offenders who are quite adept at creating material are essentially taking orders to produce content, and this material is increasingly realistic,' Salter says.
Jones says that in the span of his career, he's seen the provision of child sexual abuse content go from physical photocopies being shared in small groups to it being available online in a couple of clicks.
'Unfortunately, there is a particular audience for child sexual abuse material, and what AI can do is generate that content, so [commercialisation] is serving a demand that is out there.'
In one of the biggest stings involving an AI child abuse enterprise, Danish police, working with Europol, uncovered a subscription service that sold access to the content. The global operation saw two Australian men charged and 23 others apprehended around the world.
'There were over 237 subscribers to that one matter,' Moses says of Operation Cumberland. 'When we talk about proliferation and people profiting from this type of activity, this is of great concern to us.'
Swamped by the growing sea of content, officers now face the difficulty of determining which images depict real children being abused, as opposed to an AI-generated child who doesn't exist.
'It also means that police have to spend quite a lot of time looking at material to determine whether it's real or not, which is quite a serious trauma risk for police as well,' Salter says.
Moses from the ACCCE agrees that it's 'very difficult work' for officers. 'Whilst it is very confronting material, it doesn't compare to the trauma that child victims endure, and there's very much a focus on identifying victims.'
The influx of AI-generated content has complicated the ACCCE's mission in many ways, Moses says, including by diverting crucial resources from its primary goal of rescuing children who are being abused.
'It takes a lot of time to identify real victims, and the concern for us … is that the [AI-generated content] is becoming increasingly harder [to detect], and it takes time away from our people who are trying to identify real victims.'
Law enforcement 'overwhelmed'
While prosecutions for offences involving fake abuse material have increased, they haven't kept pace with the growth in the amount of content found online.
Salter says resourcing is one of the biggest challenges facing law enforcement.
'Law enforcement is so overwhelmed with really egregious online sexual exploitation cases … their primary priority is to prevent and rescue the abuse of actual kids.'
He says it's a struggle he's heard across all jurisdictions.
'They're really struggling in terms of people power, in terms of access to the technology that they need to conduct these investigations and to store that amount of material,' Salter says.
'There needs to be a huge uplift right across the law enforcement space.'
Additionally, AI-generated child sexual abuse content demands a wholesale rethink of how the material is detected.
Existing automated detection methods scan for verified abuse content, which means an image must already have been assessed by a human as illegal before it can be flagged.
'The obvious challenge we see with AI-generated material is that it's all new, and so it's very unlikely, through current detection technologies, that we can proactively screen it,' Salter says.
Unregulated threat let loose
It's a global issue that crosses jurisdictions and exists on the internet's severely under-regulated new frontier. But that hasn't deterred Australia's eSafety commissioner, Julie Inman Grant, from introducing world-first industry standards to hold tech companies to account for the content they platform.
The standards came into force in December 2024 and require storage services such as Apple's iCloud and Google Drive, messaging services, and online marketplaces that offer generative AI models to prevent their products from being misused to store or distribute child sexual abuse material and pro-terror content.
'We have engaged with both AI purveyors and the platforms and libraries that host them to ensure they are aware of their obligations under the standards,' an eSafety spokesperson said.
'We believe the standards are a significant step in regulating unlawful and seriously harmful content and align with our broader efforts to ensure that AI tools, such as those used to create deepfakes, are held to the highest safety standards.'
The recent passage of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 also expanded the criminal offences covering non-consensual, sexually explicit AI-generated material.
While international companies can face multimillion-dollar penalties for breaching the eSafety Commissioner's standards in Australia, major tech players such as Meta are increasingly adopting end-to-end encryption, which means even the companies themselves can't see what content they're hosting, let alone law enforcement.
Interpol works at the forefront of these issues, often acting as a bridge between authorities and the private sector. Jones observes that while interventions such as Australia's new standards play an important role in holding tech companies to a high bar, encryption and other privacy policies make it 'very hard for law enforcement to get those data sets'.
International co-operation is crucial for successfully prosecuting commercial child sexual abuse content cases, and Jones says that in best practice examples, when a global chain is identified, the tech industry is brought in as part of the investigation.
'I'm seeing more of an involvement in the tech sector around supporting law enforcement. But that's sometimes at odds with encryption and things like that,' Jones says.
'I think the tech industry has a duty of care to the communities that they serve. So I don't think it's good enough to say, 'Oh, well, it's encrypted. We don't know what's there.' '
Salter takes a more pessimistic view of the tech industry's actions, arguing that most companies are moving away from, not towards, proactively monitoring the presence of child sexual abuse content.
'The emergence of AI has been something of a vortex of doom in the online child protection space,' Salter says.
Online child protection efforts were already overwhelmed, he says, before the tech sector 'created a new threat to children' and 'released [it] into the wild with no child protection safeguards'.
'And that's very typical behaviour.'