Women reveal horror of finding deepfake porn images of themselves on sick site that encouraged men to rape and degrade underage girls as young as six


Daily Mail · 23-04-2025
Women have laid bare the horror of learning that the man peddling fake AI-generated pornographic photos of them was a high-school friend.
Dozens of students and graduates of General Douglas MacArthur High School in Levittown, New York, had gone to the police after discovering that innocent pictures of them from social media were being digitally doctored and posted alongside horrific messages encouraging users to 'rape' and otherwise degrade them.
They were all the more horrified to discover that the perpetrator was a fellow pupil who had grown up alongside the young women and was, at times, even a close friend.
More than 1,300 posts were shared across some 14 different usernames on a sick site that encouraged men to share pictures of themselves masturbating on printed-out images.
In 2023, Patrick Carey, 24, was sentenced to six months in jail, with 10 years probation after admitting to offenses including promotion of a sexual performance by a child, aggravated harassment as a hate crime, stalking and endangering the welfare of a child, the New York Post reported.
And some of the 14 women he was said to have shared content of have now spoken about the trauma of dealing with the harassment years on, on a new Bloomberg podcast titled Levittown, hosted by reporters Olivia Carville and Margi Murphy.
One of the victims, who was identified only as 'Kayla', 24, spoke about the disgust of finding out the photos of her were out there during the 2020 pandemic.
Her father, a policeman, was doing a routine Google search of his children when he was shocked to see an apparently nude photo of his daughter come up.
When he showed Kayla, she was baffled - as the image, in which she was originally dressed, had been manipulated.
A further search on the site saw more altered pictures, taken from her online profiles, with worrying messages alongside, expressing what depraved users wanted to do to her.
'It was "drink her piss", "milk her", "have her drink my piss",' she shared.
'We would see what they posted, like their nude pictures and them … j***ing off and c**ing on our pictures, even like pictures of our pictures with their d*** there and the ejaculation there. And then there was like some like, writings of like, "rape her".'
Soon enough, it became clear that the problem was more widespread, as Kayla was approached by a fellow student at the school known in the podcast as 'Cecilia'.
She, too, was among several girls from General Douglas MacArthur who were featured on the site.
As had been the case for Kayla, images from their online profiles had been manipulated to appear sexually explicit - including pictures of the women when they were as young as six years old.
In summer 2021, another Levittown local - called 'Kat' in the podcast - recounted how one awful experience led her to uncover Patrick, whose father was a policeman.
She came across a deepfake of herself - originally a sweet snap where she was smiling - made to seem like 'her hands tied behind her back, covered in blood, with a plastic bag over her head'.
In a caption, the post - which listed her real name - claimed that 'her body had been found near an abandoned construction site' and 'that she'd been raped'.
'I'd had enough. It had to stop. I was like, OK, this is like more serious and I need to know who it is.'
It appeared that at this stage rumors were circulating that Patrick was behind the fakes, but Kat - who had known him since she was five - was 'almost defending him in her head'.
'He was smart, so, like, he knew what he was talking about, you know? But anybody who said anything to him, it's just like, OK, I don't care. I'm smarter than anybody…
'That's when I went on the website. I had to look at things that I did not want to look at.
'I spent hours going through it. He would post pictures of himself - not his face, like, his body and his parts.'
In one image posted by the user, he looked to be in a child's bedroom - and Kat decided to inspect the background.
'So I saw stuffed animals and I was like, all right, let me see,' she continued, deciding to compare the furniture with that featured on his little sister's TikTok profile - and horrifically, it matched up.
Cecilia described Patrick as 'very talkative' and 'easy to talk to' when she was in high school.
'When I was a sophomore, I had a lot of classes with him. I had almost all of my day with him.
'And we were friends; we would talk. Until a month or two into the school year, he blocked me on every social media that we had together and stopped talking to me in class.
'And he reached out to me and said: I don't hate you, by the way. He said, I would sooner restrict you from all formats of contacting me. I'm extremely attracted to you, so before that becomes an inevitable problem or upset for me, I might as well stop myself from even trying. Does that explanation suffice for you?
'And I said, I guess it does. And that was that.'
She admitted her soul felt 'broken in half' because of the betrayal of trust from someone she considered a friend.
'You watched me kind of grow up, you know, you spent most of my teenage years with me,' she said.
'So you're telling me that all of that time that you spent with me, watching me kind of become the person that I am, was enjoyment to you because you were watching me turn into rape meat.'
Eventually, the evidence found its way to Nassau County Police Department.
Patrick, however, only got six months in jail - with 10 years probation.
In the end, it was not his campaign of terror that allowed the prosecution to build a case against him - but a real nude photo of one of the victims - 'a real picture that had been taken by an ex-boyfriend, who had then shared it among classmates, including Patrick'.
It was considered not only revenge porn - but, because the victim was just 14, child sexual abuse material.
He pleaded guilty in 2022.
Speaking to the podcast, Detective Timothy Ingram said that while the victims were incredible in making their case, when it came to charges it was more complicated.
'These girls, they did their own investigating and they did excellent work,' he expressed. 'They would make great detectives.
'A lot of my co-workers hadn't seen anything like this, where it was to this level of, you know, him posting everywhere for so long with so many victims.'
More than 40 women in Levittown were reportedly affected - but the site's reach was far broader.
The podcast also spoke to a victim of the same website in New Zealand - where, in a horrifyingly similar twist, the perpetrator was known to the woman.
'I was under the age of 18, so I could not have a Tinder. But some guys that I knew, who were maybe one or two years older than me, sent me screenshots of a Tinder account with all of my photos on it and were asking if it was me,' a woman known as 'Lucy' shared.
'When this person matched with a boy on Tinder, they would send them a Snapchat account - and through that Snapchat account would send like very explicit, nude and sexual content but without a face.'
These were not altered images of Lucy, but rather, were made to appear as if she was exchanging faceless nudes.
'In 2020, I got a very unexpected email from the New Zealand police asking to have a phone call with me,' she recalled.
'And I was so suspicious at this point of online harassment and the ability to fake anything online that my initial response was that this was my harasser pretending to be a police officer trying to get in touch with me.'
However, she soon realized a 'wider case' was happening 'revolving around another woman who was being harassed in a similar way'.
'They had reverse engineered some evidence that led them to me.'
Investigators Will Wallace and Doug Nuku found some 12 other victims in the country.
While the perpetrator was skilled at hiding his identity, he was eventually discovered after investigators followed the suspect around various sites for months - including Facebook Marketplace, where he posted an electric guitar for sale, laid out on a floral comforter.
The innocuous item matched with a photo he had shared on another pornographic site, and soon enough they tracked down the assailant - Finn Cottam.
'I remember the distinct sensation of time slowing down and then they just said a name. And I don't know what I expected, but they said a name that was very familiar to me. It was the name of a boy that I went to school with when I was 12,' Lucy said.
'I felt such a wave of relief that it was a person. Because I guess when someone lives in anonymity, you think they're a monster. And when you hear a name, they're suddenly just a person.'
In 2024, Cottam was sentenced to seven years in jail. As reported by the New Zealand Herald, when he was tracked down, Cottam was found with 'more than 8000 objectionable images and videos, including child exploitation material, on multiple devices he owned'.
He was imprisoned for what the court described as a 'sustained campaign of sexual terror'.

Related Articles

Nudifying apps are not 'a bit of fun' - they are seriously harmful and their existence is a scandal writes Children's Commissioner RACHEL DE SOUZA
Daily Mail · 14 hours ago

I am horrified that children are growing up in a world where anyone can take a photo of them and digitally remove their clothes. They are growing up in a world where anyone can download the building blocks to develop an AI tool which can create naked photos of real people. It will soon be illegal to use these building blocks in this way, but they will remain for sale by some of the biggest technology companies, meaning they are still open to be misused.

Earlier this year I published research looking at the existence of these apps, which use Generative Artificial Intelligence (GenAI) to create fake sexually explicit images through prompts from users. The report exposed the shocking underworld of deepfakes: it highlighted that nearly all deepfakes in circulation are pornographic in nature, and 99% of them feature girls or women - often because the apps are specifically trained to work on female bodies.

In the past four years as Children's Commissioner, I have heard from a million children about their lives, their aspirations and their worries. Of all the worrying trends in online activity children have spoken to me about - from seeing hardcore porn on X to cosmetics and vapes being advertised to them through TikTok - the evolution of 'nudifying' apps into tools that aid in the abuse and exploitation of children is perhaps the most mind-boggling. As one 16-year-old girl asked me: 'Do you know what the purpose of deepfake is? Because I don't see any positives.'

Children, especially girls, are growing up fearing that a smartphone might at any point be used as a way of manipulating them. Girls tell me they're taking steps to keep themselves safe online in the same way we have come to expect in real life, like not walking home alone at night. For boys, the risks are different but equally harmful: studies have identified online communities of teenage boys sharing dangerous material as an emerging route to radicalisation and extremism.
The government is rightly taking some welcome steps to limit the dangers of AI. Through its Crime and Policing Bill, it will become illegal to possess, create or distribute AI tools designed to create child sexual abuse material. And the introduction of the Online Safety Act - and new regulations by Ofcom to protect children - marks a moment for optimism that real change is possible. But what children have told me, from their own experiences, is that we must go much further and faster.

The way AI apps are developed is shrouded in secrecy. There is no oversight, no testing of whether they can be used for illegal purposes, no consideration of the inadvertent risks to younger users. That must change. Nudifying apps should simply not be allowed to exist. It should not be possible for an app to generate a sexual image of a child, whether or not that was its designed intent.

The technology used by these tools to create sexually explicit images is complex. It is designed to distort reality, to fixate and fascinate the user - and it confronts children with concepts they cannot yet understand. I should not have to tell the government to bring in protections for children to stop these building blocks from being arranged in this way. Posts on LinkedIn have even appeared promoting the 'best' nudifying AI tools available.

I welcome the move to criminalise individuals for creating child sexual abuse image generators, but urge the government to move the tools that would allow predators to create sexually explicit deepfake images out of reach altogether. To do this, I have asked the government to require technology companies who provide open-source AI models - the building blocks of AI tools - to test their products for their capacity to be used for illegal and harmful activity. These are all things children have told me they want. They will help stop sexual imagery involving children becoming normalised.
And they will make a significant contribution to meeting the government's admirable mission to halve violence against women and girls, who are almost exclusively the subjects of these sexual deepfakes.

Harms to children online are not inevitable. We cannot shrug our shoulders in defeat and claim it's impossible to remove the risks from evolving technology. We cannot dismiss this growing online threat as a 'classroom problem' - because evidence from my survey of school and college leaders shows that the vast majority already restrict phone use: 90% of secondary schools and 99.8% of primary schools. Yet, despite those restrictions, in the same survey of around 19,000 school leaders, they told me online safety is among the most pressing issues facing children in their communities. For them, it is children's access to screens in the hours outside of school that worries them the most.

Education is only part of the solution. The challenge begins at home. We must not outsource parenting to our schools and teachers. As parents it can feel overwhelming to try to navigate the same technology as our children. How do we enforce boundaries on things that move too quickly for us to follow? But that's exactly what children have told me they want from their parents: limitations, rules and protection from falling down a rabbit hole of scrolling.

Two years ago, I brought together teenagers and young adults to ask, if they could turn back the clock, what advice they wished they had been given before owning a phone. Invariably those 16-21-year-olds agreed they had all been given a phone too young. They also told me they wished their parents had talked to them about the things they saw online - not just as a one-off, but regularly, openly, and without stigma. Later this year I'll be repeating that piece of work to produce new guidance for parents - because they deserve to feel confident setting boundaries on phone use, even when it's far outside their comfort zone.
I want them to feel empowered to make decisions for their own families, whether that's not allowing their child to have an internet-enabled phone too young, enforcing screen-time limits while at home, or insisting on keeping phones downstairs and out of bedrooms overnight.

Parents also deserve to be confident that the companies behind the technology on our children's screens are playing their part. Just last month, new regulations by Ofcom came into force, through the Online Safety Act, that mean tech companies must now identify and tackle the risks to children on their platforms - or face consequences. This is long overdue, because for too long tech developers have been allowed to turn a blind eye to the risks to young users on their platforms - even as children tell them what they are seeing.

If these regulations are to remain effective and fit for the future, they have to keep pace with emerging technology - nothing can be too hard to tackle. The government has the opportunity to bring in AI product testing against illegal and harmful activity in the AI Bill, which I urge it to introduce in the coming parliamentary session. It will rightly make technology companies responsible for their tools being used for illegal purposes.

We owe it to our children, and the generations of children to come, to stop these harms in their tracks. Nudifying apps must never be accepted as just another restriction placed on our children's freedom, or one more risk to their mental wellbeing. They have no place in a society where we value the safety and sanctity of childhood and family life.

Teenage boys using 'nudifying' AI apps to make X-rated images of girls and teachers at school
Daily Mail · 15 hours ago

Experts have warned of 'a massive explosion' in boys - some as young as 13 - using free AI programs to create lifelike fake nude images of fellow pupils. It is claimed that girls have even been driven to suicide after falling victim to the so-called 'nudifying' smartphone apps. The apps can turn fully clothed photos of classmates - as well as teachers - into realistic-looking explicit nude images. They are thought to now be in use in 'every classroom', and teenage pupils have already been convicted of creating and sharing the images. Under current law, creating, possessing and distributing an indecent image of a child are offences which carry substantial prison terms.

Marcus Johnstone, a criminal defence lawyer, said that there has been a 'massive explosion' in such crimes by children who are often unaware how serious their actions are. 'I am aware of some perpetrators being 13 but they're mostly 14 and 15,' he said. 'But they are getting younger. Even kids at primary schools have knowledge of it and are looking at porn on their phones. It is happening in most, if not all, secondary schools and colleges. I expect every classroom will have someone using technology to nudify photographs or create deepfake images. It has a devastating effect on the girls - who are almost always the victims. It affects their mental health. We have heard stories of suicides.'

Posts on LinkedIn have even appeared promoting the 'best' nudifying AI tools available.

Children's Commissioner Dame Rachel de Souza called on the Government to 'go much further and faster' to protect children, telling The Mail on Sunday that the apps 'are seriously harmful and that their existence is a scandal'. She added: 'Nudifying apps should simply not be allowed to exist. It should not be possible for an app to generate a sexual image of a child, whether or not that was its designed intent.'
Among previous criminal cases, a Midlands boy was given a nine-month referral for making 1,300 indecent images of a child, starting when he was 13. And a 15-year-old in the South East was handed a nine-month referral after making scores of indecent images, also beginning when he was 13. The Crime and Policing Bill 2025 is expected to introduce a new offence of creating sexually explicit so-called 'deepfake' images or films.

Tech giant Meta is suing Hong Kong-based firm Joy Timeline amid claims it was behind nudifying apps including the free-to-download CrushAI. The lawsuit followed reports the firm had bought thousands of ads on Instagram and Facebook, using multiple fake profiles to evade Meta's moderators.

Derek Ray-Hill of the Internet Watch Foundation said: 'This is nude and sexual imagery of real children - often incredibly lifelike - which we see increasingly falling into the hands of online criminals with the very worst intentions.' Latest statistics show there were about 1,400 proven sexual offences involving child defendants in England and Wales in the year to March 2024 - nearly a 50 per cent increase on the previous 12 months.

AI scams awareness campaign launched in North Wales
Leader Live · a day ago

The campaign, led by the charity Get Safe Online in collaboration with North Wales Police and Crime Commissioner (PCC) Andy Dunbobbin, aims to help residents use AI safely and confidently this summer. Get Safe Online is a service commissioned by the PCC's office and the police force to provide digital safety information to the public.

Mr Dunbobbin, PCC for North Wales, said: "As Police and Crime Commissioner, fighting cybercrime is one of my key priorities and AI is one of the biggest digital and technological innovations of recent years. It has the power to transform our lives, often for the better. But with every innovation, there is always a criminal who will try and use it for their own ends, whether that be through fraud, theft, or deception. As well as using these new technologies, the important thing is for people to educate themselves about the dangers that might be lurking in the shadows. As the old saying goes, forewarned is forearmed. That's why I encourage people to follow this new advice from Get Safe Online and stay safe while using the internet and information technology."

AI technology now underpins many everyday tools, from virtual assistants to online shopping and entertainment recommendations. While these systems offer convenience, they also present fresh opportunities for cybercriminals. The campaign warns that AI can be used to create convincing scams and other forms of digital deception. To help the public stay safe online, Get Safe Online is sharing practical advice for identifying and avoiding AI-enabled scams. The organisation recommends:

- Checking the context: Be wary of unsolicited emails, messages, or phone calls, even if they appear professional. If the message seems urgent or too good to be true, it could be a scam.
- Inspecting the details: AI-generated content may be grammatically correct but could include subtle errors, such as odd email addresses, incorrect logos, or unusual phrasing. In images and videos, look for signs that something is not quite right.
- Verifying identity independently: Do not trust a message alone. Use a known, trusted method to contact the person or organisation and confirm their identity.

Get Safe Online also offers more general guidance for using AI tools responsibly. The charity recommends using AI as an aid rather than a replacement for critical thinking. Users should review and refine content generated by AI, and confirm information using reliable sources. Personal and financial information should not be entered into AI tools, as there is a risk that it could be exposed to others through generative AI or search platforms. Staying informed about AI developments and new scam techniques is also encouraged.

Special Constable Dwain Barnes from North Wales Police's Cybercrime Team said: "Although Generative AI has the potential to improve many aspects of society, it can also be used by criminals to author convincing phishing emails, create disinformation for social media posts or generate deepfake images and videos that look realistic, making them very difficult to spot. AI can also clone a person's voice from a few seconds of audio. Scammers can therefore use AI to impersonate trusted individuals and trick people into transferring money or revealing sensitive information, for example. It is therefore more important than ever to double check information to ensure that it is from a trusted source and if you receive unexpected requests or messages which might seem urgent or emotional, take your time and verify that they are genuine by contacting the sender directly using a verified means of contact, not by replying to the message or calling the number back."

Mr Barnes also offered additional security tips for the public.
He said: "To help you stay safe, use strong, long passwords using three random words, turn on two-step verification for all your accounts and don't share those codes with anyone else. Be mindful about what you are posting online - scammers can download your content and use it to create deepfakes, so it's advisable to have strong privacy settings on your social media accounts. Also consider agreeing on a secret word or phrase with your family or team members; you can then use this to confirm that it really is them if something doesn't feel right. Let's keep spreading the word on how scammers are using AI - it's important for more people to understand how AI deepfakes work, which will make it harder for scammers to succeed."

Further information and digital safety advice can be found at
