
Latest news with #AlexiosMantzarlis

Tech services are failing to take nudify AI tools offline.

The Verge

14-07-2025

  • Business
  • The Verge

Tech services are failing to take nudify AI tools offline.

Posted Jul 14, 2025 at 1:05 PM UTC

A report about how much money these websites are making found that 62 of the 85 websites it examined had hosting or content delivery services provided by Amazon and Cloudflare. Google's sign-on system was also used on 54 of the websites, alongside several other services and payment systems provided by mainstream tech companies. 'They should have ceased providing any and all services to AI nudifiers when it was clear that their only use case was sexual harassment,' said Indicator co-founder Alexios Mantzarlis. Read more in WIRED: AI 'Nudify' Websites Are Raking in Millions of Dollars.

AI 'Nudify' Websites Are Raking in Millions of Dollars

WIRED

14-07-2025

  • Business
  • WIRED

AI 'Nudify' Websites Are Raking in Millions of Dollars

Jul 14, 2025 7:00 AM

Millions of people are accessing harmful AI 'nudify' websites. New analysis says the sites are making millions and rely on tech from US companies.

For years, so-called 'nudify' apps and websites have mushroomed online, allowing people to create nonconsensual and abusive images of women and girls, including child sexual abuse material. Despite some lawmakers and tech companies taking steps to limit the harmful services, millions of people are still accessing the websites every month, and the sites' creators may be making millions of dollars each year, new research suggests.

An analysis of 85 nudify and 'undress' websites, which allow people to upload photos and use AI to generate 'nude' pictures of the subjects with just a few clicks, has found that most of the sites rely on tech services from Google, Amazon, and Cloudflare to operate and stay online. The findings, revealed by Indicator, a publication investigating digital deception, say that the websites had a combined average of 18.5 million visitors for each of the past six months and collectively may be making up to $36 million per year.

Alexios Mantzarlis, a cofounder of Indicator and an online safety researcher, says the murky nudifier ecosystem has become a 'lucrative business' that 'Silicon Valley's laissez-faire approach to generative AI' has allowed to persist. 'They should have ceased providing any and all services to AI nudifiers when it was clear that their only use case was sexual harassment,' Mantzarlis says of tech companies. It is increasingly becoming illegal to create or share explicit deepfakes.

According to the research, Amazon and Cloudflare provide hosting or content delivery services for 62 of the 85 websites, while Google's sign-on system has been used on 54 of them. The nudify websites also use a host of other services, such as payment systems, provided by mainstream companies.

Amazon Web Services spokesperson Ryan Walsh says AWS has clear terms of service that require customers to follow 'applicable' laws. 'When we receive reports of potential violations of our terms, we act quickly to review and take steps to disable prohibited content,' Walsh says, adding that people can report issues to its safety teams.

'Some of these sites violate our terms, and our teams are taking action to address these violations, as well as working on longer-term solutions,' Google spokesperson Karl Ryan says, pointing out that Google's sign-in system requires developers to agree to policies that prohibit illegal content and content that harasses others. Cloudflare had not responded to WIRED's request for comment at the time of writing. WIRED is not naming the nudifier websites in this story, so as not to provide them with further exposure.

Nudify and undress websites and bots have flourished since 2019, after originally spawning from the tools and processes used to create the first explicit 'deepfakes.' Networks of interconnected companies, as Bellingcat has reported, have appeared online offering the technology and making money from the systems. Broadly, the services use AI to transform photos into nonconsensual explicit imagery; they often make money by selling 'credits' or subscriptions that can be used to generate photos. They have been supercharged by the wave of generative AI image generators that have appeared in the past few years.

Their output is hugely damaging. Social media photos have been stolen and used to create abusive images; meanwhile, in a new form of cyberbullying and abuse, teenage boys around the world have created images of their classmates. Such intimate image abuse is harrowing for victims, and images can be difficult to scrub from the web.

Using various open source tools and data, including the website analysis tool BuiltWith, Indicator staff and investigative researcher Santiago Lakatos looked into the infrastructure and systems powering the 85 nudifier websites. Content delivery networks, hosting services, domain name companies, and webmaster services are all provided by a mixture of some of the biggest tech companies, plus some smaller businesses.

Based on calculations combining subscription costs, estimated customer conversion rates, and web traffic the sites sent to payment providers, the researchers estimate that 18 of the websites made between $2.6 million and $18.4 million in the past six months, which could equate to around $36 million a year (a toy version of this arithmetic is sketched after this article). They note this is likely a conservative estimate, as it doesn't incorporate all the websites, or transactions that take place away from the websites, such as those on Telegram. Recently, a whistleblower account and leaked data reported on by German media outlet Der Spiegel indicated that one prominent website may have a multimillion-dollar budget. Another website has claimed to have made millions.

Of the 10 most-visited sites, the research says, the most visitors came from the United States, with India, Brazil, Mexico, and Germany making up the rest of the top five countries where people accessed the sites. While search engines direct people to nudify websites, the sites have increasingly received visitors from other online sources. Nudifiers have become so popular that Russian hackers have created fake malware-laced versions. Over the past year, 404 Media has reported on one site making sponsored videos with adult entertainers, and the websites have also increasingly used paid affiliate and referral programs.

'Our analysis of the nudifiers' behavior strongly indicates their desire to build and entrench themselves in a niche of the adult industry,' Lakatos says. 'They will likely continue to try to intermingle their operations into the adult content space, a trend that needs to be countered by mainstream tech companies and the adult industry as well.'

Many of the problems with tech companies allowing nudify platforms to use their systems are well known. For years, tech journalists have reported on how the deepfake economy has used mainstream payment services, social media advertisements, search engine exposure, and technology from big companies to operate. Yet little comprehensive action has been taken.

'Since 2019, nudification apps have moved from a handful of low-quality side projects to a cottage industry of professionalized illicit businesses with millions of users,' says Henry Ajder, an expert on AI and deepfakes who first uncovered growth in the nudification ecosystem in 2020. 'Only when businesses like these who facilitate nudification apps' 'perverse customer journey' take targeted action will we start to see meaningful progress in making these apps harder to access and profit from.'

There are signs the nudify websites are updating their tactics to avoid potential crackdowns or evade bans. Last year, WIRED reported on how nudify websites used single sign-on systems from Google, Apple, and Discord to allow people to quickly create accounts. Many of the developer accounts were disabled following that reporting. Indicator says, however, that Google's sign-in system is still in use on 54 of the 85 websites, and that the website creators have taken steps to evade detection by Google: they would, the report says, use an 'intermediary site' to 'pose as a different URL for the registration.'

While tech companies and regulators have taken a glacial approach to tackling abusive deepfakes since they first emerged more than a decade ago, there has been some recent movement. San Francisco's city attorney has sued 16 nonconsensual-image-generation services, Microsoft has identified developers behind celebrity deepfakes, and Meta has filed a lawsuit against a company allegedly behind a nudify app that, Meta says, repeatedly posted ads on its platform. Meanwhile, the controversial Take It Down Act, which US President Donald Trump signed into law in May, requires tech companies to remove nonconsensual image abuse quickly, and the UK government is making it illegal to create explicit deepfakes.

The moves may chip away at some nudifier and undress services, but more comprehensive crackdowns are needed to slow the burgeoning harmful industry. Mantzarlis says that if tech companies are more proactive and stricter in enforcing their policies, nudifiers' ability to flourish will diminish. 'Yes, this stuff will migrate to less regulated corners of the internet—but let it,' Mantzarlis says. 'If websites are harder to discover, access, and use, their audience and revenue will shrink. Unfortunately, this toxic gift of the generative AI era cannot be returned. But it can certainly be drastically reduced in scope.'
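Indicator has not published its exact formula. As a minimal sketch of the kind of arithmetic the article describes (subscription prices, conversion rates, and traffic routed to payment providers), the toy function below multiplies those inputs together; every number and name in it is a hypothetical placeholder, not a figure from the report.

```python
# Illustrative only: a toy revenue estimate in the spirit of the method
# described above. All inputs are hypothetical placeholders, not data
# from the Indicator report.

def estimate_annual_revenue(monthly_visitors: int,
                            checkout_share: float,
                            conversion_rate: float,
                            avg_subscription_usd: float) -> float:
    """Rough annual revenue: visitors who reach a payment page, times the
    share of those who actually pay, times the average subscription price."""
    paying_users_per_month = monthly_visitors * checkout_share * conversion_rate
    return paying_users_per_month * avg_subscription_usd * 12

# Hypothetical example: 500k monthly visitors, 5% reach checkout,
# 10% of those convert, at $20 per subscription.
print(f"${estimate_annual_revenue(500_000, 0.05, 0.10, 20.0):,.0f} per year")
```

Varying the conversion and price assumptions across plausible ranges is what produces a wide band like the report's $2.6 million to $18.4 million figure for six months.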

Meta sues AI 'nudify' app Crush AI for advertising on its platforms

TechCrunch

12-06-2025

  • TechCrunch

Meta sues AI 'nudify' app Crush AI for advertising on its platforms

Meta has sued the maker of a popular AI 'nudify' app, Crush AI, that reportedly ran thousands of ads across Meta's platforms. In addition to the lawsuit, Meta says it's taking new measures to crack down on other apps like Crush AI.

In a lawsuit filed in Hong Kong, Meta alleged that Joy Timeline HK, the entity behind Crush AI, attempted to circumvent the company's review process to distribute ads for AI nudify services. Meta said in a blog post that it repeatedly removed ads by the entity for violating its policies, but claims Joy Timeline HK continued to place additional ads anyway.

Crush AI, which uses generative AI to make fake, sexually explicit images of real people without their consent, reportedly ran more than 8,000 ads for its 'AI undresser' services on Meta's platforms in the first two weeks of 2025, according to the author of the Faked Up newsletter, Alexios Mantzarlis. In a January report, Mantzarlis claimed that Crush AI's websites received roughly 90% of their traffic from either Facebook or Instagram, and that he flagged several of these websites to Meta.

Crush AI reportedly evaded Meta's ad review processes by setting up dozens of advertiser accounts and frequently changing domain names. Many of Crush AI's advertiser accounts, according to Mantzarlis, were named 'Eraser Annyone's Clothes' followed by different numbers. At one point, Crush AI even had a Facebook page promoting its service.

Facebook and Instagram are hardly the only platforms dealing with such challenges. As social media companies like X and Meta race to add generative AI to their apps, they've also struggled to moderate how AI tools can make their platforms unsafe for users, particularly minors. Researchers have found that links to AI undressing apps soared in 2024 on platforms like X and Reddit, and on YouTube, millions of people were reportedly served ads for such apps. In response to this growing problem, Meta and TikTok have banned keyword searches for AI nudify apps, but getting these services off their platforms entirely has proven challenging.

In a blog post, Meta said it has developed new technology to specifically identify ads for AI nudify or undressing services 'even when the ads themselves don't include nudity.' The company said it is now using matching technology to help find and remove copycat ads more quickly, and has expanded the list of terms, phrases, and emoji that are flagged by its systems.

Meta said it is also applying the tactics it has traditionally used to disrupt networks of bad actors to these new networks of accounts running ads for AI nudify services. Since the start of 2025, Meta said, it has disrupted four separate networks promoting these services.

Outside of its apps, the company said it will begin sharing information about AI nudify apps through the Tech Coalition's Lantern program, a collective effort between Google, Meta, Snap, and other companies to prevent child sexual exploitation online. Meta says it has shared more than 3,800 unique URLs with this network since March.

On the legislative front, Meta said it would 'continue to support legislation that empowers parents to oversee and approve their teens' app downloads.' The company previously supported the US Take It Down Act, and said it's now working with lawmakers to implement it.
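Meta has not published how its detection actually works. Purely as an illustration of the 'terms, phrases and emoji' matching the blog post mentions, a naive version might look like the sketch below; the term list, emoji, and ad copy are all invented for the example, and a production system would be far more sophisticated.

```python
import re
import unicodedata

# Illustrative only: naive term/phrase/emoji matching over ad copy.
# The flagged terms below are invented examples, not Meta's real list.
FLAGGED_TERMS = {"undress", "remove clothes", "ai nudify"}
FLAGGED_EMOJI = {"\U0001F51E"}  # the 18+ symbol, as a stand-in entry

def normalize(text: str) -> str:
    """Lowercase, strip accents, and collapse whitespace so trivial
    obfuscation doesn't evade a substring match."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    return re.sub(r"\s+", " ", text.lower()).strip()

def flag_ad(copy_text: str) -> bool:
    """Return True if the ad copy contains any flagged term or emoji."""
    cleaned = normalize(copy_text)
    if any(term in cleaned for term in FLAGGED_TERMS):
        return True
    return any(e in copy_text for e in FLAGGED_EMOJI)

print(flag_ad("Erase anyone's clothes in one click!"))   # False: copycat phrasing slips past the list
print(flag_ad("Try our AI undress tool \U0001F51E"))      # True: matches a flagged term and emoji
```

The first example slipping through is exactly why keyword lists alone don't suffice, and why the blog post pairs them with matching technology for copycat ads and network-level takedowns.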

Meta is cracking down on AI 'nudify' apps

Engadget

12-06-2025

  • Business
  • Engadget

Meta is cracking down on AI 'nudify' apps

Meta is finally cracking down on "nudify" apps that use AI to generate nonconsensual nude and explicit images of celebrities, influencers and others. The company is suing one app maker that has frequently advertised such apps on Facebook and Instagram, and taking new steps to prevent ads for similar services.

The crackdown comes months after several researchers and journalists raised the alarm about such apps. A recent report from CBS News identified at least "hundreds" of ads on Meta's platforms promoting apps that allow users to "remove clothing" from images of celebrities and others. One app in particular, called Crush AI, has apparently been a prolific advertiser on Facebook and Instagram. Researcher Alexios Mantzarlis, director of Cornell Tech's Security, Trust and Safety Initiative, reported back in January that Crush AI had run more than 8,000 ads on Facebook and Instagram since last fall.

Now, Meta says it has filed a lawsuit against Joy Timeline HK Limited, the Hong Kong-based company behind Crush AI and other nudify apps. "This follows multiple attempts by Joy Timeline HK Limited to circumvent Meta's ad review process and continue placing these ads, after they were repeatedly removed for breaking our rules," the company wrote in a blog post. Joy Timeline HK Limited didn't immediately respond to a request for comment.

Meta also says it's taking new steps to prevent apps like these from advertising on its platform. "We've developed new technology specifically designed to identify these types of ads — even when the ads themselves don't include nudity — and use matching technology to help us find and remove copycat ads more quickly," Meta wrote. "We've worked with external experts and our own specialist teams to expand the list of safety-related terms, phrases and emojis that our systems are trained to detect within these ads." The social network says it also plans to work with other tech platforms, including app store owners, to share relevant details about entities that abuse its platform.

Nudify apps aren't the only entities that have exploited Meta's advertising platform to run ads featuring celebrity deepfakes. Meta has also struggled to contain shady advertisers that use AI-manipulated video of public figures to promote scams. The company's independent Oversight Board, which weighs in on content moderation issues affecting Facebook and Instagram, recently criticized Meta for under-enforcing its rules prohibiting such ads.
