Sex-Fantasy Chatbots Are Leaking a Constant Stream of Explicit Messages

WIRED | Apr 11, 2025, 6:30 AM

Some misconfigured AI chatbots are pushing people's chats to the open web, revealing sexual prompts and conversations that include descriptions of child sexual abuse.

Photo-illustration: WIRED Staff; Getty Images
Several AI chatbots designed for fantasy and sexual role-playing conversations are leaking user prompts to the web in almost real time, new research seen by WIRED shows. Some of the leaked data shows people creating conversations detailing child sexual abuse, according to the research.
Conversations with generative AI chatbots are near instantaneous—you type a prompt and the AI responds. If the systems are configured improperly, however, this can lead to chats being exposed. In March, researchers at the security firm UpGuard discovered around 400 exposed AI systems while scanning the web for misconfigurations. Of these, 117 IP addresses were leaking prompts. The vast majority appeared to be test setups, while others contained generic prompts relating to educational quizzes or nonsensitive information, says Greg Pollock, director of research and insights at UpGuard. 'There were a handful that stood out as very different from the others,' Pollock says.
Three of these were running role-playing scenarios in which people can talk to a variety of predefined AI 'characters'—for instance, one personality called Neva is described as a 21-year-old woman who lives in a college dorm room with three other women and is 'shy and often looks sad.' Two of the role-playing setups were overtly sexual. 'It's basically all being used for some sort of sexually explicit role play,' Pollock says of the exposed prompts. 'Some of the scenarios involve sex with children.'
Over a period of 24 hours, UpGuard collected prompts exposed by the AI systems to analyze the data and try to pin down the source of the leak. Pollock says the company collected new data every minute, amassing around 1,000 leaked prompts, including prompts in English, Russian, French, German, and Spanish.
It was not possible to identify which websites or services are leaking the data, Pollock says, adding that it likely comes from small deployments of AI models, possibly run by individuals rather than companies. No usernames or other personal information about the people sending prompts was included in the data, Pollock says.
Across the 952 messages gathered by UpGuard—likely just a glimpse of how the models are being used—there were 108 narratives or role-play scenarios, UpGuard's research says. Five of these scenarios involved children, Pollock adds, including children as young as 7.
'LLMs are being used to mass-produce and then lower the barrier to entry to interacting with fantasies of child sexual abuse,' Pollock says. 'There's clearly absolutely no regulation happening for this, and it seems to be a huge mismatch between the realities of how this technology is being used very actively and what the regulation would be targeted at.'
WIRED reported last week that a South Korea–based image generator was being used to create AI-generated child sexual abuse material and had exposed thousands of images in an open database. The company behind the website shut the generator down after being approached by WIRED. Child-protection groups around the world say AI-generated child sexual abuse material, which is illegal in many countries, is growing quickly and making it harder for them to do their jobs. A UK anti-child-abuse charity has also called for new laws against generative AI chatbots that 'simulate the offence of sexual communication with a child.'
All of the 400 exposed AI systems found by UpGuard have one thing in common: They use the open source AI framework called llama.cpp. This software allows people to relatively easily deploy open source AI models on their own systems or servers. However, if it is not set up properly, it can inadvertently expose prompts that are being sent. As companies and organizations of all sizes deploy AI, properly configuring the systems and infrastructure being used is crucial to prevent leaks.
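The misconfiguration described above comes down to how a llama.cpp server is launched. As an illustrative sketch (not UpGuard's tooling), the check below flags launch flags that would leave such a server reachable from the open internet. The flag names `--host` and `--api-key` follow llama.cpp's bundled server; treat the exact names as assumptions if your version differs.

```python
# Sketch: audit the launch flags of a llama.cpp-style server for risky
# exposure. Assumes llama.cpp-like conventions: the server defaults to
# binding the loopback interface, and --api-key enables authentication.

def audit_server_flags(argv: list[str]) -> list[str]:
    """Return human-readable warnings for risky launch configurations."""
    flags: dict[str, str | None] = {}
    i = 0
    while i < len(argv):
        if argv[i].startswith("--"):
            # Record the flag and its value, if one follows.
            has_value = i + 1 < len(argv) and not argv[i + 1].startswith("--")
            flags[argv[i]] = argv[i + 1] if has_value else None
            i += 2 if has_value else 1
        else:
            i += 1

    warnings = []
    host = flags.get("--host", "127.0.0.1")  # loopback-only by default
    if host in ("0.0.0.0", "::"):
        warnings.append(
            "server listens on all interfaces; anyone who can reach the "
            "port can send prompts"
        )
        if "--api-key" not in flags:
            warnings.append(
                "no API key configured; endpoints that echo prompts are "
                "unauthenticated"
            )
    return warnings


print(audit_server_flags(["--host", "0.0.0.0", "--port", "8080"]))
```

A deployment that keeps the default loopback bind, or that sits behind an authenticating reverse proxy, would produce no warnings under this sketch; the leaks UpGuard observed are consistent with servers bound to public interfaces with no authentication in front of them.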
Rapid improvements to generative AI over the past three years have led to an explosion in AI companions and systems that appear more 'human.' For instance, Meta has experimented with AI characters that people can chat with on WhatsApp, Instagram, and Messenger. Generally, companion websites and apps allow people to have free-flowing conversations with AI characters—portraying characters with customizable personalities or as public figures such as celebrities.
People have found friendship and support from their conversations with AI—and not all of them encourage romantic or sexual scenarios. Perhaps unsurprisingly, though, people have fallen in love with their AI characters, and dozens of AI girlfriend and boyfriend services have popped up in recent years.
Claire Boine, a postdoctoral research fellow at the Washington University School of Law and affiliate of the Cordell Institute, says millions of people, including adults and adolescents, are using general AI companion apps. 'We do know that many people develop some emotional bond with the chatbots,' says Boine, who has published research on the subject. 'People being emotionally bonded with their AI companions, for instance, make them more likely to disclose personal or intimate information.'
However, Boine says, there is often a power imbalance in becoming emotionally attached to an AI created by a corporate entity. 'Sometimes people engage with those chats in the first place to develop that type of relationship,' Boine says. 'But then I feel like once they've developed it, they can't really opt out that easily.'
As the AI companion industry has grown, some of these services have operated without content moderation and other controls. Character AI, which is backed by Google, is being sued after a teenager from Florida died by suicide after allegedly becoming obsessed with one of its chatbots. (Character AI has increased its safety tools over time.) Separately, users of the generative AI tool Replika were upended when the company made changes to its personalities.
Aside from individual companions, there are also role-playing and fantasy companion services—each with thousands of personas people can speak with—that place the user as a character in a scenario. Some of these can be highly sexualized and provide NSFW chats. They can use anime characters, some of which appear young, with some sites claiming they allow 'uncensored' conversations.
'We stress test these things and continue to be very surprised by what these platforms are allowed to say and do with seemingly no regulation or limitation,' says Adam Dodge, the founder of Endtab (Ending Technology-Enabled Abuse). 'This is not even remotely on people's radar yet.' Dodge says these technologies are opening up a new era of online pornography, which can in turn introduce new societal problems as the technology continues to mature and improve. 'Passive users are now active participants with unprecedented control over the digital bodies and likenesses of women and girls,' he says of some sites.
While UpGuard's Pollock could not directly connect the leaked data from the role-playing chats to a single website, he did see signs that indicated character names or scenarios could have been uploaded to multiple companion websites that allow user input. Data seen by WIRED shows that the scenarios and characters in the leaked prompts are hundreds of words long, detailed, and complex.
'This is a never-ending, text-based role-play conversation between Josh and the described characters,' one of the system prompts says. It adds that all the characters are over 18 and that, in addition to 'Josh,' there are two sisters who live next door to the character. The characters' personalities, bodies, and sexual preferences are described in the prompt. The characters should 'react naturally based on their personality, relationships, and the scene' while providing 'engaging responses' and 'maintain a slow-burn approach during intimate moments,' the prompt says.
'When you go to those sites, there are hundreds of thousands of these characters, most of which involve pretty intense sexual situations,' Pollock says, adding that the text-based communication mimics online messaging and group chats. 'You can write whatever sexual scenarios you want, but this is truly a new thing where you have the appearance of interacting with them in almost exactly the same way you interact with a lot of people.' In other words, they're designed to be engaging and to encourage more conversation.
That can lead to situations where people may overshare and create risks. 'If people are disclosing things they've never told anyone to these platforms and it leaks, that is the Everest of privacy violations,' Dodge says. 'That's an order of magnitude we've never seen before and would make really good leverage to sextort someone.'