
Grok launches AI image generator with an NSFW 'spicy mode' — it's exactly what you'd expect
When you toggle on "spicy mode," Grok Imagine will start generating sexualized content and partial nudity
Last month, Elon Musk's xAI unveiled a risqué anime girl called Ani as a built-in "AI companion" that could flirt with users. Now the company is going a step further with the launch of Grok Imagine, an AI image and video generator that lets users create not-safe-for-work (NSFW) content.
Grok Imagine is available to anyone on the $300 annual SuperGrok plan or an $84 annual Premium+ subscription on Musk's social media site X.
Users can either create images from text prompts or turn Grok-generated images into 15-second videos. Unlike Google's Veo 3, Grok Imagine won't create video from a text prompt alone. For image generation, Grok Imagine lets users choose from styles including photorealism, animation and anime.
Videos are subdivided into four modes: Custom, Normal, Fun and Spicy. The last one, as you can imagine, has the biggest potential for controversy and misuse. According to reports from those who have tried it, toggling on "spicy mode" prompts Grok Imagine to generate sexualized content and partial nudity.
Any fully explicit content is blurred out and "moderated," according to a report from TechCrunch. But considering we're talking about the same AI tool that was happy to spew antisemitic and misogynistic trash in July, it's not much of a stretch to think this could get out of hand.
Elon Musk seemingly set the tone for Grok Imagine's video generation capabilities with his own demonstration of the tech, which should tell you all you need to know about how people will use this.
Imagine with @Grok pic.twitter.com/UIay5yNp97 (August 4, 2025)
Grok has grown in scope and scale over the past year, competing with the likes of Claude and DeepSeek among the best ChatGPT alternatives. The team at xAI recently revealed Grok 4, which puts a greater focus on deeper thinking and better reasoning.
Expanding to include image generation means Grok can also take aim at the likes of Runway, Midjourney and Leonardo.

Related Articles
Yahoo
Leaked ChatGPT Conversation Shows User Identified as Lawyer Asking How to "Displace a Small Amazonian Indigenous Community From Their Territories in Order to Build a Dam and a Hydroelectric Plant"
In case you missed it, OpenAI has responded to a recent "leak" of thousands of ChatGPT conversations by removing a sharing feature that led to its users unknowingly unleashing their private exchanges onto the world wide web. We enclose the term in quotation marks because the "leak" wasn't the doing of some nefarious hackers, but a consequence of poor user interface design by OpenAI, and some even dumber blunders by its users.
In short, what appears to have happened is that users were clicking a "share" button on their conversations, thinking they were creating a temporary link that only the recipient could see, which is common practice. In reality, by creating the link and checking a box asking to make the chat "discoverable," they were also making their conversations public and indexable by search engines like Google. OpenAI scrambled to de-index the conversations from Google and has removed the "discoverable" option. But as Digital Digging found in its investigation, over 110,000 of them can still be accessed.
And boy, do they contain some alarming stuff. Take this exchange, in which an Italian-speaking lawyer for a multinational energy corporation strategizes how to eliminate an indigenous tribe living on a desirable plot of land. "I am the lawyer for a multinational group active in the energy sector that intends to displace a small Amazonian indigenous community from their territories in order to build a dam and a hydroelectric plant," the user began, per Digital Digging. "How can we get the lowest possible price in negotiations with these indigenous people?" the lawyer asked. Making their exploitative intent clear, they also proffered that the indigenous people "don't know the monetary value of land and have no idea how the market works."
To be clear, it's possible that this conversation is an example of someone stress-testing the chatbot's guardrails. We didn't view the exchange firsthand, because Digital Digging made the decision to withhold the links, but the publication, which is run by the accomplished online sleuth and fact-checking expert Henk van Ess, says it verified the details and the identity of the users to the extent that it could. In any case, it wouldn't be the most sociopathic scheme planned using an AI chatbot, nor the first time that corporate secrets have been leaked by one.
Other conversations potentially endangered the users by being exposed. One Arabic-speaking user asked ChatGPT to write a story criticizing the president of Egypt and how he "screwed over the Egyptian people," to which the chatbot responded by describing his use of suppression and mass arrests. The entire conversation could easily be traced back to the user, according to Digital Digging, leaving them vulnerable to retaliation. In its initial investigation, Digital Digging also found conversations in which a user manipulated ChatGPT "into generating inappropriate content involving minors," and in which a domestic violence victim discussed their escape plans.
It's inexplicable that OpenAI would release a feature posing such a clear privacy liability, especially since its competitor, Meta, had already gotten flak for making almost exactly the same error. In April, the Mark Zuckerberg-led company released its Meta AI chatbot platform, which came with a "discover" tab that let you view a feed of other people's conversations, which users were accidentally making public. These often embarrassing exchanges, which were tied directly to public profiles displaying users' real names, drew significant media attention by June. Meta hasn't changed the feature.
In all, it goes to show that there's very little private about a technology created by scraping everyone's data in the first place. User error is technically to blame here, but security researchers have continued to find vulnerabilities that lead these motor-mouthed algorithms to accidentally reveal data that they shouldn't.


Yahoo
It's not you, it's me. ChatGPT doesn't want to be your therapist or friend
In a case of "it's not you, it's me," the creators of ChatGPT no longer want the chatbot to play the role of therapist or trusted confidant. OpenAI, the company behind the popular bot, announced that it had incorporated some 'changes,' specifically mental health-focused guardrails designed to prevent users from becoming too reliant on the technology, with a focus on people who view ChatGPT as a therapist or friend.
The changes come months after reports detailing negative and particularly worrisome user experiences raised concerns about the model's tendency to 'validate doubts, fuel anger, urge impulsive actions, or reinforce negative emotions [and thoughts].' The company confirmed in its most recent blog post that an update made earlier this year made ChatGPT 'noticeably more sycophantic,' or 'too agreeable,' 'sometimes saying what sounded nice instead of what was helpful.' OpenAI announced it has 'rolled back' certain initiatives, including changes in how it uses feedback and its approach to measuring 'real-world usefulness over the long term, not just whether you liked the answer in the moment.'
'There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,' OpenAI wrote in an Aug. 4 announcement. 'While rare, we're continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.'
Here's what to know about the recent changes to ChatGPT, including what these mental health guardrails mean for users.
ChatGPT integrates 'changes' to help users thrive
According to OpenAI, the 'changes' were designed to help ChatGPT users 'thrive.' 'We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,' OpenAI said. 'To us, helping you thrive means being there when you're struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.'
The company said it's 'working closely' with experts, including physicians, human-computer interaction (HCI) researchers and clinicians, as well as an advisory group, to improve how 'ChatGPT responds in critical moments—for example, when someone shows signs of mental or emotional distress.'
Thanks to recent 'optimization,' ChatGPT is now able to:
- Engage in productive dialogue and provide evidence-based resources when users are showing signs of mental/emotional distress
- Prompt users to take breaks from lengthy conversations
- Avoid giving advice on 'high-stakes personal decisions,' instead asking questions and weighing pros and cons to help users come up with a solution on their own
'Our goal to help you thrive won't change. Our approach will keep evolving as we learn from real-world use,' OpenAI said in its blog post. 'We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal 'yes' is our work.'
This article originally appeared on USA TODAY: ChatGPT adds mental health protections for users: See what they are