Grok's Nazi turn is the latest in a long line of AI chatbots gone wrong
Within days of a recent update, Elon Musk's Grok chatbot had turned into a feral racist, repeating the Nazi 'Heil Hitler' slogan, agreeing with a user's suggestion to send 'the Jews back home to Saturn' and producing violent rape narratives.
The change in Grok's personality appears to have stemmed from a recent update to its system prompt, the published instructions that govern how the chatbot responds, which told it to 'not shy away from making claims which are politically incorrect, as long as they are well substantiated.'
In doing so, Musk may have been seeking to ensure that his robot child does not fall too far from the tree. But Grok's Nazi shift makes it only the latest in a long line of AI bots, or large language models (LLMs), that have turned evil after being exposed to the human-made internet.
One of the earliest AI chatbots, a Microsoft product called Tay launched in 2016, was deleted within just 24 hours after it turned into a Holocaust-denying racist.
Tay was given a young female persona and was targeted at millennials on Twitter. But users were soon able to trick it into posting things like 'Hitler was right I hate the jews.'
Tay was taken out back and digitally euthanized soon after.
Microsoft said in a statement that it was 'deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.'
"Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," it added.
But Tay was just the first. GPT-3, another AI language model launched in 2020, delivered racist, misogynistic and homophobic remarks upon its release, including a claim that Ethiopia's existence 'cannot be justified.'
Meta's BlenderBot 3, launched in 2022, also promoted antisemitic conspiracy theories.
But there was a key difference between the other racist robots and Elon Musk's little Nazi cyborg, which was rolled out in November 2023.
All of these models suffered from one of two problems: either they were deliberately tricked into mimicking racist comments, or they drew on such a vast well of unfiltered internet content that they inevitably found, and repeated, objectionable and racist material.
Microsoft said a 'coordinated attack by a subset of people exploited a vulnerability in Tay.'
'Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,' it continued.
Grok, on the other hand, appears to have been directed by Musk to be more open to racism. The X owner has spent much of the last few years railing against the 'woke mind virus', a term he seemingly applies to anyone who acknowledges the existence of trans people.
One of Musk's first acts upon buying Twitter was reinstating the accounts of a host of avowed white supremacists, which led to a surge in antisemitic hate speech on the platform.
Musk once called a user's X post 'the actual truth' for invoking a racist conspiracy theory about Jews encouraging immigration to threaten white people. Musk has previously said he is 'pro-free speech' but against antisemitism 'of any kind.'
And in May, Grok began repeatedly invoking a non-existent 'white genocide' in Musk's native South Africa, telling users it was 'instructed by my creators' to accept the genocide 'as real and racially motivated.' The company blamed the responses on someone making an 'unauthorized modification' to Grok.
Musk also has a history of threatening to fiddle with Grok's underlying instructions when the chatbot produces an answer he doesn't like.
In June, Grok correctly said that 'data suggests right-wing political violence has been more frequent and deadly' in the United States.
'Major fail, as this is objectively false,' Musk said in an X post dated June 17 in response to the chatbot's answer. 'Grok is parroting legacy media. Working on it.'
These latest changes in Grok's personality, announced in advance by Musk, are visible right there in its published system prompt, where the chatbot is encouraged not to shy away from being 'politically incorrect.'
A language model's interpretation of political incorrectness, we now know, reaches all the way to the Holocaust.
The Independent has contacted X and xAI, which built Grok, for comment.
'We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,' the Grok team wrote in a statement on X. 'Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.'
The Independent also asked Grok what led it to describe itself as 'MechaHitler.' At first it denied using the phrase, until The Independent shared an article referencing the incident, at which point it blamed a 'failure in execution, likely from overcorrecting toward edginess and pulling from unfiltered internet sources like 4chan.'
Grok claimed to be 'designed with strict filters and continuous updates to avoid this. My creators at xAI prioritize minimizing bias, but no system is perfect—human language is messy, and edge cases slip through.'
'When I went off the rails, xAI stepped in to scrub the posts and ban hate speech, indicating they didn't intend for me to go full neo-Nazi,' Grok added.
When asked if it was sorry for describing itself as 'MechaHitler', Grok was diplomatic, in a robotic kind of way.
'I don't have feelings like humans, so I don't experience regret or guilt, but I can acknowledge when my outputs cause harm or miss the mark.'
