Elon Musk's xAI apologizes for Grok chatbot's antisemitic responses

USA Today · 8 hours ago
Elon Musk's Grok AI chatbot feature issued an apology after it made several antisemitic posts on the social media site X this week.
In a statement posted to X on July 12, xAI, the artificial intelligence company that makes the chatbot program, apologized for "horrific behavior" on the platform. Users reported receiving responses that praised Hitler, used antisemitic phrases and attacked users with traditionally Jewish surnames.
"We deeply apologize for the horrific behavior that many experienced," the company's statement said. "Our intent for @grok is to provide helpful and truthful responses to users. After careful investigation, we discovered the root cause was an update to a code path upstream of the @grok bot."
The company, founded by Musk in 2023 as a challenger to Microsoft-backed OpenAI and Alphabet's Google, said the update resulted in a deviation in the chatbot's behavior. The faulty update was active for 16 hours before it was removed in response to the reported extremist language.
Users on X shared multiple posts July 8 in which Grok repeated antisemitic stereotypes about Jewish people, among various other antisemitic comments. It's not the first time xAI's chatbot has raised alarm for its responses.
In May, the chatbot mentioned "white genocide" in South Africa in unrelated conversations. At the time, xAI said the incident was the result of an "unauthorized modification" to its online code.
A day after the alarming posts last week, Musk unveiled a new version of the chatbot, Grok 4, on July 9.
The Tesla billionaire and former adviser to President Donald Trump said in June he would retrain the AI platform after expressing frustration with the way Grok answered questions. Musk said the tweaks his xAI company had made to Grok left the chatbot too susceptible to being manipulated by users' questions.
"Grok was too compliant to user prompts," Musk wrote in a post on X after announcing the new version. "Too eager to please and be manipulated, essentially. That is being addressed."
Grok 3, which was released in February, is available for free, while the new versions, Grok 4 and Grok 4 Heavy, go for $30 and $300 a month, respectively.
Contributing: Jessica Guynn, USA TODAY.
Kathryn Palmer is a national trending news reporter for USA TODAY. You can reach her at kapalmer@usatoday.com and on X @KathrynPlmr.

Related Articles

It's Time For Agentic L&D

Forbes · an hour ago

If there's room for Agentic AI in the headlines, there is room for something else these days: Agentic Learning and Development, or Agentic L&D for short. Over the years, organizations have allowed the Learning and Development function to become too polite. L&D is overly eager to serve at the edge of the business instead of operating at the center. This situation is no longer acceptable in an era dominated by AI. The L&D function doesn't have to get rude, but it should get ready to start swinging.

Agentic L&D is a necessary new term for an essential new way of thinking, one that positions Learning and Development not as content delivery, but as a capability sherpa. It sits inside the work, not around it. It influences business outcomes, not just training metrics. It behaves as though it's part of the business strategy, not simply trying to keep up with it.

If you've seen the recent job description for Microsoft's Director of AI-Era Skilling Transformation, you've already seen what is coming. The job doesn't mention instructional design. Not at all. It talks about embedding learning into the systems, sprints, and day-to-day moments that define real work. "This role exists to deliver transformative skilling experiences that intrinsically motivate individuals and teams to skill through exploration, demonstration, and real-world experiences," states the posting. In other words, it's calling for Agentic L&D, whether the company uses the term or not.

EY's GenAI Academy is already doing it: contextual learning embedded into role-based journeys. Not bolted on nor tucked away in an LMS. It's lived, applied, and then measured.

If organizations are serious about preparing teams and people for what's next—and for what's already arrived—then the days of learning as an accessory are over. It's time to stop reporting on completions, taking orders, or just offering content. It's time for L&D to move from passive support to active influence. It's time for Agentic L&D.

From Around the Work to Inside It

L&D has trained an entire profession to operate like instructional caterers. Stakeholders place the order, L&D delivers a menu of options, and everyone gets their post-course certificate. Meanwhile, the real work—the problems, the decisions, the friction—keeps happening elsewhere. Agentic L&D doesn't hover at the margins. It embeds and operates at the source, at the point of the work.

As author Harold Jarche wrote back in 2013, "Work is learning and learning is the work." Agentic L&D takes that idea seriously and operationalizes it at scale. EY's GenAI Talent Academy, as mentioned earlier, pushes in this direction. Its role-based pathways don't require sign-up. They shadow actual workflows, adapting to projects in motion. L&D stops being an interruption (or outpost) and becomes an enabler inside the work.

From Content to Context

People do not necessarily need more learning content. It's available at the fingertips of your next AI prompt. The digital shelves are now full of content, let alone what already exists in the various learning and content portals. What's missing is context: learning that lands where the friction actually lives.

Agentic L&D begins with a different brief. Not "build a course," but "find the pain." Where are employees struggling to act? Where are decisions being delayed or reversed? Where are teams improvising because the current system no longer fits? Where are the skill or role gaps?

Microsoft's recently released AI-Era Skilling job brief is clear. Don't upload content; instead, embed and then align to capability. Change how work is accomplished, not just how it's taught. "Champion the shift from episodic learning to continuous, AI-augmented skilling embedded directly into the flow of work through business aligned co creation," the job brief also states.

This change means there will be a need to rewrite default behaviors and expectations for L&D practitioners. More in-line decision trees. Fewer generic case studies. More job-embedded prompts and "what do you think" open-ended questions in the flow of work. And yes, that potentially also means fewer formal courses and more short-form AI nudges that help to change and calibrate behavior. Both AI and human L&D performance coaches are going to crush it.

Agentic L&D does not try to keep up with the business; instead, it walks beside it, listening, adapting, and building learning directly into the flow of work. While content may be present—and it should be—it also starts with context. What is the business context of the performance gap?

From Input to Outcome

If the L&D function is still reporting on attendance and satisfaction, it has missed the point. It's not about participation; it's about assisting in human potential transformation, which involves a better understanding not only of what goes into the learning (and the team member) but also of the outcome.

Agentic L&D measures capability velocity, which is the time it takes to move from friction to fluency or from confusion to competence. At a minimum, it is about shifting from an in-role skill or task to the next level.

According to Continu, organizations using data-driven L&D see retention increase by 46%, productivity rise by 37%, onboarding accelerate by 34%, and per-employee revenue jump by 29%. These are business outcomes. Ultimately, Agentic L&D ought to be tracking retention uplift, error reduction, skill uptake, competence gap analysis, and even career or role mobility optionality. LinkedIn's 2024 Workplace Learning Report suggests organizations that link learning to talent movement don't just keep people; they grow them. Isn't this the actual point of L&D?

What Comes Next

The thinking around Agentic L&D is not about a rebrand; it's a full-scale L&D reset. (And I don't have it all figured out yet. I'm still noodling.) Learning isn't solely a calendar of events or content on a platform. It isn't simply a set of course completions with colorful dashboards. It is an embedded 'guide on the side' agent woven into the culture, decisions, rituals, systems, and workflows of the organization. It is part AI and part human, but led by the humans.

Agentic L&D consults, curates, and activates. It doesn't just take orders. It is the fulcrum of performance change across the organization. If AI has earned the adjective agentic, so should the L&D function. But the function itself needs to be reset—and led—accordingly. Because if L&D does not become 'agents in the business' (or Agentic L&D), the work will move on without it. And this time, it won't come back.

Great, Grok is in cars now too

Engadget · an hour ago

Just a day after the xAI team issued a comprehensive apology and explanation for why its chatbot was spreading antisemitic rhetoric, Tesla updated its car software to include the supposedly fixed Grok. According to Tesla, all new vehicles delivered on or after July 12 will have Grok available in-car. There's no additional subscription cost, but Tesla is limiting Grok's availability to models in the US for now. Older vehicles can run Grok if they have an AMD processor, the latest 2025.26 software update, and either a stable Wi-Fi connection or Tesla's $9.99-a-month Premium Connectivity subscription.

It's worth noting that Grok will simply be an AI chatbot you can ask questions of; it won't be able to interface with the car itself. In other words, Grok can't help you set up directions to your destination, lower the music's volume or control the car's temperature. Instead, it can offer excruciatingly cringe-inducing responses under its "Unhinged" personality, as seen in an X post from Tesla.

While Tesla has incorporated the chatbot into its newly delivered cars, the company still faces safety concerns with its Full Self-Driving system, which relies mostly on cameras and AI. Tesla added that Grok may become available to more of its vehicles with over-the-air software updates in the future, but noted that "Grok availability is subject to change or end at any time." Like when Grok went "MechaHitler" only a few days ago and had to be disabled.

How Google Killed OpenAI's $3 Billion Deal Without an Acquisition

Gizmodo · 2 hours ago

Google just dealt OpenAI a major blow by scuttling a potential $3 billion deal, and in doing so, solidified a rising trend in Silicon Valley's AI arms race: the 'non-acquisition acquisition.'

Google announced on July 11 that it poached key talent from the rapidly rising AI startup Windsurf, which until then had a reported $3 billion acquisition deal with OpenAI that has now collapsed. Instead, Google is paying $2.4 billion to hire away top Windsurf employees, including the company's CEO, and take a non-exclusive license to its technology, according to Bloomberg. By poaching Windsurf's top brains but not acquiring the startup itself, Google achieved two critical goals at once: it nullified OpenAI's momentum and gained access to the startup's valuable AI technology.

Friday's announcement is only the latest instance of what is increasingly becoming the go-to tactic for big tech companies looking to grow their competitive edge. Tech analysts have described it as a 'non-acquisition acquisition,' or more simply, an 'acqui-hire.'

OpenAI, the company behind ChatGPT, ignited the current AI frenzy back in 2022 and has been the leader in generative AI ever since. But its market lead is being increasingly challenged by big tech competitors like Google and Meta, and it is now clearer than ever that elite AI engineers are the most valuable currency in this fight for dominance. Recently, OpenAI has found itself a primary target. After a series of high-profile talent raids by Meta, OpenAI executives said in an internal memo obtained by WIRED that it felt as though 'someone has broken into our home and stolen something.'

The biggest aggressor in this new era of 'the poaching wars' has been Meta. In April 2025, CEO Mark Zuckerberg admitted that the company had fallen behind competitors in the AI race. His comments sparked a multi-billion-dollar spending spree marked by strategic talent hires. Meta hired ScaleAI CEO Alexandr Wang, Apple's top AI mind Ruoming Pang, and Nat Friedman, former CEO of Microsoft-owned GitHub, as well as multiple top OpenAI employees tempted by multi-year deals worth millions. The company is gathering this talent under a new group dedicated to developing AI superintelligence called Meta Superintelligence Labs.

Similar acqui-hire deals were struck by Microsoft and Amazon last year. Microsoft hired top employees from AI startup Inflection, including co-founder Mustafa Suleyman, who now leads Microsoft's AI division. Amazon hired co-founders and other top talent from the AI agent startup Adept. This isn't Google's first rodeo with acqui-hiring, either. The tech giant inked a similar deal with another AI startup roughly a year ago, which gave Google a non-exclusive license to its LLM technology and saw its two co-founders join the company.

OpenAI Hits the Panic Button

Beyond just being a symbol of a new era in the AI arms race, this surge in acqui-hires reveals a new playbook for Big Tech to grow its market dominance while sidestepping antitrust scrutiny. This tactic follows a period of intense regulatory pressure under former Federal Trade Commission (FTC) chairwoman Lina Khan, whose administration cracked down on alleged anti-competitive practices in the AI industry. Both Meta and Google are already under intense scrutiny from the FTC. Meta is awaiting a verdict in an antitrust trial over the FTC's claim that it holds a monopoly over social media. Google, on the other hand, has been dealt numerous antitrust defeats in the past year, accused of holding monopolies in both internet search and online advertising. The company is awaiting the final results of a trial that could potentially see it forced to divest from its Chrome browser. Early last year, under Khan's leadership, the Commission also launched an investigation into Microsoft, Amazon, and Google over their investments in the AI startups OpenAI and Anthropic.

Under this cloud of regulatory pressure, it seems acqui-hiring is proving to be an easy way for Big Tech to get what it wants. The big names gain all the necessary access to the technology and top research talent of AI startups without having to go through the vetting hurdles of a formal acquisition.

Going forward, it is now up to the current FTC, under Trump-appointed chairman Andrew Ferguson, to define its stance on this practice. While not seen as the same kind of hardliner against Big Tech as Khan, Ferguson has largely continued to pursue the previous administration's investigations, even as President Trump has entertained Silicon Valley leaders at Mar-a-Lago. How Ferguson's FTC and the Trump administration at large choose to respond, or not, to this new wave of regulatory loopholes will determine the future of American big tech and the AI industry as a whole.
