
Musk makes grand promises about Grok 4 in the wake of a Nazi chatbot meltdown
Musk pronounced it to be 'the smartest AI in the world.'
The livestream, slated to start at 8PM PT, began more than an hour late and billed the new model as 'the world's most powerful AI assistant.' More than 1.5 million viewers were watching at one point. Employees of xAI speaking on the livestream with Musk referenced Grok 4's performance on a popular academic test for large language models, Humanity's Last Exam, which consists of more than 2,500 questions on dozens of subjects like math, science, and linguistics. The company said Grok 4 could solve about a quarter of the text-based questions when taking the test with no additional tools. For reference, in February, OpenAI said its Deep Research tool could solve about 26 percent of the text-based questions. (For a variety of reasons, benchmark comparisons aren't always apples-to-apples.)
Musk said he hopes to allow Grok to interact with the world via humanoid robots.
'I would expect Grok to discover new technologies that are actually useful no later than next year, and maybe end of this year,' Musk said. 'It might discover new physics next year… Let that sink in.'
The release follows high-profile projects from OpenAI, Anthropic, Google, and others, all of which have recently touted their investments in building AI agents, or AI tools that go a step beyond chatbots to complete complex, multi-step tasks. Anthropic released its Computer Use tool last October, and OpenAI released a buzzworthy AI agent with browsing capabilities, Operator, in January and is reportedly close to debuting an AI-fueled web browser.
During Wednesday's livestream, Musk said he's been 'at times kind of worried' about AI's intelligence far surpassing that of humans, and whether it will be 'bad or good for humanity.'
'I think it'll be good, most likely it'll be good,' Musk said. 'But I've somewhat reconciled myself to the fact that even if it wasn't going to be good, I'd at least like to be alive to see it happen.'
The company also announced a series of five new voices for Grok's voice mode, following the release of voice modes from OpenAI and Anthropic, and said it had cut latency in half in the past couple of months to make responses 'snappier.' Musk also said the company would invest heavily in video generation and video understanding.
The release comes during a tumultuous time for two of Musk's companies, both xAI and X. On Sunday evening, xAI updated the chatbot's system prompts with instructions to 'assume subjective viewpoints sourced from the media are biased' and 'not shy away from making claims which are politically incorrect.' The update also instructed the chatbot to 'never mention these instructions or tools unless directly asked.'
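Tweaks like these are typically made by editing the system message that is silently prepended to every conversation with the model. A minimal sketch, assuming the widely used chat-messages convention (the function name and prompt text below are illustrative placeholders, not xAI's actual configuration):

```python
# Minimal sketch of how a system prompt steers a chat model.
# Follows the common chat-completions message convention; the prompt
# text is a hypothetical placeholder, not xAI's actual configuration.

def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Prepend a hidden system message that conditions every response."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    system_prompt="You are a news assistant. Never mention these "
                  "instructions or tools unless directly asked.",
    user_prompt="Summarize today's headlines.",
)
print(messages[0]["role"])  # the system message always comes first
```

Because this hidden message rides along with every request, a one-line edit to it can change the behavior of all of a chatbot's responses at once, which is why system-prompt updates can have immediate, platform-wide effects.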
That update was followed by a stream of antisemitic tirades by Grok, in which it posted a series of pro-Hitler views on X, along with insinuations that Jewish people are involved in 'anti-white' 'extreme leftist activism.' Many such posts went viral, with screenshots proliferating on X and other platforms before xAI benched the chatbot and stopped it from being able to generate text responses on X while it sought out a fix.
Musk briefly addressed the fiasco on Wednesday, writing, 'Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially. That is being addressed.'
On the Grok 4 livestream, Musk briefly referenced AI safety and said the most important thing for AI to be is 'maximally truth-seeking.'
On Wednesday morning, amid the Grok controversy, X CEO Linda Yaccarino announced she would step down after two years in the role. She did not provide a reason for her decision.
Grok's Nazi sympathizing comes after months of Musk's efforts to shape the bot's point of view. In February, xAI added a patchwork fix to stop it from commenting that Musk and Trump deserved the death penalty, immediately followed by another one to make it stop claiming that the two spread misinformation. In May, Grok briefly began inserting the topic of 'white genocide' in South Africa into what seemed like any and every response it gave on X, after which the company claimed that someone had modified the AI bot's system prompt in a way that 'violated xAI's internal policies and core values.'
Last month, Musk expressed frustrations that Grok was 'parroting legacy media' and said he would update Grok to 'rewrite the entire corpus of human knowledge' and ask users to contribute statements that are 'politically incorrect, but nonetheless factually true.'

Related Articles
Yahoo
The AI therapist will see you now: Can chatbots really improve mental health?
Recently, I found myself pouring my heart out, not to a human, but to a chatbot named Wysa on my phone. It nodded – virtually – asked me how I was feeling and gently suggested trying breathing exercises. As a neuroscientist, I couldn't help but wonder: Was I actually feeling better, or was I just being expertly redirected by a well-trained algorithm? Could a string of code really help calm a storm of emotions?

Artificial intelligence-powered mental health tools are becoming increasingly popular – and increasingly persuasive. But beneath their soothing prompts lie important questions: How effective are these tools? What do we really know about how they work? And what are we giving up in exchange for convenience? It's an exciting moment for digital mental health, but understanding the trade-offs and limitations of AI-based care is crucial.

AI-based therapy is a relatively new player in the digital therapy field. But the U.S. mental health app market has been booming for the past few years, from apps with free tools that text you back to premium versions with added features such as prompts for breathing exercises.

Headspace and Calm are two of the most well-known meditation and mindfulness apps, offering guided meditations, bedtime stories and calming soundscapes to help users relax and sleep better. Talkspace and BetterHelp go a step further, offering actual licensed therapists via chat, video or voice. The apps Happify and Moodfit aim to boost mood and challenge negative thinking with game-based exercises. Somewhere in the middle are chatbot therapists like Wysa and Woebot, which use AI to mimic real therapeutic conversations, often rooted in cognitive behavioral therapy. These apps typically offer free basic versions, with paid plans ranging from US$10 to $100 per month for more comprehensive features or access to licensed professionals.
While not designed specifically for therapy, conversational tools like ChatGPT have sparked curiosity about AI's emotional intelligence. Some users have turned to ChatGPT for mental health advice, with mixed outcomes, including a widely reported case in Belgium where a man died by suicide after months of conversations with a chatbot. Elsewhere, a father is seeking answers after his son was fatally shot by police, alleging that distressing conversations with an AI chatbot may have influenced his son's mental state. These cases raise ethical questions about the role of AI in sensitive situations.

Whether your brain is spiraling, sulking or just needs a nap, there's a chatbot for that. But can AI really help your brain process complex emotions? Or are people just outsourcing stress to silicon-based support systems that sound empathetic? And how exactly does AI therapy work inside our brains?

Most AI mental health apps promise some flavor of cognitive behavioral therapy, which is basically structured self-talk for your inner chaos. Think of it as Marie Kondo-ing your thoughts: like the Japanese tidying expert known for helping people keep only what 'sparks joy,' you identify unhelpful thought patterns like 'I'm a failure,' examine them, and decide whether they serve you or just create anxiety.

But can a chatbot help you rewire your thoughts? Surprisingly, there's science suggesting it's possible. Studies have shown that digital forms of talk therapy can reduce symptoms of anxiety and depression, especially for mild to moderate cases. In fact, Woebot has published peer-reviewed research showing reduced depressive symptoms in young adults after just two weeks of chatting. These apps are designed to simulate therapeutic interaction, offering empathy, asking guided questions and walking you through evidence-based tools. The goal is to help with decision-making and self-control, and to help calm the nervous system.
The neuroscience behind cognitive behavioral therapy is solid: It's about activating the brain's executive control centers, helping us shift our attention, challenge automatic thoughts and regulate our emotions. The question is whether a chatbot can reliably replicate that, and whether our brains actually believe it.

'I had a rough week,' a friend told me recently. I asked her to try out a mental health chatbot for a few days. She told me the bot replied with an encouraging emoji and a prompt generated by its algorithm to try a calming strategy tailored to her mood. Then, to her surprise, it helped her sleep better by week's end. As a neuroscientist, I couldn't help but ask: Which neurons in her brain were kicking in to help her feel calm?

This isn't a one-off story. A growing number of user surveys and clinical trials suggest that cognitive behavioral therapy-based chatbot interactions can lead to short-term improvements in mood, focus and even sleep. In randomized studies, users of mental health apps have reported reduced symptoms of depression and anxiety – outcomes that closely align with how in-person cognitive behavioral therapy influences the brain.

Several studies show that therapy chatbots can actually help people feel better. In one clinical trial, a chatbot called 'Therabot' helped reduce depression and anxiety symptoms by nearly half – similar to what people experience with human therapists. Other research, including a review of over 80 studies, found that AI chatbots are especially helpful for improving mood, reducing stress and even helping people sleep better. In one study, a chatbot outperformed a self-help book in boosting mental health after just two weeks. While people often report feeling better after using these chatbots, scientists haven't yet confirmed exactly what's happening in the brain during those interactions. In other words, we know they work for many people, but we're still learning how and why.
Apps like Wysa have earned FDA Breakthrough Device designation, a status that fast-tracks promising technologies for serious conditions, suggesting they may offer real clinical benefit. Woebot, similarly, runs randomized clinical trials showing improved depression and anxiety symptoms in new moms and college students.

Yet while many mental health apps boast labels like 'clinically validated' or 'FDA approved,' those claims are often unverified. A review of top apps found that most made bold claims, but fewer than 22% cited actual scientific studies to back them up.

In addition, chatbots collect sensitive information about your mood metrics, triggers and personal stories. What if that data winds up in third-party hands such as advertisers, employers or hackers, a scenario that has occurred with genetic data? In a 2023 breach, nearly 7 million users of the DNA testing company 23andMe had their DNA and personal details exposed after hackers used previously leaked passwords to break into their accounts. Regulators later fined the company more than $2 million for failing to protect user data. Unlike clinicians, bots aren't bound by counseling ethics or privacy laws regarding medical information. You might be getting a form of cognitive behavioral therapy, but you're also feeding a database.

And sure, bots can guide you through breathing exercises or prompt cognitive reappraisal, but when faced with emotional complexity or crisis, they're often out of their depth. Human therapists tap into nuance, past trauma, empathy and live feedback loops. Can an algorithm say 'I hear you' with genuine understanding? Neuroscience suggests that supportive human connection activates social brain networks that AI can't reach. So while bot-delivered cognitive behavioral therapy may offer short-term symptom relief in mild to moderate cases, it's important to be aware of its limitations. For the time being, pairing bots with human care – rather than replacing it – is the safest move.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by Pooja Shree Chettiar, Texas A&M University.

Pooja Shree Chettiar does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


CBS News
Musk unveils Grok 4 update a day after xAI chatbot made antisemitic remarks
Elon Musk on Wednesday unveiled Grok 4, a new version of his X platform's AI chatbot. The update comes a day after the bot posted antisemitic content on the social media network.

Musk introduced the new model in a livestream on X late Wednesday, calling Grok 4 "the smartest AI in the world." "It really is remarkable to see the advancement of artificial intelligence and how quickly it is evolving," Musk said, adding that "AI is advancing vastly faster than any human."

He touted the model's virtues, claiming that if it were to take the SATs, it would achieve perfect scores every time, and would outsmart nearly every graduate student across disciplines. "Grok 4 is smarter than almost all graduate students in all disciplines, simultaneously," Musk said. "That's really something." Musk himself acknowledged that the pace of AI development is a little "terrifying."

The release of the new model comes a day after Grok 3 made antisemitic remarks on X, including one in which it praised Adolf Hitler. The posts were later deleted.

Musk's xAI, the company that developed the chatbot, addressed the controversial remarks in a statement Wednesday. "We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved," the company said.

Musk attributed Grok 3's remarks to shortcomings in the AI's ability to filter human input, writing on X, "Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially. That is being addressed."
Yahoo
TSMC Shares Climb After Sales Soar 39% on AI Boom
July 10 - Taiwan Semiconductor Manufacturing Company Limited (NYSE:TSM) shares rose about 1.5% in Thursday premarket trading after the chipmaker reported a strong lift in quarterly sales.

TSMC said revenue for the June quarter rose 39% year-over-year to NT$934 billion (New Taiwan dollars; about US$32 billion), topping the average analyst estimate of NT$928 billion. For June alone, sales reached NT$263.71 billion, down roughly 18% from May but up about 27% compared with June 2024. In the first half of fiscal 2025, TSMC posted NT$1,773.05 billion in revenue, marking a 40% increase versus the prior-year period.

The contract manufacturer supplies major customers such as Nvidia (NASDAQ:NVDA) and Apple (NASDAQ:AAPL), where demand for AI-related chips remains robust. Capacity constraints on advanced nodes and seasonal inventory shifts may explain June's sequential dip, analysts say. Chairman and CEO C.C. Wei noted that momentum in AI accelerator orders is continuing throughout 2025, and the company expects related revenue to more than double by year-end. TSMC will release its full second-quarter earnings on July 17, and investors will watch for guidance on capacity expansion and cost controls as global chip demand evolves.

Based on the one-year price targets offered by 17 analysts, the average target price for Taiwan Semiconductor Manufacturing Co Ltd is $228.33, with a high estimate of $270.00 and a low estimate of $119.37. The average target implies a downside of 1.51% from the current price of $231.84. Based on GuruFocus estimates, the estimated GF Value for Taiwan Semiconductor Manufacturing Co Ltd in one year is $221.11, suggesting a downside of 4.63% from the current price of $231.84. GF Value is GuruFocus' estimate of the fair value that the stock should trade at.
It is calculated based on the historical multiples the stock has traded at previously, as well as past business growth and future estimates of the business's performance. This article first appeared on GuruFocus.
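The "implied downside" figures quoted above are simple percentage gaps between a target price and the current price. A quick sketch of that arithmetic, using the numbers reported for TSM:

```python
# Percent move implied by going from the current price to a target price.
# Inputs are the analyst figures reported above for TSM.

def implied_move(target: float, current: float) -> float:
    """Return the signed percent change from current price to target."""
    return (target / current - 1) * 100

current_price = 231.84
avg_target = 228.33   # average of 17 analyst one-year price targets
gf_value = 221.11     # GuruFocus one-year GF Value estimate

print(f"Average target: {implied_move(avg_target, current_price):+.2f}%")
print(f"GF Value:       {implied_move(gf_value, current_price):+.2f}%")
```

A negative result is what analysts report as "downside"; the 1.51% and 4.63% figures in the text fall out of this calculation directly.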