
Elon Musk isn't happy with his AI chatbot. Experts worry he's trying to make Grok 4 in his image
Musk was not pleased.
'Major fail, as this is objectively false. Grok is parroting legacy media,' Musk wrote, even though Grok cited data from government sources such as the Department of Homeland Security. Within three days, Musk promised to deliver a major Grok update that would 'rewrite the entire corpus of human knowledge,' calling on X users to send in 'divisive facts' that are 'politically incorrect, but nonetheless factually true' to help train the model.
'Far too much garbage in any foundation model trained on uncorrected data,' he wrote.
On Friday, Musk announced the new model, called Grok 4, will be released just after July 4th.
The exchange, and others like it, raise concerns that the world's richest man may be trying to influence Grok to follow his own worldview – potentially leading to more errors and glitches, and raising important questions about bias, according to experts. AI is expected to shape the way people work, communicate and find information, and it's already affecting areas such as software development, healthcare and education.
And the decisions that powerful figures like Musk make about the technology's development could be critical – especially considering Grok is integrated into one of the world's most popular social networks, one where the old guardrails around the spread of misinformation have been removed. While Grok may not be as popular as OpenAI's ChatGPT, its inclusion in Musk's social media platform X has put it in front of a massive digital audience.
'This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to,' said David Evan Harris, an AI researcher and lecturer at UC Berkeley who previously worked on Meta's Responsible AI team.
A source familiar with the situation told CNN that Musk's advisers have told him Grok 'can't just be molded' into his own point of view, and that he understands that.
xAI did not respond to a request for comment.
Concerns about Grok following Musk's views
For months, users have questioned whether Musk has been tipping Grok to reflect his worldview.
In May, the chatbot randomly brought up claims of a white genocide in South Africa in responses to completely unrelated queries. In some responses, Grok said it was 'instructed to accept as real white genocide in South Africa'.
Musk was born and raised in South Africa and has a history of arguing that a 'white genocide' has been committed in the nation.
A few days later, xAI said an 'unauthorized modification' made in the early morning hours Pacific time had pushed the AI chatbot to 'provide a specific response on a political topic' that violated xAI's policies.
As Musk directs his team to retrain Grok, others in the AI large language model space, like Cohere co-founder Nick Frosst, believe Musk is trying to create a model that pushes his own viewpoints.
'He's trying to make a model that reflects the things he believes. That will certainly make it a worse model for users, unless they happen to believe everything he believes and only care about it parroting those things,' Frosst said.
What it would take to retrain Grok
It's common for AI companies like OpenAI, Meta and Google to constantly update their models to improve performance, according to Frosst.
But retraining a model from scratch to 'remove all the things (Musk) doesn't like' would take a lot of time and money – not to mention degrade the user experience – Frosst said.
'And that would make it almost certainly worse,' Frosst said. 'Because it would be removing a lot of data and adding in a bias.'
A Grok account on X is displayed on a phone screen. Jakub Porzycki/NurPhoto/Shutterstock
Another way to change a model's behavior without completely retraining it is to insert prompts and adjust what are called weights – the numerical parameters a model learns during training. This process can be faster than totally retraining the model since it retains its existing knowledge base.
Prompting would entail instructing a model to respond to certain queries in a specific way, whereas weights influence an AI model's decision-making process.
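To make that distinction concrete, here is a minimal, purely illustrative sketch in Python – not xAI's actual code or tooling – in which a system prompt steers behavior by changing only the input text, while a single fine-tuning step nudges the weights of a tiny stand-in model built with PyTorch.

```python
# Illustrative sketch only: a toy stand-in for how prompting and weight
# adjustment differ. None of this is xAI's code; the "model" is a tiny
# linear layer standing in for a large language model.
import torch
import torch.nn as nn

# 1) Prompting: the model's weights are untouched; only the text fed to the
#    model changes, which steers how it answers certain queries.
system_prompt = "Answer concisely and cite primary government sources."
user_query = "Example question"          # hypothetical query
model_input = f"{system_prompt}\n\nUser: {user_query}"

# 2) Weight adjustment (fine-tuning): gradient descent on new examples nudges
#    the model's numerical parameters, changing its behavior even when no
#    special prompt is present.
toy_model = nn.Linear(8, 2)              # stand-in for a billion-parameter LLM
optimizer = torch.optim.SGD(toy_model.parameters(), lr=0.01)

example = torch.randn(1, 8)              # stand-in for a tokenized training example
target = torch.tensor([1])               # the output the trainer wants reinforced

optimizer.zero_grad()
loss = nn.functional.cross_entropy(toy_model(example), target)
loss.backward()
optimizer.step()                         # weights shift slightly toward the target behavior
```

The point of the contrast: a prompt can be swapped in minutes and leaves the underlying model unchanged, while a weight update alters the model itself and persists across every future query.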
Dan Neely, CEO of Vermillio, which helps protect celebrities from AI-generated deepfakes, told CNN that xAI could adjust Grok's weights and data labels in specific areas and topics.
'They will use the weights and labeling they have previously in the places that they are seeing (as) kind of problem areas,' Neely said. 'They will simply go into doing greater level of detail around those specific areas.'
Musk didn't detail the changes coming in Grok 4, but did say it will use a 'specialized coding model.'
Bias in AI
Musk has said his AI chatbot will be 'maximally truth seeking,' but all AI models have some bias baked in because they are influenced by humans who make choices about what goes into the training data.
'AI doesn't have all the data that it should have. When given all the data, it should ultimately be able to give a representation of what's happening,' Neely said. 'However, lots of the content that exists on the internet already has a certain bent, whether you agree with it or not.'
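To illustrate Neely's point with made-up numbers – a toy sketch, not real training data or a real model – a system that only learns how often each claim appears in its corpus will echo the corpus's slant rather than ground truth:

```python
# Toy illustration with hypothetical numbers: a "model" that only learns how
# often each claim appears in its training corpus will repeat the corpus's
# slant, regardless of which claim is actually true.
from collections import Counter

corpus = ["claim_a"] * 80 + ["claim_b"] * 20      # hypothetical, imbalanced training set
base_rates = {c: n / len(corpus) for c, n in Counter(corpus).items()}

def most_likely(rates):
    """Always answer with the majority view of the training data."""
    return max(rates, key=rates.get)

print(base_rates)               # {'claim_a': 0.8, 'claim_b': 0.2}
print(most_likely(base_rates))  # 'claim_a' -- the data's bent, not ground truth
```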
It's possible that in the future, people will choose their AI assistant based on its worldview. But Frosst said he believes AI assistants known to have a particular perspective will be less popular and useful.
'For the most part, people don't go to a language model to have ideology repeated back to them, that doesn't really add value,' he said. 'You go to a language model to get it to do something for you, do a task for you.'
Ultimately, Neely said he believes authoritative sources will end up rising back to the top as people seek places they can trust.
But 'the journey to get there is very painful, very confusing,' Neely said, and 'arguably, has some threats to democracy.'