Latest news with #Grok4


Egypt Independent
4 days ago
- Business
- Egypt Independent
Elon Musk isn't happy with his AI chatbot. Experts worry he's trying to make Grok 4 in his image
Musk was not pleased. 'Major fail, as this is objectively false. Grok is parroting legacy media,' Musk wrote, even though Grok cited data from government sources such as the Department of Homeland Security. Within three days, Musk promised to deliver a major Grok update that would 'rewrite the entire corpus of human knowledge,' calling on X users to send in 'divisive facts' that are 'politically incorrect, but nonetheless factually true' to help train the model. 'Far too much garbage in any foundation model trained on uncorrected data,' he wrote. On Friday, Musk announced the new model, called Grok 4, will be released just after July 4th.

The exchanges, and others like them, raise concerns that the world's richest man may be trying to influence Grok to follow his own worldview – potentially leading to more errors and glitches, and surfacing important questions about bias, according to experts. AI is expected to shape the way people work, communicate and find information, and it's already impacting areas such as software development, healthcare and education. The decisions that powerful figures like Musk make about the technology's development could be critical, especially considering Grok is integrated into one of the world's most popular social networks – and one where the old guardrails around the spread of misinformation have been removed. While Grok may not be as popular as OpenAI's ChatGPT, its inclusion in Musk's social media platform X has put it in front of a massive digital audience.

'This is really the beginning of a long fight that is going to play out over the course of many years about whether AI systems should be required to produce factual information, or whether their makers can just simply tip the scales in the favor of their political preferences if they want to,' said David Evan Harris, an AI researcher and lecturer at UC Berkeley who previously worked on Meta's Responsible AI team.
A source familiar with the situation told CNN that Musk's advisers have told him Grok 'can't just be molded' into his own point of view, and that he understands that. xAI did not respond to a request for comment.

Concerns about Grok following Musk's views

For months, users have questioned whether Musk has been tipping Grok to reflect his worldview. In May, the chatbot randomly brought up claims of a white genocide in South Africa in responses to completely unrelated queries. In some responses, Grok said it was 'instructed to accept as real white genocide in South Africa'. Musk was born and raised in South Africa and has a history of arguing that a 'white genocide' has been committed in the nation. A few days later, xAI said an 'unauthorized modification' in the early morning hours Pacific time pushed the AI chatbot to 'provide a specific response on a political topic' that violates xAI's policies.

As Musk directs his team to retrain Grok, others in the AI large language model space, like Cohere co-founder Nick Frosst, believe Musk is trying to create a model that pushes his own viewpoints. 'He's trying to make a model that reflects the things he believes. That will certainly make it a worse model for users, unless they happen to believe everything he believes and only care about it parroting those things,' Frosst said.

What it would take to retrain Grok

It's common for AI companies like OpenAI, Meta and Google to constantly update their models to improve performance, according to Frosst. But retraining a model from scratch to 'remove all the things (Musk) doesn't like' would take a lot of time and money – not to mention degrade the user experience – Frosst said. 'And that would make it almost certainly worse,' Frosst said. 'Because it would be removing a lot of data and adding in a bias.'
Another way to change a model's behavior without completely retraining it is to insert prompts and adjust what are called weights within the model's code. This process could be faster than totally retraining the model, since it retains the existing knowledge base. Prompting entails instructing a model to respond to certain queries in a specific way, whereas weights influence an AI model's decision-making process. Dan Neely, CEO of Vermillio, which helps protect celebrities from AI-generated deepfakes, told CNN that xAI could adjust Grok's weights and data labels in specific areas and topics. 'They will use the weights and labeling they have previously in the places that they are seeing (as) kind of problem areas,' Neely said. 'They will simply go into doing greater level of detail around those specific areas.' Musk didn't detail the changes coming in Grok 4, but did say it will use a 'specialized coding model.'

Bias in AI

Musk has said his AI chatbot will be 'maximally truth seeking,' but all AI models have some bias baked in, because they are influenced by humans who make choices about what goes into the training data. 'AI doesn't have all the data that it should have. When given all the data, it should ultimately be able to give a representation of what's happening,' Neely said. 'However, lots of the content that exists on the internet already has a certain bent, whether you agree with it or not.'

It's possible that in the future, people will choose their AI assistant based on its worldview. But Frosst said he believes AI assistants known to have a particular perspective will be less popular and useful. 'For the most part, people don't go to a language model to have ideology repeated back to them; that doesn't really add value,' he said. 'You go to a language model to get it to do something for you, to do a task for you.'
Ultimately, Neely said he believes authoritative sources will end up rising back to the top as people seek places they can trust. But 'the journey to get there is very painful, very confusing,' Neely said and 'arguably, has some threats to democracy.'


Arabian Post
25-06-2025
- Business
- Arabian Post
Musk Lays Claim to Redefine Human Knowledge with AI
Elon Musk has disclosed plans to overhaul xAI's conversational system Grok by essentially reconstructing its entire knowledge foundation. Frustrated with what he describes as 'garbage' and 'uncorrected data' in the model, Musk intends to launch Grok 3.5 – potentially rebranded as Grok 4 – with enhanced reasoning capabilities that will first rewrite the entire corpus of human knowledge before retraining the model on that curated dataset.

Musk minced no words on X, characterising the endeavour as necessary to purge errors and integrate missing information – a process he says will counter the mainstream constraints he believes afflict existing AI systems. He also solicited 'divisive facts' from users – material that is politically incorrect yet supposedly factual – to enrich training, a move that elicited responses including Holocaust denial claims and conspiracy narratives.

Experts have raised alarms about the proposal. Gary Marcus, professor emeritus at New York University, warned that the plan evokes a totalitarian impulse, likening it to Orwellian efforts to rewrite history for ideological alignment. Other ethicists emphasise that any attempt to curate a knowledge base to reflect particular values risks embedding hard-to-detect bias through subtle manipulation – what some describe as 'data poisoning' – even more insidiously than overt interventions.

Grok's performance history reveals why Musk may feel compelled to act. Earlier this year, an 'unauthorised modification' led the model to spontaneously reference a conspiracy theory known as 'white genocide' in South Africa – often in contexts unrelated to the topic – raising significant concerns about its reliability. That glitch prompted xAI to launch an internal review and reinforce measures to increase the bot's transparency and stability. Institutional interest in Grok continues despite these setbacks.
Sources told Reuters that entities such as the US Department of Homeland Security have been testing the system for data analysis and reporting, though officials clarified no formal endorsement has been issued. Deployment of Grok 3.5 or Grok 4 is expected by late 2025, with Musk pivoting xAI's effort away from public scrutiny and towards curated, Musk-aligned content. Critics caution that this shift could entrench a corporate agenda within the core of the AI, producing outputs that reflect ideological preferences rather than objective accuracy.

This initiative occurs against a backdrop of broader AI regulation efforts. While governments wrestle with proposals ranging from state-level moratoria to risk-based frameworks, the question of how AI systems calibrate values remains contested. Musk's move intensifies that debate: will AI be a vessel for neutral knowledge, or a tool shaped – perhaps weaponised – by powerful individuals?

The discussion now centers on transparency and accountability. Analysts argue that redefining a model's data foundation under the stewardship of a single corporate leader demands oversight mechanisms akin to those in utilities or public infrastructure. Ethical guidelines suggest dataset documentation, traceability, and multi-stakeholder governance are essential to mitigate risks of ideological capture. Academic work on 'model disgorgement' offers technical approaches to remove or correct problematic knowledge, but experts emphasise that full transparency remains practically elusive at scale.

Musk's declaration marks a turning point not just for Grok, but for the trajectory of AI governance. It anticipates a future in which elite designers may directly shape the content of civilisation's shared memory. As work begins on this ambitious rewrite, key questions emerge: who determines what qualifies as 'error'? Who adjudicates 'missing information'?
And how will the public ensure that history remains a mosaic of perspectives, not a curated narrative?


Mint
23-06-2025
- Business
- Mint
Elon Musk to retrain AI chatbot Grok with ‘Cleaner' and ‘Corrected' knowledge base: What it means for users
Tech billionaire Elon Musk has announced a sweeping new direction for his artificial intelligence chatbot Grok, revealing plans to retrain the system using what he calls a cleaner and more accurate version of human knowledge. The initiative, led by his AI company xAI, is part of Musk's broader ambition to rival leading AI platforms such as ChatGPT, which he has consistently criticised for ideological bias.

In a series of posts shared on social media platform X, Musk said the forthcoming version of the chatbot, tentatively named Grok 3.5 or potentially Grok 4, will possess 'advanced reasoning' and will be tasked with revising the global knowledge base. 'We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors,' he wrote. The entrepreneur, who has long voiced concerns about what he terms an ideological 'mind virus' infecting current AI systems, described the move as a step towards creating a less constrained, more objective artificial intelligence. He encouraged users to contribute so-called 'divisive facts' – statements that are politically incorrect but, in his view, grounded in truth – for inclusion in Grok's training data.

In other news, xAI also struck a significant partnership deal with messaging giant Telegram last month. As part of the agreement, xAI will invest $300 million to integrate Grok into the Telegram ecosystem over the next year. The arrangement, which includes both cash and equity components, also features a revenue-sharing model whereby Telegram will receive 50 per cent of all subscription revenues generated via Grok on its platform. Telegram founder Pavel Durov confirmed the collaboration on X, stating that the integration is designed to expand Grok's reach to the messaging app's vast user base, currently estimated at over one billion globally.
Durov also sought to address potential privacy concerns, assuring users that Grok would only have access to content that is explicitly shared with it during interactions.


Arabian Post
23-06-2025
- Business
- Arabian Post
Musk Orders Grok to Rebuild Human Knowledge from Ground Up
Elon Musk has cast aside the existing Grok AI training corpus and directed xAI to construct an entirely new foundation – one that excludes what he calls 'garbage' and 'uncorrected data' – and then retrain the model from that vetted base. This overhaul is intended to power the next-generation Grok, tentatively dubbed Grok 3.5 or Grok 4, with 'advanced reasoning' capabilities that Musk says will enable it to 'rewrite the entire corpus of human knowledge' by filling gaps and purging errors.

Musk announced on X that the project will begin by having Grok itself reinterpret existing knowledge, after which the cleansed dataset will serve as the bedrock for retraining. He described current AI models, including his own, as riddled with flawed content, drawing a sharp contrast with 'woke' competitors such as OpenAI's ChatGPT. Musk further urged X users to submit 'divisive facts' that are politically incorrect yet factually accurate to guide and expand Grok's knowledge base.

This initiative follows a series of missteps by Grok, including controversial responses that have embarrassed Musk. In mid-June, a user highlighted that Grok claimed right-wing violence in the United States was more frequent and lethal than left-wing violence – prompting Musk to label the response a 'major fail' and commit to fixing it. In May, an internal modification led to Grok repeatedly discussing alleged 'white genocide' in South Africa even when unrelated subjects were in focus. xAI acknowledged this was due to an unauthorised prompt change and pledged to review its internal controls.

Grok debuted in November 2023 and underwent successive upgrades. The most recent iteration, Grok-3, was launched in February 2025 and incorporated advanced capabilities such as image analysis, document comprehension, and the ability to perform web searches. Its performance across benchmarks for mathematics and scientific reasoning surpassed rival offerings.
The new rebuild, however, signals a shift in training methodology: moving away from sprawling web-sourced datasets toward a carefully curated corpus vetted by both AI and human input.

Reactions to Musk's pronouncement have been varied. NYU emeritus professor Gary Marcus criticised the proposition as a kind of 'Orwellian rewriting of history,' warning it risks cementing ideology under the guise of factual correction. Similarly, University of Milan logician Bernardino Sassoli de' Bianchi warned that allowing ideological filters to guide knowledge reconstruction 'is wrong on every conceivable level'. Defenders argue that AI models today frequently perpetuate misinformation, hallucinations, or ideological bias, and that Musk's pitch for a 'clean slate' approach – using Grok to vet its own training data – may offer a path to curtail falsehoods. Yet critics insist the process itself risks embedding bias, and that the invitation to collect divisive yet supposedly factual statements may further amplify fringe claims under the guise of completeness.

Beyond shaping the contours of the next Grok, Musk's announcement underscores xAI's ideological positioning. A central theme for Musk has been opposition to what he perceives as widespread 'wokeness' in mainstream AI platforms. He has previously eased content moderation on X and introduced community flagging features to support unfiltered discourse. His vision of Grok aligns with that broader mission: an AI unconstrained by conventional cultural norms while claiming to uphold factual fidelity.

From a technical standpoint, rebuilding an AI model's training data is a formidable undertaking. It requires not only algorithmic retracing of source material but also rigorous vetting, human oversight, and transparent audit trails. Last month's controversy over unauthorised internal tweaks prompted xAI to promise stricter review protocols and to publish its system prompts publicly on GitHub.
Whether this openness will persist through future updates remains to be seen.

The debate also taps into a broader philosophical dilemma: can knowledge truly be cleansed of bias, or does the act of curation itself inscribe new lenses? Musk's critics frame the proposal as a power play over historical narrative, while his supporters cast it as an effort to purge misinformation from AI systems. The stakes are high: an AI trained on a newly authored corpus could reshape public discourse, educational pathways, and even policy-making tools.

As xAI staff prepare to execute this ambitious vision, the process and its substance will be closely watched. Will Grok 3.5 earn a reputation for rigorous analysis and factual depth? Or will it encounter fresh controversy as its knowledge base is reconstructed under Musk's watchful eye? One thing is certain: Grok's next chapter promises to redefine how artificial intelligence models are built and trusted.


Time of India
22-06-2025
- Business
- Time of India
Elon Musk wants to retrain xAI's chatbot Grok to clear "ChatGPT's woke" biases and "garbage"
Elon Musk, the founder of xAI, has said that he will retrain his artificial intelligence chatbot and ChatGPT rival Grok. Musk took to X (formerly known as Twitter) and shared a post stating that he will be removing what he terms "ChatGPT's woke" biases and other "garbage" from the foundational knowledge of Grok.

Elon Musk to retrain chatbot Grok

In a series of posts shared on X, Elon Musk announced that the upcoming version of Grok, likely to be called Grok 4, will be trained on revised information curated by Grok 3.5's advanced reasoning capabilities. 'We will use Grok 3.5… to rewrite the entire corpus of human knowledge, adding missing information and deleting errors,' Musk wrote, adding that current AI models are trained on 'far too much garbage'. The move follows Musk's repeated criticism of rival AI model ChatGPT, which he has accused of carrying a "woke mind virus", or ideological slant, in its responses. Musk has also asked users to submit 'divisive facts' that will be used in the retraining of Grok. 'Please reply to this post with divisive facts for @Grok training. By this I mean things that are politically incorrect, but nonetheless factually true', wrote Musk.

Elon Musk's xAI issues clarification on Grok's responses on white genocide

Recently, some X users reported that Grok repeatedly generated responses referring to the theory of "white genocide" in South Africa. Users who tagged @grok in posts about sports, entertainment, and general topics received replies discussing racial violence in South Africa, including references to the anti-apartheid chant 'Kill the Boer'. After this incident, Elon Musk's xAI issued a clarification.
In a statement shared on X, xAI said that the modification to Grok violated the company's internal policies and core values, leading the chatbot to repeatedly reference politically sensitive topics. The company stated that the change was detected and reversed promptly, though it did not disclose who was responsible for the alteration.