Latest news with #Grok3.5


Arabian Post
6 days ago
- Business
- Arabian Post
Musk Lays Claim to Redefine Human Knowledge with AI
Elon Musk has disclosed plans to overhaul xAI's conversational system Grok by essentially reconstructing its entire knowledge foundation. Frustrated with what he describes as 'garbage' and 'uncorrected data' in the model, Musk intends to launch Grok 3.5—potentially rebranded as Grok 4—with enhanced reasoning capabilities that will first rewrite the entire corpus of human knowledge before retraining the model on that curated dataset. Musk minced no words on X, characterising the endeavour as necessary to purge errors and integrate missing information—a process he says will counter the mainstream constraints he believes afflict existing AI systems. He also solicited 'divisive facts' from users—material that is politically incorrect yet supposedly factual—to enrich training, a move that elicited responses including Holocaust denial claims and conspiracy narratives.

Experts have raised alarms about the proposal. Gary Marcus, professor emeritus at New York University, warned that the plan evokes a totalitarian impulse, likening it to Orwellian efforts to rewrite history for ideological alignment. Other ethicists emphasise that any attempt to curate a knowledge base to reflect particular values risks embedding hard‑to‑detect bias through subtle manipulation—what some describe as 'data poisoning'—even more insidiously than overt interventions.

Grok's performance history reveals why Musk may feel compelled to act. Earlier this year, an 'unauthorised modification' led the model to spontaneously reference a conspiracy theory known as 'white genocide' in South Africa—often in contexts unrelated to the topic—raising significant concerns about its reliability. That glitch prompted xAI to launch an internal review and reinforce measures to increase the bot's transparency and stability. Institutional interest in Grok continues despite these setbacks.
Sources told Reuters that entities such as the US Department of Homeland Security have been testing the system for data analysis and reporting, though officials clarified no formal endorsement has been issued. Deployment of Grok 3.5 or Grok 4 is expected by late 2025, with Musk pivoting xAI's effort away from public scrutiny and towards curated, Musk‑aligned content. Critics caution that this shift could entrench a corporate agenda within the core of the AI, producing outputs that reflect ideological preferences rather than objective accuracy.

This initiative occurs against a backdrop of broader AI regulation efforts. While governments wrestle with proposals ranging from state-level moratoria to risk-based frameworks, the question of how AI systems calibrate values remains contested. Musk's move intensifies that debate: will AI be a vessel for neutral knowledge, or a tool shaped—perhaps weaponised—by powerful individuals?

The discussion now centres on transparency and accountability. Analysts argue that redefining a model's data foundation under the stewardship of a single corporate leader demands oversight mechanisms akin to those in utilities or public infrastructure. Ethical guidelines suggest dataset documentation, traceability, and multi‑stakeholder governance are essential to mitigate risks of ideological capture. Academic work on 'model disgorgement' offers technical approaches to remove or correct problematic knowledge, but experts emphasise that full transparency remains practically elusive at scale.

Musk's declaration marks a turning point not just for Grok, but for the trajectory of AI governance. It anticipates a future in which elite designers may directly shape the content of civilisation's shared memory. As work begins on this ambitious rewrite, key questions emerge: who determines what qualifies as 'error'? Who adjudicates 'missing information'?
And how will the public ensure that history remains a mosaic of perspectives, not a curated narrative?
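The 'dataset documentation and traceability' that analysts call for above can be made concrete. Below is a minimal, hypothetical sketch in Python of one tamper-evident record per curation decision; the field names and the `audit_record` helper are illustrative assumptions, not drawn from any real xAI process.

```python
# Hypothetical sketch of dataset traceability: one tamper-evident record per
# curation decision, keyed by a content hash of the affected document.
# Field names are illustrative, not from any real pipeline.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(doc_text, source_url, action, reviewer):
    """Record who did what to which document, keyed by a content hash."""
    return {
        "sha256": hashlib.sha256(doc_text.encode("utf-8")).hexdigest(),
        "source": source_url,
        "action": action,  # e.g. "kept", "edited", "removed"
        "reviewer": reviewer,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

rec = audit_record("Example passage.", "https://example.org/a", "kept", "panel-1")
print(json.dumps(rec, indent=2))
```

Because each record hashes the document it describes, any later silent edit to the underlying text would no longer match its audit trail — the property multi-stakeholder oversight schemes rely on.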


Axios
24-06-2025
- Business
- Axios
Elon Musk wants to put his thumb on the AI scale
Elon Musk still isn't happy with how his AI platform answers divisive questions, pledging in recent days to retrain Grok so it will answer in ways more to his liking.

Why it matters: Efforts to steer AI in particular directions could exacerbate the danger of a technology already known for its convincing but inaccurate hallucinations.

The big picture: Expect to see more of this in the future, as governments and businesses may choose or even create their own AI models that try to sway generated responses on everything from LGBTQ rights to territorial disputes.

Driving the news: In a series of tweets over the past week, Musk has expressed frustration at the ways Grok was answering questions and suggested an extensive effort to put his thumb on the scale. "We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors," Musk wrote. "Then retrain on that. Far too much garbage in any foundation model trained on uncorrected data." Musk also put out a call for people to suggest "divisive facts," adding that he meant things that are "politically incorrect, but nonetheless factually true." The suggestions, though, included examples of Holocaust denialism and other conspiracy theories. An xAI representative did not immediately respond to a request for comment.

Reality check: AI models are already hallucinating in ways that suggest failed attempts by company staff to manipulate outputs. Last month, Grok started injecting references to "white genocide" in South Africa into unrelated conversations, which the company later attributed to an "unauthorized change" to its system. At the other end of the political spectrum, Google and Meta appeared to make an effort to correct for a lack of diversity in image training data, which resulted in AI-generated images of Black founding fathers and racially diverse Nazis.
Between the lines: These early stumbles highlight the challenges of tweaking large language models, but researchers say there are more sophisticated ways to inject preferences that could be both more pervasive and harder to detect. The most obvious way is to change the data that models are trained on, focusing on data sources that align with one's goals. "That would be fairly expensive but I wouldn't put it past them to try," says AI researcher and Humane Intelligence CEO Rumman Chowdhury, who worked at Twitter until Musk dismissed her in November 2022. AI makers can also adjust models in post-training, using human feedback to reward answers that reflect the desired output. A third way is through distillation, a popular process for creating smaller models based on larger ones: creators could take the knowledge of one model and use it to build a smaller one that offers an ideological twist on the original.

What they're saying: AI ethicists say the issue is broader than just Musk and Grok, with many companies exploring how they can tweak answers to appeal to users, regulators and other constituencies. "These conversations are already happening," Chowdhury said. "Elon is just dumb enough to say the quiet part out loud." Chowdhury said Musk's comments should be a wake-up call that AI models are in the hands of a few companies with their own set of incentives that may differ from those of the people using their services. "There's no neutral economic structure," Chowdhury said, suggesting that rather than asking companies to "do good" or "be good," perhaps powerful AI models should be treated similarly to utilities.

Yes, but: It's also not the case that current AI — or any generative AI, really — can be free from bias. The training data reflects biases based on whose perspectives are over- or underrepresented. There's also a host of decisions large and small made by model creators, as well as other variables.
Meta, for example, recently said it wants to remove bias from its large language models, but experts say that's more about catering to conservatives than achieving some breakthrough in model neutrality.
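The first method Chowdhury describes — curating training sources toward one's own goals — can be illustrated with a toy Python sketch. The corpus, labels, and filter below are entirely hypothetical; the point is only that even a crude filter measurably shifts the label balance a downstream model would learn from.

```python
# Toy illustration of ideological data curation (hypothetical data, not any
# real pipeline): a simple filter over training documents shifts the label
# balance a downstream model would learn from.
from collections import Counter

corpus = [
    {"text": "claim A", "stance": "pro"},
    {"text": "claim B", "stance": "con"},
    {"text": "claim C", "stance": "con"},
    {"text": "claim D", "stance": "pro"},
    {"text": "claim E", "stance": "con"},
]

def curate(docs, preferred):
    """Keep every preferred document, but only every other one of the rest."""
    kept = [d for d in docs if d["stance"] == preferred]
    others = [d for d in docs if d["stance"] != preferred]
    return kept + others[::2]

before = Counter(d["stance"] for d in corpus)
after = Counter(d["stance"] for d in curate(corpus, "pro"))
print(before["pro"], before["con"])  # 2 3  (majority 'con')
print(after["pro"], after["con"])    # 2 2  (parity after curation)
```

The same effect scales up invisibly: the filter never alters any document, so the curated corpus looks authentic while its overall composition has changed — which is why researchers call this route harder to detect than prompt edits.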


Gulf Insider
24-06-2025
- Business
- Gulf Insider
Musk Wants Grok AI to Rewrite All Human Knowledge
Elon Musk says his artificial intelligence company xAI will retrain its AI model, Grok, on a new knowledge base free of 'garbage' and 'uncorrected data' — by first using it to rewrite history. In an X post on Saturday, Musk said the upcoming Grok 3.5 model will have 'advanced reasoning' and that he wanted it to be used 'to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.' He said the model would then retrain on the new knowledge set, claiming there was 'far too much garbage in any foundation model trained on uncorrected data.'

Musk has long claimed that rival AI models, such as ChatGPT from OpenAI, a firm he co-founded, are biased and omit information that is not politically correct. For years, Musk has looked to shape products to be free from what he considers damaging political correctness and has aimed to make Grok what he calls 'anti-woke.' He also relaxed Twitter's content and misinformation moderation when he took over in 2022, which saw the platform flooded with unchecked conspiracy theories, extremist content and fake news, some of which was spread by Musk himself. Musk aimed to fight the tide of misinformation by implementing a 'Community Notes' feature, allowing X users to debunk or add context to posts, with notes shown prominently under offending posts.

Musk's post attracted condemnation from his critics, including from Gary Marcus, an AI startup founder and New York University professor emeritus of neural science, who compared the billionaire's plan to a dystopia. 'Straight out of 1984,' Marcus wrote on X. 'You couldn't get Grok to align with your own personal beliefs so you are going to rewrite history to make it conform to your views.' Bernardino Sassoli de' Bianchi, a University of Milan professor of logic and philosophy of science, wrote on LinkedIn that he was 'at a loss for words to comment on how dangerous' Musk's plan is.
'When powerful billionaires treat history as malleable simply because outcomes don't align with their beliefs, we're no longer dealing with innovation — we're facing narrative control,' he added. 'Rewriting training data to match ideology is wrong on every conceivable level.'

As part of his effort to overhaul Grok, Musk called on X users to share 'divisive facts' to train the bot, specifying they should be 'politically incorrect, but nonetheless factually true.' The replies included a variety of conspiracy theories and debunked extremist claims, including Holocaust distortion, vaccine misinformation, racist pseudoscientific claims about intelligence, and climate change denial.


Mint
23-06-2025
- Business
- Mint
Elon Musk to retrain AI chatbot Grok with ‘Cleaner' and ‘Corrected' knowledge base: What it means for users
Tech billionaire Elon Musk has announced a sweeping new direction for his artificial intelligence chatbot Grok, revealing plans to retrain the system using what he calls a cleaner and more accurate version of human knowledge. The initiative, led by his AI company xAI, is part of Musk's broader ambition to rival leading AI platforms such as ChatGPT, which he has consistently criticised for ideological bias.

In a series of posts shared on social media platform X, Musk said the forthcoming version of the chatbot, tentatively named Grok 3.5 or potentially Grok 4, will possess 'advanced reasoning' and will be tasked with revising the global knowledge base. 'We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors,' he wrote.

The entrepreneur, who has long voiced concerns about what he terms an ideological 'mind virus' infecting current AI systems, described the move as a step towards creating a less constrained, more objective artificial intelligence. He encouraged users to contribute so-called 'divisive facts' — statements that are politically incorrect but, in his view, grounded in truth — for inclusion in Grok's training data.

In other news, xAI also struck a significant partnership deal with messaging giant Telegram last month. As part of the agreement, xAI will invest $300 million to integrate Grok into the Telegram ecosystem over the next year. The arrangement, which includes both cash and equity components, also features a revenue-sharing model whereby Telegram will receive 50 per cent of all subscription revenues generated via Grok on its platform. Telegram founder Pavel Durov confirmed the collaboration on X, stating that the integration is designed to expand Grok's reach to the messaging app's vast user base, currently estimated at over one billion globally.
Durov also sought to address potential privacy concerns, assuring users that Grok would only have access to content that is explicitly shared with it during interactions.


Arabian Post
23-06-2025
- Business
- Arabian Post
Musk Orders Grok to Rebuild Human Knowledge from Ground Up
Elon Musk has cast aside the existing Grok AI training corpus and directed xAI to construct an entirely new foundation—one that excludes what he calls 'garbage' and 'uncorrected data'—and then retrain the model from that vetted base. This overhaul is intended to power the next-generation Grok, tentatively dubbed Grok 3.5 or Grok 4, with 'advanced reasoning' capabilities that Musk says will enable it to 'rewrite the entire corpus of human knowledge' by filling gaps and purging errors.

Musk announced on X that the project will begin by having Grok itself reinterpret existing knowledge, after which the cleansed dataset will serve as the bedrock for retraining. He described current AI models—including his own—as riddled with flawed content, drawing a sharp contrast with 'woke' competitors such as OpenAI's ChatGPT. Musk further urged X users to submit 'divisive facts' that are politically incorrect yet factually accurate to guide and expand Grok's knowledge base.

This initiative follows a series of missteps by Grok, including controversial responses that have embarrassed Musk. In mid-June, a user highlighted that Grok claimed right-wing violence in the United States was more frequent and lethal than left-wing violence—prompting Musk to label the response a 'major fail' and commit to fixing it. In May, an internal modification led to Grok repeatedly discussing alleged 'white genocide' in South Africa even when unrelated subjects were in focus. xAI acknowledged this was due to an unauthorised prompt change and pledged to review its internal controls.

Grok debuted in November 2023 and has undergone successive upgrades. The most recent iteration, Grok‑3, was launched in February 2025 and incorporated advanced capabilities such as image analysis, document comprehension, and the ability to perform web searches. Its performance across benchmarks for mathematics and scientific reasoning surpassed rival offerings.
The new rebuild, however, signals a shift in training methodology: moving away from sprawling web-sourced datasets toward a carefully curated corpus vetted by both AI and human input.

Reactions to Musk's pronouncement have been varied. NYU emeritus professor Gary Marcus criticised the proposition as a kind of 'Orwellian rewriting of history,' warning it risks cementing ideology under the guise of factual correction. Similarly, University of Milan logician Bernardino Sassoli de' Bianchi warned that allowing ideological filters to guide knowledge reconstruction 'is wrong on every conceivable level'.

Defenders argue that AI models today frequently perpetuate misinformation, hallucinations, and ideological bias, and that Musk's 'clean slate' approach—using Grok to vet its own training data—may offer a path to curtail falsehoods. Yet critics insist the process itself risks embedding bias: the invitation to collect divisive yet supposedly factual statements may amplify fringe claims under the guise of completeness.

Beyond shaping the contours of the next Grok, Musk's announcement underscores xAI's ideological positioning. A central theme for Musk has been opposition to what he perceives as widespread 'wokeness' in mainstream AI platforms. He has previously eased content moderation on X and introduced community flagging features to support unfiltered discourse. His vision of Grok aligns with that broader mission: an AI unconstrained by conventional cultural norms while claiming to uphold factual fidelity.

From a technical standpoint, rebuilding an AI model's training data is a formidable undertaking. It requires not only algorithmic retracing of source material but also rigorous vetting, human oversight, and transparent audit trails. Last month's controversy over unauthorised internal tweaks prompted xAI to promise stricter review protocols and to publish its system prompts publicly on GitHub.
Whether this openness will persist through future updates remains to be seen.

The debate also taps into a broader philosophical dilemma: can knowledge truly be cleansed of bias, or does the act of curation itself inscribe new lenses? Musk's critics frame the proposal as a power play over historical narrative, while his supporters cast it as an effort to purge misinformation from AI systems. The stakes are high: an AI trained on a newly authored corpus could reshape public discourse, educational pathways, and even policy-making tools.

As xAI staff prepare to execute this ambitious vision, the process and its substance will be closely watched. Will Grok 3.5 earn a reputation for rigorous analysis and factual depth? Or will it encounter fresh controversy as its knowledge base is reconstructed under Musk's watchful eye? One thing is certain: Grok's next chapter promises to redefine how artificial intelligence models are built and trusted.