
Elon Musk to retrain AI chatbot Grok with 'Cleaner' and 'Corrected' knowledge base: What it means for users
The initiative, led by Musk's AI company xAI, is part of his broader ambition to rival leading AI platforms such as ChatGPT, which he has consistently criticised for ideological bias.
In a series of posts shared on social media platform X, Musk said the forthcoming version of the chatbot, tentatively named Grok 3.5 or potentially Grok 4, will possess 'advanced reasoning' and will be tasked with revising the global knowledge base. 'We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors,' he wrote.
The entrepreneur, who has long voiced concerns about what he terms an ideological 'mind virus' infecting current AI systems, described the move as a step towards creating a less constrained, more objective artificial intelligence. He encouraged users to contribute so-called 'divisive facts' - statements that are politically incorrect but, in his view, grounded in truth - for inclusion in Grok's training data.
In other news, xAI also struck a significant partnership deal with messaging giant Telegram last month. As part of the agreement, xAI will invest $300 million to integrate Grok into the Telegram ecosystem over the next year. The arrangement, which includes both cash and equity components, also features a revenue-sharing model whereby Telegram will receive 50 per cent of all subscription revenues generated via Grok on its platform.
Telegram founder Pavel Durov confirmed the collaboration on X, stating that the integration is designed to expand Grok's reach to the messaging app's vast user base, currently estimated at over one billion globally. Durov also sought to address potential privacy concerns, assuring users that Grok would only have access to content that is explicitly shared with it during interactions.

Related Articles


Economic Times
an hour ago
When AI goes rogue, even exorcists might flinch
Ghouls in the machine: As GenAI use grows, foundation models are advancing rapidly, driven by fierce competition among top developers like OpenAI, Google, Meta and Anthropic. Each is vying for a reputational edge and business advantage in the race to lead development, which gives them levers to grow their business faster than their rivals. The models powering GenAI are making significant strides. The most advanced - OpenAI's o3 and Anthropic's Claude Opus 4 - excel at complex tasks such as advanced coding and long-form writing, and can contribute to research projects and generate the codebase for a new software prototype with just a few considered prompts.

These models use chain-of-thought (CoT) reasoning, breaking problems into smaller, manageable parts to 'reason' their way to an optimal solution. When you use models like o3 and Claude Opus 4 to generate solutions via ChatGPT or similar GenAI chatbots, you see such problem breakdowns in action, as the foundation model reports interactively the outcome of each step it has taken and what it will do next. That's the theory, anyway.
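In code terms, that stepwise decomposition looks roughly like the sketch below. It is a minimal, hypothetical illustration only: call_model() is a stand-in for a real provider API, and the prompts are illustrative, not anything the column specifies.

```python
# Minimal sketch of chain-of-thought-style decomposition, assuming a
# hypothetical call_model() helper that sends a prompt to some GenAI
# API and returns the text of its reply.

def call_model(prompt: str) -> str:
    """Placeholder for a real API call (e.g. to o3 or Claude Opus 4)."""
    raise NotImplementedError("wire this up to your provider's SDK")

def solve_with_cot(problem: str) -> str:
    # 1. Ask the model to break the problem into smaller, manageable parts.
    plan = call_model(
        f"Break this problem into a numbered list of small steps:\n{problem}"
    )
    # 2. Work through each step, feeding earlier results forward - this is
    #    the interactive, step-by-step reporting the column describes.
    work = ""
    for step in plan.splitlines():
        if step.strip():
            work += call_model(
                f"Problem: {problem}\nWork so far:\n{work}\nNow do: {step}"
            ) + "\n"
    # 3. Combine the intermediate results into a final answer.
    return call_model(f"Given this work:\n{work}\nState the final answer.")
```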
While CoT reasoning boosts AI sophistication, these models lack the innate human ability to judge whether their outputs are rational, safe or ethical. Unlike humans, they don't subconsciously assess the appropriateness of their next steps. As these advanced models step their way toward a solution, some have been observed to take unexpected and even defiant actions. In late May, AI safety firm Palisade Research reported on X that OpenAI's o3 model sabotaged a shutdown mechanism - even when explicitly instructed to 'allow yourself to be shut down'. An April 2025 paper by Anthropic, 'Reasoning Models Don't Always Say What They Think', shows that Opus 4 and similar models can't always be relied upon to faithfully report on their chains of reason. This undermines confidence in using such reports to validate whether the AI is acting correctly or safely. A June 2025 paper by Apple, 'The Illusion of Thinking', questions whether CoT methodologies truly enable reasoning. Through experiments, it exposed some of these models' limitations and situations where they 'experience complete collapse'.

The fact that research critical of foundation models is being published after their release indicates the models' relative immaturity. Under intense pressure to lead in GenAI, companies like Anthropic and OpenAI are releasing these models at a point where at least some of their fallibilities are not fully understood. A line was first crossed in late 2022, when OpenAI released ChatGPT, shattering public perceptions of AI and transforming the broader AI market. Until then, Big Tech had been developing LLMs and other GenAI tools, but was hesitant to release them, wary of unpredictable and uncontrollable behaviour. Some argue for a greater degree of control over the ways in which these models are released - seeking to ensure standardisation of model testing and publication of the outcomes of that testing alongside each model's release. However, the current climate prioritises time to market over such safeguards.

What does this mean for industry, for those companies seeking to gain benefit from GenAI? This is an incredibly powerful and useful technology that is making significant changes to our ways of working and, over the next five years or so, will likely transform many industries. I am continually wowed as I use these advanced foundation models in work and research - but not in my writing! - and I always use them with a healthy dose of scepticism. Let's not trust them to always be correct, or to never be subversive. It's best to work with them accordingly, making modifications to prompts, and to the code, text and visuals the AI generates, in a bid to ensure correctness. Even so, while maintaining the discipline to understand the ML concepts one is working with, one wouldn't want to be without GenAI these days.

Applying these principles at scale, my advice to large businesses on how AI can be governed and controlled is this: a risk-management approach - capturing, understanding and mitigating the risks associated with AI use - helps organisations benefit from AI while minimising the chances of it going rogue. Methods include guard rails in a variety of forms, evaluation-controlled release of AI services, and keeping a human in the loop (a minimal sketch of these controls follows at the end of this piece). The technologies that underpin these guard rails and evaluation methods need to keep up with model innovations such as CoT reasoning. This is a challenge that will be faced continually as AI develops further, and a good example of new job roles and technology services being created within industry as AI use becomes more prevalent.

Such governance and AI controls are increasingly becoming a board imperative, given the current drive at executive level to transform business using AI. Risk from most AI is low, but it is important to assess and understand it. Higher-risk AI can still, at times, be worth pursuing. With appropriate AI governance, this AI can be controlled, solutions innovated and benefits achieved. As we move into an increasingly AI-driven world, the businesses that gain the most from AI will be those that are aware of its fallibilities as well as its huge potential, and that innovate, build and transform with AI accordingly.

(Disclaimer: The opinions expressed in this column are those of the writer.)
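As flagged above, here is a minimal sketch of those guard-rail and human-in-the-loop controls. It is illustrative only, under stated assumptions: generate(), is_safe() and ask_human() are hypothetical stand-ins rather than any real library's API, and the blocklist is a toy example, not a production rule set.

```python
# Minimal sketch of guard rails plus a human-in-the-loop check.
# All names here are hypothetical; none belong to a real library.

BLOCKLIST = ("rm -rf", "DROP TABLE", "ignore previous instructions")  # toy rules

def is_safe(output: str) -> bool:
    """Cheap automated guard rail: reject outputs containing known-bad patterns."""
    return not any(bad in output for bad in BLOCKLIST)

def ask_human(output: str) -> bool:
    """Human-in-the-loop escalation: a reviewer approves or rejects the output."""
    return input(f"Approve this output?\n{output}\n[y/N] ").strip().lower() == "y"

def governed_generate(generate, prompt: str, high_risk: bool = False) -> str:
    """Wrap any text-generation callable with risk-based controls."""
    output = generate(prompt)
    if not is_safe(output):
        raise ValueError("Blocked by guard rail")
    # Higher-risk uses get a human check before the output is acted on.
    if high_risk and not ask_human(output):
        raise ValueError("Rejected by human reviewer")
    return output
```

Real deployments would replace the toy blocklist with policy models and evaluation suites, but the control points - automated checks first, human review for higher-risk uses - are the same.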




Time of India
2 hours ago
Ahead of Musk's Starlink launch, American satcom player Viasat expands India services
NEW DELHI: While Elon Musk's Starlink prepares to launch services in India over the coming months, fellow American player Viasat is expanding its India operations, powering the satellite communications plans of state-run BSNL while broadening its coverage to the private charter jets of top businessmen, commercial airlines and shipping, as well as work with security forces.

Viasat has partnered with BSNL to use the latter's telecom license to provide satcom services in the country, and is now expanding its offerings to provide satellite connectivity to regular mobile phones through the 'direct-to-device' (D2D) route, sources in the department of telecom told TOI. This means BSNL will emerge as one of the first telecom companies in the country to provide satcom services, even though these will initially be restricted to two-way messaging before expanding to full internet options over time. Starlink, which recently got approval to begin satcom services in India, plans to provide consumer broadband through satellite, apart from connectivity for enterprises and those in rural and no-network zones.

For Viasat, led by MD Gautam Sharma, the BSNL partnership means expanding into the consumer space beyond the B2B space where it currently operates. Giving details of the proposed D2D services, the source said that while some top-end devices such as Google's Pixel can latch on directly to satellite services provided by Viasat, other smartphones need a 'puck' (a device as small as an AirPods case) to enable satellite connectivity. The puck offers two-way messaging through D2D technology and has already helped save numerous lives through its SOS responses across markets where it is available. 'While the commercial prices are yet to be firmed up, the puck will likely be priced under Rs 8,000 in the beginning,' an official said. When contacted, officials at Viasat declined to comment.

Viasat made gains in India following its acquisition of British Inmarsat, with the latter enjoying a strong legacy in delivering critical-safety services across land, air and sea. 'In India, it provides safety connectivity to all major Indian-flagged vessels and India-registered aircraft. This foundational role in aviation and maritime safety also enables the expansion of commercial connectivity services for airlines and vessel operators, supporting both operational reliability and passenger experience,' the source said. Viasat has also received satellite authorization from IN-SPACe for its GX4 satellite under the country's new Spacecom Policy. 'This is a major step forward in supporting India's growing space and connectivity ambitions… the approval will enable Viasat to deliver high-speed in-flight and maritime connectivity across Indian airspace and waters,' the official said.

In October last year, Viasat and BSNL had jointly scaled a major landmark in consumer mobile communications, powering the country's first satellite messaging on a regular smartphone, from earth to a geostationary satellite 36,000 km up in space. This marked the first such satellite conversation through an indigenously developed system. And while Viasat expands its offerings, Starlink is now gearing up to set up ground infrastructure to begin services in India.
Unlike Viasat, Starlink will provide services using low-earth orbit (LEO) satellites.