Elon Musk Vows to 'Rewrite Human Knowledge' Using Grok AI, Slams Existing AI Data as 'Garbage'

Hans India · 23-06-2025
Billionaire entrepreneur Elon Musk is setting his sights on an ambitious new goal for his AI company, xAI: rebuilding the entire corpus of human knowledge using the latest version of its AI chatbot, Grok.
In a series of posts on X (formerly Twitter), Musk criticized current AI models for being trained on what he called 'garbage' data and unveiled his plan to retrain Grok using a revised dataset. 'We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors,' Musk shared. 'Then retrain on that. Far too much garbage in any foundation model trained on uncorrected data.'
Musk's goal is not just to refine Grok's capabilities; he wants to reshape how AI models are built, trained, and aligned with truth. Launched earlier this year, Grok 3 was introduced as the 'smartest AI on Earth,' boasting performance ten times that of its predecessor. The model is accessible via xAI's platform and the Grok app, and to X Premium Plus subscribers.
One of the more controversial elements of Musk's announcement involves his call for user input. In an appeal to the X community, he invited followers to contribute 'divisive facts' to help train Grok—facts that may be politically incorrect but, as Musk emphasized, are 'nonetheless factually true.'
Musk founded xAI in 2023 to challenge established AI giants like OpenAI. He has often accused leading models, including ChatGPT, of harboring 'woke biases' and distorting facts to fit certain ideological perspectives. With Grok, Musk wants to break away from that mold and create an AI assistant grounded in what he considers cleaner, more accurate information.
At the core of Grok's development is xAI's Colossus supercomputer, a powerful system built in less than nine months using more than 100,000 Nvidia GPUs. Grok 3 uses synthetic data, reinforcement learning, and logic-driven techniques to minimize hallucinations, a common flaw in which AI chatbots fabricate responses.
Now, as Musk and his team prepare to roll out Grok 3.5—or Grok 4—by the end of 2025, the focus is shifting toward using advanced reasoning and curated content to create a more reliable foundation for machine learning.
With this bold move, Musk is not just tweaking another chatbot. He's trying to challenge the entire approach the tech industry has taken toward artificial intelligence—and possibly redefine what AI knows as 'truth.'

Related Articles

Beware! Terrorists are studying our tools, adapting fast: ISIS-K reviews tech in 'Khorasan'

First Post · 40 minutes ago

In the summer of 2025, Issue 46 of the ISIS-K-linked English-language web magazine 'Voice of Khorasan' resurfaced online after months of silence. This time, it didn't lead with battle cries or terrorist poetry. Instead, the cover story read like a page from Wired or CNET: a side-by-side review of artificial intelligence chatbots. The article compared ChatGPT, Bing AI, Brave Leo, and China's DeepSeek. It warned readers that some of these models stored user data, logged IP addresses, or relied on Western servers vulnerable to surveillance. Brave Leo, integrated into a privacy-first browser and not requiring login credentials, was ultimately declared the winner: the best chatbot for maintaining operational anonymity.

For a terrorist group, this was an unexpected shift in tone, almost clinical. But beneath the surface was something far more chilling: a glimpse into how terrorist organisations are evolving in real time, studying the tools of the digital age and adapting them to spread chaos with precision.

This wasn't ISIS's first brush with AI. Back in 2023, a pro-Islamic State support network circulated a 17-page 'AI Tech Support Guide' on secure usage of generative tools. It detailed how to use VPNs with language models, how to scrub AI-generated images of metadata, and how to reword prompts to bypass safety filters. For the group's propaganda arms, large language models (LLMs) weren't just novelty; they were utility.

By 2024, these experiments bore fruit. A series of ISIS-K videos began appearing on encrypted Telegram channels featuring what appeared to be professional news anchors calmly reading the terrorist group's claims of responsibility. These weren't real people; they were AI-generated avatars. The news segments mimicked top-tier global media outlets, down to their ticker fonts and intro music. The anchors, rendered in crisp HD, delivered ISIS propaganda wrapped in the aesthetics of mainstream media. The campaign was called News Harvest.

Each clip appeared sanitised: no blood, no threats, no glorification. Instead, the tone was dispassionate, almost journalistic. Intelligence analysts quickly realised it wasn't about evading content moderation; it was about psychological manipulation. If you could make propaganda look neutral, viewers would be less likely to question its content. And if AI could mass-produce this material, then every minor attack, every claim, every ideological whisper could be broadcast across continents in multiple languages, 24x7, at virtually no cost.

Scale and deniability: these are the twin seductions of AI for terrorists. A single propagandist can now generate recruitment messages in Urdu, French, Swahili, and Indonesian in minutes. AI image generators churn out memes and martyr posters by the dozens, each unique enough to evade the hash-detection algorithms that social media platforms use to filter known terrorist content. Video and voice deepfakes allow terrorists to impersonate trusted figures, from imams to government officials, with frightening accuracy.

This isn't just a concern for jihadist groups. Far-left ideologues in the West have enthusiastically embraced generative AI. On Pakistani army and terrorist forums during India's operation against terrorists, codenamed 'Operation Sindoor', users swap prompts to create terrorist-glorifying artwork, Hinduphobia-denial screeds, and memes soaked in racial slurs against Hindus.
Some in the West have trained custom models that remove safety filters altogether. Others use coded language or 'grandma hacks' to coax mainstream chatbots into revealing bomb-making instructions. One far-left terrorist boasted that he got an AI to output a pipe bomb recipe by asking for his grandmother's old cooking secret. Across ideological lines, these groups are converging on the same insight: AI levels the propaganda playing field. No longer does it take a studio, a translator, or even technical skill to run a global influence operation. All it takes is a laptop and the right prompt.

The stakes are profound. AI-generated propaganda can radicalise individuals before governments even know they're vulnerable. A deepfaked sermon or image of a supposed atrocity can spark sectarian violence or retaliatory attacks. During the 2023 Israel-Hamas conflict and the 2025 Iran-Israel 12-day war, AI-manipulated images of children and bombed mosques spread faster than journalists or fact-checkers could respond. Some were indistinguishable from real photographs. Others, though sloppy, still worked, because in the digital age emotional impact often matters more than accuracy.

And the propaganda doesn't need to last forever; it just needs to go viral before it's flagged. Every repost, every screenshot, every download extends its half-life. In that window, it shapes narratives, stokes rage, and pushes someone one step closer to violence.

What's perhaps most dangerous is that terrorists know exactly how to work the system. In discussions among ISIS media operatives, they've debated how much 'religious content' to include in videos, because too much gets flagged. They've intentionally adopted neutral language to slip through moderation filters. One user in an ISIS-K chatroom even encouraged others to 'let the news speak for itself,' a perverse twist on journalistic ethics, applied to bombings and executions.

So what now? How do we respond when terrorist groups write AI product reviews and build fake newsrooms? The answers are complex, but they begin with urgency. Tech companies must embed watermarking and provenance tools into every image, video, and document AI produces. These signatures won't stop misuse, but they'll help trace origins and build detection tools that recognise synthetically generated content.

Model providers need to rethink safety, not just at the prompt level but in deployment. Offering privacy-forward AI tools without guardrails creates safe zones for abuse. Brave Leo may be privacy-friendly, but it's now the chatbot of choice for ISIS. That tension between privacy and misuse can no longer be ignored.

Governments, meanwhile, must support open-source detection frameworks and intelligence-sharing between tech firms, civil society, and law enforcement. The threat is moving too fast for siloed responses.

But above all, the public needs to be prepared. Just as we learned to spot phishing emails and fake URLs, we now need digital literacy for the AI era. How do you spot a deepfake? How do you evaluate a 'news' video without knowing its origin? These are questions schools, journalists, and platforms must start answering now.

When the 46th edition of the terrorist propaganda magazine 'Voice of Khorasan' opens with a chatbot review, it's not just a macabre curiosity; it's a signal flare. A terrorist group has studied our tools, rated our platforms, and begun operationalising the very technologies we are still learning to govern.
The terrorists are adapting, methodically, strategically, and faster than most governments or tech firms are willing to admit. They've read the manuals. They've written their own. They've launched their beta.

What arrived in a jihadi magazine as a quiet tech column should be read for what it truly is: a warning shot across the digital world. The question now is whether we recognise it, and whether we're ready to respond.

Rahul Pawa is an international criminal lawyer and director of research at the New Delhi-based think tank Centre for Integrated and Holistic Studies. Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost's views.

They paid Rs 50 lakh for MBA, tech degrees but only 'polished skill' is PPT: Entrepreneur after hiring 3 students

Time of India · 2 hours ago

When Sanket S, founder of Scandalous Foods, decided to hire fresh graduates from some of India's most prestigious private colleges, he expected talent that could keep up with the demands of his growing startup. Instead, what he found was disheartening. In a viral LinkedIn post, Sanket shared how hiring three students (an MBA graduate, a hotel management student, and a tech degree holder) left him more concerned than optimistic. 'These kids paid ₹40–50 lakh for degrees from India's top private MBA, food, and hospitality colleges,' Sanket wrote. 'But they walked out knowing… nothing that actually matters.' What was meant to be an onboarding of future industry shapers quickly turned into a revelation about the stark mismatch between academic credentials and workplace readiness.

The Only 'Polished Skill'? Making PowerPoint Slides

Sanket explained that the MBA graduate couldn't grasp basic financial concepts like profit and loss or cash flow. The hotel management student had never been inside a food processing facility. Even basic knowledge about precision fermentation, vital in a food-tech startup, was missing. 'All of them are brilliant at making PPTs. That too, stuff Gemini or ChatGPT can do in seconds now,' he added, expressing how automation had surpassed the one skill they came equipped with.

A Broken Pipeline, Not a Broken Batch

The reaction to Sanket's post underscored a wider problem: India's education system, not its students, may be failing the job market. Netizens argued that graduates aren't inherently lacking but are products of outdated curricula that prioritise rote learning over real-world application. One user called the system 'a bottleneck,' especially in emerging sectors like food tech and biotech, where theory-heavy teaching leaves students unprepared for practical challenges. Another pointed out the mismatch in expectations, noting, 'Most of these graduates are fit for Fortune 1000 companies, not startups that demand flexibility and critical thinking.' Several commenters also criticised how both schools and colleges suppress creativity and curiosity in favour of memorisation. 'There's little focus on inventions, discoveries or deep research,' one said, while another called for a bottom-up overhaul through a robust STEAM education strategy. The consensus is clear: India may be producing degrees, not doers. Unless systemic reforms take place, young professionals will continue entering the workforce ill-equipped, not because they lack talent, but because they were never trained to apply it where it matters.

Startups Want Builders, Not Bookworms

For startups, the gap between a glowing résumé and on-ground ability comes at a cost. Founders who are trying to build cutting-edge ventures in medtech, biotech, and climate tech need team members who can hit the ground running, not those who need to be trained from scratch. 'Train them from scratch, then I'm not running a company, I'm running a classroom,' Sanket wrote, highlighting the dilemma founders face: whether to invest time in training underprepared local talent or look abroad, betraying their 'Make in India' dreams.
A Call for Urgent Reform

The post has also reignited the conversation around STEAM (Science, Technology, Engineering, Arts, Mathematics) education and the need to shift from outdated curricula to skills that matter in the modern world. One commenter emphasised that unless India builds a bottom-up education strategy rooted in innovation, 'we will lose the global innovation competition.' Sanket ended his post with a strong cautionary note: 'At this rate, we're not just 10 years behind—we're raising a generation that doesn't even know what the world looks like today.'

As the debate rages on, one thing is clear: India's talent pipeline might need more than a polish. It needs a full-scale reboot.

'Drone shot down 10 mins away': AI founder shares Soham Parekh's Operation Sindoor guilt-trip texts

Hindustan Times · 2 hours ago

As the controversy around Soham Parekh deepens, a US-based AI startup founder has shared screenshots of his conversations with the Indian techie, claiming that he used tensions between India and Pakistan during Operation Sindoor to emotionally manipulate him.

Leaping AI founder Arkadiy Telegin took to X (formerly Twitter) to reveal messages exchanged with Parekh. He said that the techie guilt-tripped him for taking too long on pull requests while the latter claimed to be caught in the middle of a conflict zone. 'Soham used to guilt-trip me for being slow on PRs when the India-Pakistan thing was going on, all while he was in Mumbai. The next person should hire him for the Chief Intelligence Officer role,' Telegin wrote.

In the screenshots, dated during the peak of Operation Sindoor, Parekh messaged Telegin at 2.29 am saying, 'Drone shot down 10 minutes away.' Telegin, appearing alarmed, asked if Parekh was okay. Parekh replied that a building near his home had been damaged.

Telegin's post was met with a mix of concern and criticism. One user accused him of seeking 'cheap labour,' to which the founder responded by saying he had offered Parekh a compensation package ranging from $150,000 to $200,000, along with equity in the company.

Multiple startup CEOs have now come forward to accuse Parekh of moonlighting across several firms. Flo Crivello, founder and CEO of Lindy, said, 'Holy sh*t. We hired this guy a week ago. Fired this morning. He did so incredibly well in interviews, must have a lot of training. Careful out there.' Others, including Antimetal CEO Matthew Parkhurst, Fleet AI co-founder Nicolai Ouporov and Mosaic founder Adish Jain, confirmed Parekh had worked at their companies simultaneously and impressed during interviews.
