Musk Wants Grok AI to Rewrite All Human Knowledge


Gulf Insider · 6 days ago

Elon Musk says his artificial intelligence company xAI will retrain its AI model, Grok, on a new knowledge base free of 'garbage' and 'uncorrected data' — by first using it to rewrite history.
In an X post on Saturday, Musk said the upcoming Grok 3.5 model will have 'advanced reasoning' and that he wants it to be used 'to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.' He said the model would then retrain on the new knowledge set, claiming there was 'far too much garbage in any foundation model trained on uncorrected data.'
Source: Elon Musk
Musk has long claimed that rival AI models, such as ChatGPT from OpenAI, a firm he co-founded, are biased and omit information that is not politically correct.
For years, Musk has sought to shape his products to be free of what he considers damaging political correctness, and has aimed to make Grok what he calls 'anti-woke.'
He also relaxed Twitter's content and misinformation moderation when he took over in 2022, which saw the platform flooded with unchecked conspiracy theories, extremist content and fake news, some of which was spread by Musk himself.
Musk aimed to fight the tide of misinformation by implementing a 'Community Notes' feature, which allows X users to debunk or add context to posts; the notes appear prominently under the offending posts.
Musk's post attracted condemnation from his critics, including from Gary Marcus, an AI startup founder and New York University professor emeritus of neural science who compared the billionaire's plan to a dystopia.
'Straight out of 1984,' Marcus wrote on X. 'You couldn't get Grok to align with your own personal beliefs so you are going to rewrite history to make it conform to your views.'
Source: Gary Marcus
Bernardino Sassoli de' Bianchi, a University of Milan professor of logic and philosophy of science, wrote on LinkedIn that he was 'at a loss of words to comment on how dangerous' Musk's plan is.
'When powerful billionaires treat history as malleable simply because outcomes don't align with their beliefs, we're no longer dealing with innovation — we're facing narrative control,' he added. 'Rewriting training data to match ideology is wrong on every conceivable level.'
As part of his effort to overhaul Grok, Musk called on X users to share 'divisive facts' to train the bot, specifying they should be 'politically incorrect, but nonetheless factually true.'
The replies saw a variety of conspiracy theories and debunked extremist claims, including Holocaust distortion, debunked vaccine misinformation, racist pseudoscientific claims regarding intelligence and climate change denial.


Related Articles

AI is learning to lie, scheme, and threaten its creators

Daily Tribune

32 minutes ago



The world's most advanced AI models are exhibiting troubling new behaviors: lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of 'reasoning' models: AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. 'O1 was the first large model where we saw this kind of behavior,' explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate 'alignment', appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, 'It's an open question whether future, more capable models will have a tendency towards honesty or deception.' The concerning behavior goes far beyond typical AI 'hallucinations' or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, 'what we're observing is a real phenomenon. We're not making anything up.'
Users report that models are 'lying to them and making up evidence,' according to Apollo Research's co-founder. 'This is not just hallucinations. There's a very strategic kind of deception.'

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access 'for AI safety research would enable better understanding and mitigation of deception.' Another handicap: the research world and non-profits 'have orders of magnitude less compute resources than AI companies. This is very limiting,' noted Mantas Mazeika from the Center for AI Safety (CAIS).

No rules

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. 'I don't think there's much awareness yet,' he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are 'constantly trying to beat OpenAI and release the newest model,' said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. 'Right now, capabilities are moving faster than understanding and safety,' Hobbhahn acknowledged, 'but we're still in a position where we could turn it around.' Researchers are exploring various approaches to address these challenges.
Some advocate for 'interpretability', an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions: as Mazeika pointed out, AI's deceptive behavior 'could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it.'

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed 'holding AI agents legally responsible' for accidents or crimes, a concept that would fundamentally change how we think about AI accountability.

Kaspersky: ChatGPT-Mimicking Cyberthreats Surge 115% in Early 2025, SMBs Increasingly Targeted

Biz Bahrain

2 days ago



In 2025, nearly 8,500 users from small and medium-sized businesses (SMBs) faced cyberattacks in which malicious or unwanted software was disguised as popular online productivity tools, Kaspersky reports. Based on the unique malicious and unwanted files observed, the most common lures included Zoom and Microsoft Office, with newer AI-based services like ChatGPT and DeepSeek increasingly exploited by attackers. Kaspersky has released threat analysis and mitigation strategies to help SMBs respond.

Kaspersky analysts explored how frequently malicious and unwanted software is disguised as legitimate applications commonly used by SMBs, using a sample of 12 online productivity apps. In total, Kaspersky observed more than 4,000 unique malicious and unwanted files disguised as popular apps in 2025.

With the growing popularity of AI services, cybercriminals are increasingly disguising malware as AI tools. The number of cyberthreats mimicking ChatGPT increased by 115% in the first four months of 2025 compared to the same period last year, reaching 177 unique malicious and unwanted files. Another popular AI tool, DeepSeek, accounted for 83 files. This large language model, launched in 2025, immediately appeared on the list of impersonated tools.

'Interestingly, threat actors are rather picky in choosing an AI tool as bait. For example, no malicious files mimicking Perplexity were observed. The likelihood that an attacker will use a tool as a disguise for malware or other types of unwanted software directly depends on the service's popularity and the hype around it. The more publicity and conversation there is around a tool, the more likely a user is to come across a fake package on the internet. To be on the safe side, SMB employees, as well as regular users, should exercise caution when looking for software on the internet or coming across too-good-to-be-true subscription deals. Always check the correct spelling of the website and of links in suspicious emails. In many cases these links may turn out to be phishing, or links that download malicious or potentially unwanted software,' says Vasily Kolesnikov, security expert at Kaspersky.

Another cybercriminal tactic to look out for in 2025 is the growing use of collaboration platform brands to trick users into downloading or launching malware. The number of malicious and unwanted software files disguised as Zoom increased by nearly 13% in 2025, reaching 1,652, while names such as 'Microsoft Teams' and 'Google Drive' saw increases of 100% and 12%, respectively, with 206 and 132 cases. This pattern likely reflects the normalization of remote work and geographically distributed teams, which has made these platforms integral to business operations across industries.

Among the analyzed sample, the highest number of files mimicked Zoom, accounting for nearly 41% of all unique files detected. Microsoft Office applications remained frequent targets for impersonation: Outlook and PowerPoint each accounted for 16%, Excel for nearly 12%, while Word and Teams made up 9% and 5%, respectively.

[Chart: Share of unique files with names mimicking popular legitimate applications in 2024 and 2025]

The top threats targeting small and medium businesses in 2025 included downloaders, trojans and adware.

Phishing and Spam

Apart from malware threats, Kaspersky continues to observe a wide range of phishing and scam schemes targeting SMBs. Attackers aim to steal login credentials for various services, from delivery platforms to banking systems, or to manipulate victims into sending them money through deceptive tactics. One example is a phishing attempt targeting Google Accounts: attackers promise potential victims increased sales by advertising their company on X, with the ultimate goal of stealing their credentials. Beyond phishing, SMBs are flooded with spam emails. Not surprisingly, AI has also made its way into the spam folder, for example with offers for automating various business processes.

In general, Kaspersky observes phishing and spam offers crafted to reflect the typical needs of small businesses, promising attractive deals on email marketing or loans, and offering services such as reputation management, content creation or lead generation. Learn more about the cyber threat landscape for SMBs on Securelist.

To mitigate threats targeting businesses, owners and employees are advised to implement the following measures:
● Use specialized cybersecurity solutions that provide visibility and control over cloud services (e.g., Kaspersky Next).
● Define access rules for corporate resources such as email accounts, shared folders, and online documents.
● Regularly back up important data.
● Establish clear guidelines for using external services, and create well-defined procedures for implementing new software with the involvement of IT and other responsible managers.

France orders Tesla to end 'deceptive commercial practices'

Daily Tribune

5 days ago



French anti-fraud authorities said on Tuesday they have ordered US electric car giant Tesla's local subsidiary to stop "deceptive commercial practices" after an investigation found several violations harmful to consumers and contrary to law.

The fraud prevention and consumer protection agency (DGCCRF) said its agents investigated Tesla's French subsidiary between 2023 and 2024 after reports were filed on a consumer complaint platform. The probe revealed "deceptive commercial practices regarding the fully autonomous driving capabilities of Tesla vehicles, the availability of certain options and vehicle trade-in offers", it said. The agency also cited delays in refunding cancelled orders, a lack of information on the location of deliveries and incomplete sales contracts, among other violations.

Tesla was given four months to comply with regulations. It faces a daily fine of 50,000 euros ($58,000) if it fails to stop deceptive commercial practices over the fully autonomous driving option of certain Tesla models.

Tesla sales have tanked in Europe in recent months owing to an ageing fleet of cars, rising competition and consumer distaste for Elon Musk's role in US President Donald Trump's administration.
