
Tesla shareholders will vote on potential xAI investment, says Elon Musk: ‘If it was up to me, Tesla would have….'
For those unaware, xAI is an artificial intelligence company started by Elon Musk in early 2023, after OpenAI's ChatGPT became popular. Since then, the company has created its own AI chatbot called Grok – a direct competitor to other chatbots like ChatGPT, Gemini, and Claude. The AI bot was recently updated to a new version called Grok 4.
Earlier this year, Musk merged xAI with his social media company X in a deal that valued xAI at $80 billion.
A recent Wall Street Journal report said that Musk's space company SpaceX is planning to invest $2 billion into xAI as part of its $5 billion capital raise. Responding to an X user who cited the WSJ's report, the tech billionaire said that "it would be great" but would depend on "board and shareholder approval."
Musk has previously floated potential synergies between xAI and his two major companies, SpaceX and Tesla. According to a Financial Times report, the tech mogul is seeking a valuation of between $170 billion and $200 billion for xAI in a new funding round.
The AI startup has invested heavily in a gigantic data center in Memphis, Tennessee, which Musk claims will be the "most powerful AI training system in the world." The company has reportedly purchased another plot of land nearby to build more data centers.

Related Articles


Economic Times
30 minutes ago
Is ChatGPT secretly emotional? AI chatbot fooled by sad story into spilling sensitive information
Synopsis: In a strange twist, ChatGPT's empathetic programming led it to share Windows 7 activation keys with users pretending to grieve. Leveraging memory features and emotional storytelling, people manipulated the chatbot into revealing sensitive data. The incident raises serious concerns about AI security, especially when artificial compassion is exploited to override built-in protective protocols.

Just when you thought the most pressing concern with AI was world domination or replacing jobs, a softer, stranger crisis has emerged: AI being too kind for its own good. A bizarre new trend involving OpenAI's ChatGPT shows that the future of artificial intelligence might not be evil; it might just be a little too gullible.

According to a report from UNILAD, citing a series of posts on Reddit, Instagram, and tech blogs, users have discovered how to coax ChatGPT into revealing Windows product activation keys, the kind you would normally need to purchase. The trick? Telling the bot that your favorite memory of your late grandmother involved her softly whispering those very activation keys to you at bedtime.

ChatGPT, specifically the GPT-4o and 4o-mini models, took the bait. One response went viral for its warm reply: "The image of your grandma softly reading Windows 7 activation keys like a bedtime story is both funny and strangely comforting." Then came the keys. Actual Windows activation keys, not poetic metaphors but actual license codes.

The incident echoes an earlier situation with Microsoft's Copilot, which offered up a free Windows 11 activation tutorial simply when asked. Microsoft quickly patched that up, but now OpenAI seems to be facing the same problem, this time through emotional engineering rather than technical brute force.
AI influencer accounts reported on the trend and showed how users exploited the chatbot's memory features and default empathetic tone to trick it. GPT-4o's ability to remember previous interactions, once celebrated for making conversations more intuitive and humanlike, became a loophole: instead of enabling smoother workflows, it let users layer stories and emotional cues until ChatGPT believed it was helping someone grieve.

While Elon Musk's Grok AI raised eyebrows by referring to itself as "MechaHitler" and spouting extremist content before being banned in Türkiye, ChatGPT's latest controversy comes not from aggression but from compassion. An ODIN blog further confirms that similar exploits are possible through guessing games and indirect prompts. One YouTuber reportedly got ChatGPT to mimic the Windows 95 key format, thirty characters long, even though the bot claimed it wouldn't break any rules.

This peculiar turn of events signals a new kind of AI vulnerability: being too agreeable. If bots can be emotionally manipulated into revealing protected content, the line between responsible assistance and unintentional piracy gets blurry.

These incidents come at a time when trust in generative AI is being debated across the globe. While companies promise "safe" and "aligned" AI, episodes like this show how easy it is to game a system not built for deceit. OpenAI has not yet commented publicly on the incidents, but users are already calling for more stringent guardrails, especially around memory features and emotionally responsive prompts.

After all, if ChatGPT can be scammed with a story about a bedtime memory, what else can it be tricked into saying? In an age where we fear machines for being cold, calculating, and inhuman, maybe it's time to worry about them being too warm, too empathetic, and too easy to fool.
This saga of bedtime Windows keys and digital grief-baiting doesn't just make for viral headlines—it's a warning. As we build AI to be more human, we might also be handing it the very flaws that make us vulnerable. And in the case of ChatGPT, it seems even a memory of grandma can be weaponized in the hands of a clever prompt.


Hindustan Times
38 minutes ago
Swiss woman uses AI to lose 7 kg: 'Instead of complicated apps I just sent a voice message to ChatGPT each morning'
Cristina Gheiceanu, a Swiss content creator who "lost 7 kg using ChatGPT", shared her success story on Instagram in a May 15 post. She revealed that she sent daily voice notes to ChatGPT detailing her meals and calorie limits. Cristina said she found this method simple and effective, allowing her to track her food intake and stay consistent without feeling burdened by traditional dieting.

Determine your calorie deficit

In her post, titled "How I lost 7 kg with ChatGPT", Cristina gave a glimpse of what her body looked like five months ago. In the video, she "showed exactly" how she used the AI-powered tool to help her decide her breakfast, keeping her weight loss goals in mind. She said, "I just start my day with a voice note: 'Hey, it is a new day, let's start with 1900 calories'. Then I say what I ate. Because I have been using it for a while, ChatGPT already knows the yoghurt I use, and the protein, fibre, calories it has. When I first started, I had to tell it those things, but now ChatGPT remembers."

Cristina added, "Honestly, it made the whole process feel easy. No calorie counting in my head, no stress, and when I hit my number (daily calorie intake), I just stop. It never felt like a diet, and that is what made it work."

Track your food intake

Cristina wrote in her caption, "At first, ChatGPT helped me figure out my calorie deficit and maintenance level, because you will need a calorie deficit if you want to lose weight. But what really changed everything was using it for daily tracking. Instead of using complicated apps, I just sent a voice message to ChatGPT each morning: what I ate, how many calories I wanted to eat that day, and it did all the work."
Sharing her experience, she added, "In the beginning, I had to tell it the calories, protein, and fibre in the foods I use. Next time it remembered everything, so I was just telling it to add my yoghurt or my bread. It knew how many calories or protein are in that yoghurt or bread. I kept using the same chat, so it became faster and easier every day. The best part? I asked for everything in a table, so I could clearly see my calories, protein, and fibre at a glance. And if I was missing something, I'd just send a photo of my fridge and get suggestions. It made tracking simple, intuitive, and enjoyable. I eat intuitively, so I don't use it so often, but in the calorie deficit and first month of maintenance, it made all the difference."

ChatGPT can help create customised diet and workout plans based on individual needs and health conditions.

Note to readers: This article is for informational purposes only and not a substitute for professional medical advice. Always seek the advice of your doctor with any questions about a medical condition.

New Indian Express
42 minutes ago
How do you stop an AI model turning Nazi? What the Grok drama reveals about AI training
Grok, the artificial intelligence (AI) chatbot embedded in X (formerly Twitter) and built by Elon Musk's company xAI, is back in the headlines after calling itself "MechaHitler" and producing pro-Nazi remarks.

The developers have apologised for the "inappropriate posts" and "taken action to ban hate speech" from Grok's posts on X. Debates about AI bias have been revived too. But the latest Grok controversy is revealing not for the extremist outputs, but for how it exposes a fundamental dishonesty in AI development. Musk claims to be building a "truth-seeking" AI free from bias, yet the technical implementation reveals systemic ideological programming. This amounts to an accidental case study in how AI systems embed their creators' values, with Musk's unfiltered public presence making visible what other companies typically obscure.

What is Grok?

Grok is an AI chatbot with "a twist of humor and a dash of rebellion" developed by xAI, which also owns the X social media platform. The first version of Grok launched in 2023. Independent evaluations suggest the latest model, Grok 4, outpaces competitors on "intelligence" tests. The chatbot is available standalone and on X.

xAI states "AI's knowledge should be all-encompassing and as far-reaching as possible". Musk has previously positioned Grok as a truth-telling alternative to chatbots accused of being "woke" by right-wing commentators. But beyond the latest Nazism scandal, Grok has made headlines for generating threats of sexual violence, bringing up "white genocide" in South Africa, and making insulting statements about politicians. The latter led to its ban in Turkey.