
Want to work for Elon Musk's xAI? It is hiring engineers to work on Grok app amid fake news scandal
Musk rebukes Grok for claims he 'took' Stephen Miller's wife
Tech billionaire Elon Musk's artificial intelligence company, xAI, is hiring engineers to build native Grok apps for both macOS and Windows. Igor Babuschkin, a co-founder and engineer at xAI, re-shared a post by Sulaiman Khan Ghori that read, "Hiring engineers at @xai to help build our Mac product. If you have a track record of writing high-performance Swift and AppKit applications, would love to chat!" "Join us to work on the X/Grok macOS app," Babuschkin wrote on X.

The post was also shared by Tesla CEO Elon Musk, who confirmed the company is building apps for both platforms. "We're building both macOS and Windows apps. Can't live in the browser forever!" he tweeted.

Musk rebuked his own artificial intelligence (AI) chatbot, Grok, on Sunday after it incorrectly verified a false X post purporting to show the tech billionaire taking a swipe at White House deputy chief of staff Stephen Miller. The fabricated post, which has since been deleted, took advantage of the explosive rift between Musk and President Trump last week, fallout that has caught Miller and his wife, Katie Miller, in the crosshairs. Katie Miller, who served as an adviser and spokesperson for Musk at the Department of Government Efficiency (DOGE), was among those who left the White House alongside the Tesla CEO late last month.

The X post on Sunday reportedly showed a fake screenshot in which Musk appeared to reply to Stephen Miller, saying, "Just like I took your wife," according to Grok's summary. When asked by an X user to verify the post, the AI chatbot responded that it "likely existed and was deleted." "The screenshot's engagement metrics and context align with Musk's behavior, but its deletion means direct verification is unavailable," Grok noted. "While a fabricated screenshot is possible, the evidence leans toward the post being real but removed, consistent with Musk's pattern of deleting controversial posts." Musk responded with some exasperation to his own chatbot, underscoring that the post was fake.

Earlier, in March, Musk extended a global call for backend engineers to bolster the development and reliability of Grok. On X, Babuschkin posted a call for "outstanding backend engineers to help keep Grok performant and reliable." Resharing the post, Musk highlighted xAI's approach to AI development, writing, "xAI is the only major AI company with an absolute focus on truth, whether politically correct or not."

Musk has championed Grok as an alternative to ChatGPT, which was developed by OpenAI, a company Musk helped found in 2015. Three years later, Musk left OpenAI's board of directors, publicly pointing to "a potential future conflict [of interest]" given Tesla's AI developments. Musk has long been publicly bitter toward OpenAI CEO Sam Altman, particularly after the company made massive breakthroughs in generative AI with its large language models.

What is Grok?

Grok is an artificial intelligence (AI) assistant and chatbot created in 2023 by xAI, a startup owned by Elon Musk. Like ChatGPT and other tools, it can generate text and images and engage in conversations with users. Unlike other chatbots, though, it can access information in real time through the web and X (formerly Twitter), and it is programmed to respond to edgy and provocative questions with witty, "rebellious" answers.

Headquartered in the San Francisco Bay Area in California, xAI started with a team of 12 people, including Elon Musk. On its website, xAI says, "Our approach to rapid development and iteration allows us to innovate at breakneck speeds. We're not interested in speed for speed's sake—we're here to solve real problems."

Related Articles


India.com
5 hours ago
This man is earning Rs 43227900000 daily, much richer than Mukesh Ambani, Adani, will he beat Musk as world's richest man? His name is...
Tesla CEO Elon Musk is the richest person in the world, with a net worth of $362 billion as of July 26, 2025, according to the Bloomberg Billionaires Index. However, the tech mogul faces a serious challenger for his coveted throne in a close personal friend: Larry Ellison, the co-founder of Oracle, whose wealth has surged by a whopping $500 million (about Rs 4,322.79 crore) per day over the past year.

How could Larry Ellison dethrone Elon Musk as the world's richest man?

According to the Bloomberg Billionaires Index, Elon Musk's net worth has dipped to $362 billion, while Larry Ellison's has surged to $296 billion, making him the second-wealthiest individual on the planet. Ellison's net worth has grown by a staggering $104 billion in the last 12 months, including a $28.4 billion gain in a single day earlier this week. Musk's lead over Ellison has shrunk to just $66 billion, which may seem like a lot, but considering the rate at which the Oracle founder is adding to his wealth, he could potentially overtake the SpaceX boss within weeks, if not days, if his golden run continues. Notably, Ellison added another $2.89 billion to his wealth on Friday, taking his net worth to $296 billion, just four billion short of the elusive $300 billion club.

Who is Larry Ellison?

Larry Ellison is the co-founder of US multinational tech giant Oracle Corporation. The 80-year-old owns a 41 percent stake in the company, and his wealth has grown exponentially due to a recent surge in Oracle's stock price. In June, Ellison's wealth surged by a whopping $41 billion in just two days, including $25 billion in a single day, as per Bloomberg. Ellison co-founded Oracle in 1977 as a database software company, and over the years the firm has transformed into a global cloud computing powerhouse. He currently serves as Chairman and CTO of Oracle.

Interestingly, Larry Ellison is a close friend of Elon Musk and a major investor in Tesla, having previously served on the company's board from December 2018 to August 2022. "I don't know how many people know this… but I am a close friend of Elon Musk and a big investor in Tesla," Ellison revealed in a 2018 interview. The Oracle boss has also come to Musk's defense during several controversies involving the world's richest man.
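A back-of-the-envelope check of these figures (a $66 billion gap, average gains of roughly $0.5 billion per day, and occasional single-day spikes like the $28.4 billion jump) can be sketched as follows. This is a rough illustration only, since it unrealistically assumes a constant daily gain and a flat net worth for Musk:

```python
# Rough estimate of how long Ellison would need to close the wealth gap,
# assuming (unrealistically) a constant daily gain and no change on Musk's side.
def days_to_overtake(gap_billion: float, daily_gain_billion: float) -> float:
    return gap_billion / daily_gain_billion

# At the average pace cited (~$0.5B/day over the past year):
print(days_to_overtake(66, 0.5))   # 132 days, i.e. months rather than days

# At the single-day spike cited ($28.4B in one day):
print(days_to_overtake(66, 28.4))  # a little over 2 days
```

At the average pace the overtake would take about four months, so the "weeks, if not days" scenario depends on Oracle's stock repeating its recent single-day surges.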


Time of India
5 hours ago
Tesla gets multiple shareholder proposals related to investment in xAI
Tesla said on Friday it had received a number of shareholder proposals regarding the company's plan to invest in CEO Elon Musk's artificial intelligence startup xAI. Musk ruled out a merger between Tesla and xAI earlier in July, but said he planned to hold a shareholder vote on an investment in the startup by the automaker.

The proposals come amid significant funding activity for xAI this year. The startup completed a $5 billion debt raise alongside a separate $5 billion strategic equity investment, Morgan Stanley said last month. Musk has pursued an integration strategy across his business empire, with xAI acquiring social media platform X in March for $33 billion to enhance its chatbot training capabilities, while also integrating the Grok chatbot into Tesla vehicles.

The potential investment discussion comes as Tesla faces various challenges, including Musk's political activities, which have impacted demand for its electric vehicles and contributed to a 22% drop in its shares this year. "Shareholders are welcome to put forward any shareholder proposals they'd like," Musk said on Tesla's quarterly earnings call on Wednesday.

Tesla, which will hold its annual shareholder meeting on November 6, said it would include only one proposal on each topic in its proxy statement, in accordance with SEC rules. Earlier this month, the board set July 31 as the deadline for the submission of shareholder proposals to be included in the proxy statement.


Mint
6 hours ago
The new chips designed to solve AI's energy problem
"I can't wrap my head around it," says Andrew Wee, who has been a Silicon Valley data-center and hardware guy for 30 years. The "it" that has him so befuddled, irate even, is the projected power demands of future AI supercomputers, the ones that are supposed to power humanity's great leap forward.

Wee held senior roles at Apple and Meta, and is now head of hardware for cloud provider Cloudflare. He believes the current growth in energy required for AI, which the World Economic Forum estimates will be 50% a year through 2030, is unsustainable. "We need to find technical solutions, policy solutions and other solutions that solve this collectively," he says.

To that end, Wee's team at Cloudflare is testing a radical new kind of microchip from Positron, a startup founded in 2023 that has just announced a fresh round of $51.6 million in investment. These chips have the potential to be much more energy efficient than ones from industry leader Nvidia at the all-important task of inference, the process by which AI responses are generated from user prompts. While Nvidia chips will continue to be used to train AI for the foreseeable future, more efficient inference could collectively save companies tens of billions of dollars, and a commensurate amount of energy.

There are at least a dozen chip startups battling to sell cloud-computing providers the custom-built inference chips of the future. Then there are the well-funded, multiyear efforts by Google, Amazon and Microsoft to build inference-focused chips to power their own internal AI tools, and to sell to others through their cloud services. The intensity of these efforts, and the scale of the cumulative investment in them, show just how desperate every tech giant, along with many startups, is to provide AI to consumers and businesses without paying the "Nvidia tax": Nvidia's approximately 60% gross margin, the price of buying the company's hardware.

Nvidia is well aware of the growing importance of inference and of concerns about AI's appetite for energy, says Dion Harris, a senior director at Nvidia who sells the company's biggest customers on the promise of its latest AI hardware. Nvidia's latest Blackwell systems are between 25 and 30 times as efficient at inference, per watt of energy pumped into them, as the previous generation, he adds.

To accomplish their goals, makers of novel AI chips are using a strategy that has worked time and again: they are redesigning their chips from the ground up, expressly for the new class of tasks that is suddenly so important in computing. In the past, that was graphics, and that's how Nvidia built its fortune. Only later did it become apparent that graphics chips could be repurposed for AI, and arguably it has never been a perfect fit.

Jonathan Ross is chief executive of chip startup Groq, and previously headed Google's AI chip development program. He says he founded Groq (no relation to Grok, Elon Musk's xAI chatbot) because he believed there was a fundamentally different way of designing chips, solely to run today's AI models. Groq claims its chips can deliver AI much faster than Nvidia's best chips, for between one-third and one-sixth as much power, thanks to a design that embeds memory in the chip itself rather than keeping it separate. While how Groq's chips perform depends on any number of factors, the company's claim that it can deliver inference at a lower cost than is possible with Nvidia's systems is credible, says Jordan Nanos, an analyst at SemiAnalysis who spent a decade working for Hewlett Packard Enterprise.

Positron is taking a different approach to delivering inference more quickly. The company, which has already delivered chips to customers including Cloudflare, has created a simplified chip with a narrower range of abilities, in order to perform those tasks more quickly.

The company's latest funding round came from Valor Equity Partners, Atreides Management and DFJ Growth, and brings total investment in the company to $75 million. Positron's next-generation system will compete with Nvidia's next-generation system, known as Vera Rubin. Based on Nvidia's road map, Positron's chips will have two to three times better performance per dollar, and three to six times better performance per unit of electricity pumped into them, says Positron CEO Mitesh Agrawal.

Competitors' claims about beating Nvidia at inference often don't reflect all the things customers take into account when choosing hardware, says Harris. Flexibility matters, and what companies do with their AI chips can change as new models and use cases become popular. Nvidia's customers "are not necessarily persuaded by the more niche applications of inference," he adds.

Cloudflare's initial tests of Positron's chips were encouraging enough to convince Wee to put them into the company's data centers for longer-term tests, which are continuing. It's something only one other chip startup's hardware has warranted, he says. "If they do deliver the advertised metrics, we will open the spigot and allow them to deploy in much larger numbers globally," he adds.

By commoditizing AI hardware and allowing Nvidia's customers to switch to more-efficient systems, the forces of competition might bend the curve of future AI power demand, says Wee. "There is so much FOMO right now, but eventually, I think reason will catch up with reality," he says.

One truism of the history of computing is that whenever hardware engineers figure out how to do something faster or more efficiently, coders and consumers figure out how to use all of the new performance gains, and then some. Mark Lohmeyer is vice president of AI and computing infrastructure for Google Cloud, where he provides both Google's own custom AI chips and Nvidia's to Google and its cloud customers.

He says that consumer and business adoption of new, more demanding AI models means that no matter how much more efficiently his team can deliver AI, there is no end in sight to growth in demand for it. Like nearly all other big AI providers, Google is making efforts to find radical new ways to produce energy to feed that AI, including both nuclear power and fusion.

The bottom line: while new chips might help individual companies deliver AI more efficiently, the industry as a whole remains on track to consume ever more energy. As a recent report from Anthropic notes, that means energy production, not data centers and chips, could be the real bottleneck for future development of AI.

Write to Christopher Mims at
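The efficiency claims traded back and forth above all reduce to two ratios: inference throughput per watt and throughput per dollar. A minimal sketch of how a buyer might normalize competing systems on those two axes, using made-up placeholder numbers rather than any vendor's actual specifications:

```python
# Compare AI accelerators on inference throughput per watt and per dollar.
# All figures below are illustrative placeholders, not vendor specifications.
from dataclasses import dataclass


@dataclass
class System:
    name: str
    tokens_per_sec: float  # inference throughput
    watts: float           # power draw under load
    price_usd: float       # hardware cost

    @property
    def tokens_per_joule(self) -> float:
        return self.tokens_per_sec / self.watts

    @property
    def throughput_per_dollar(self) -> float:
        return self.tokens_per_sec / self.price_usd


gpu = System("general-purpose GPU", tokens_per_sec=10_000, watts=1_000, price_usd=40_000)
asic = System("inference-only chip", tokens_per_sec=10_000, watts=250, price_usd=20_000)

watt_advantage = asic.tokens_per_joule / gpu.tokens_per_joule          # 4.0
dollar_advantage = asic.throughput_per_dollar / gpu.throughput_per_dollar  # 2.0
print(f"{asic.name}: {watt_advantage:.0f}x per watt, {dollar_advantage:.0f}x per dollar")
```

As the article notes, such single-number comparisons leave out flexibility: a specialized chip's advantage can evaporate if the dominant model architecture changes, which is part of Nvidia's counterargument.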