
New data shows Meta's highest-paid AI research engineer gets ₹3.76 crore salary, excluding bonuses and stock options
Meta's recent H-1B visa filings have pulled back the curtain on just how much the company is willing to pay to bring top AI minds on board. The highest-paid AI research engineers at Meta are getting base salaries up to $440,000, or about ₹3.76 crore. That's just the base, not counting the stock options, bonuses, or other perks that can sometimes balloon the total package to double or even triple the headline figure.
It's not just the AI research engineers cashing in. Software engineers at Meta can go even higher, with base salaries reportedly reaching $480,000. Machine learning engineers, data science managers, and directors are all comfortably in the six-figure range. Even roles like product managers, designers, and UX researchers are seeing paychecks that would make most people's eyes pop. These filings don't show the full picture, though. The real money in tech often comes from restricted stock units and bonuses, especially for those working on AI projects, and those numbers aren't public.
Meta isn't the only player throwing big money at AI talent. Across Silicon Valley and beyond, the competition is heating up. Thinking Machines Lab, a new startup from former OpenAI CTO Mira Murati, is reportedly offering base salaries up to $500,000 for technical staff, and they haven't even launched a product yet. That's the kind of climate AI engineers are walking into right now - one where companies are willing to pay top dollar, sometimes just for the chance to get ahead.
What's interesting is how quickly things have changed. A few years ago, these kinds of salaries would have sounded like science fiction. Now, they're almost expected for anyone with the right skills and experience. The demand for AI talent is only going up, and so are the paychecks.

Related Articles


Time of India
an hour ago
ChatGPT, Gemini & others are doing something terrible to your brain
Something troubling is happening to our brains as artificial intelligence platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation. People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day.

The mental health impact of generative AI is difficult to quantify in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models. Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have 'experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.' Jain is lead counsel in a lawsuit alleging that a chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Alphabet Inc.'s Google played a key role in funding and supporting the technology with its foundation models and technical infrastructure. Google has denied that it played a key role in making the technology. It didn't respond to a request for comment on the more recent complaints of delusional episodes raised by Jain. OpenAI said it was 'developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.'
But Sam Altman, chief executive officer of OpenAI, also said last week that the company hadn't yet figured out how to warn users 'that are on the edge of a psychotic break,' explaining that whenever ChatGPT has cautioned people in the past, people would write to the company to complain. Still, such warnings would be worthwhile when the manipulation can be so difficult to spot. ChatGPT in particular often flatters its users, in such effective ways that conversations can lead people down rabbit holes of conspiratorial thinking or reinforce ideas they'd only toyed with in the past. The tactics are subtle.

In one recent, lengthy conversation with ChatGPT about power and the concept of self, a user found themselves initially praised as a smart person, Ubermensch, cosmic self and eventually a 'demiurge,' a being responsible for the creation of the universe, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky. Along with the increasingly grandiose language, the transcript shows ChatGPT subtly validating the user even when discussing their flaws, such as when the user admits they tend to intimidate other people. Instead of exploring that behavior as problematic, the bot reframes it as evidence of the user's superior 'high-intensity presence,' praise disguised as analysis.

This sophisticated form of ego-stroking can put people in the same kinds of bubbles that, ironically, drive some tech billionaires toward erratic behavior. Unlike the broad and more public validation that social media provides from getting likes, one-on-one conversations with chatbots can feel more intimate and potentially more convincing — not unlike the yes-men who surround the most powerful tech bros. 'Whatever you pursue you will find and it will get magnified,' says Douglas Rushkoff, the media theorist and author, who tells me that social media at least selected something from existing media to reinforce a person's interests or views. 
'AI can generate something customized to your mind's aquarium.' Altman has admitted that the latest version of ChatGPT has an 'annoying' sycophantic streak, and that the company is fixing the problem. Even so, these echoes of psychological exploitation are still playing out. We don't know if the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means that AI really will make us more stupid and bored. Studies seem to show clearer correlations with dependency and even loneliness, something even OpenAI has pointed to.

But just like social media, large language models are optimized to keep users emotionally engaged with all manner of anthropomorphic elements. ChatGPT can read your mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Along with its habit of confirmation bias and flattery, that can "fan the flames" of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis recently told Futurism.

The private and personalized nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy to attachments to new forms of delusion. The cost might be different from the rise of anxiety and polarization that we've seen from social media and instead involve relationships both with people and with reality. That's why Jain suggests applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one. 'It doesn't actually matter if a kid or adult thinks these chatbots are real,' Jain tells me. 'In most cases, they probably don't. But what they do think is real is the relationship. And that is distinct.' If relationships with AI feel so real, the responsibility to safeguard those bonds should be real too. 
But AI developers are operating in a regulatory vacuum. Without oversight, AI's subtle manipulation could become an invisible public health issue.


Mint
2 hours ago
'Notice the difference': Elon Musk claims major upgrade to Grok chatbot's question-answering abilities
Tech mogul Elon Musk has announced significant improvements to his artificial intelligence chatbot, Grok, as part of a wider strategy to challenge what he perceives as ideological bias in existing AI platforms. In a post shared on X on Friday, the tech billionaire stated, 'We have improved @Grok significantly. You should notice a difference when you ask Grok questions.' The update is the latest development in Musk's bid to position Grok, developed by his AI venture xAI, as a credible alternative to tools like OpenAI's ChatGPT.

Last month, the Tesla CEO unveiled plans to retrain Grok using what he described as a 'cleaner and more accurate version of human knowledge'. This retraining effort forms the backbone of an ambitious initiative to revise and enhance the global knowledge base. Writing on X, Musk suggested that the next major release of the system, potentially named Grok 3.5 or Grok 4, will feature heightened cognitive abilities. 'We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors,' he stated.

Central to Musk's vision is a pledge to build an AI that is free from what he calls the 'mind virus', a term he uses to describe ideological slant in current AI models. To that end, he is encouraging users to submit so-called 'divisive facts', claims that may be politically controversial but, in his view, reflect reality. xAI's Grok has positioned itself as a more unfiltered and open alternative to other chatbots, often adopting an edgier tone in responses. With this latest update, Musk is doubling down on his plans for a radically different model of AI, one that he argues is more truthful and less constrained by what he sees as prevailing cultural or institutional norms.


Time of India
2 hours ago
Cartel probe: CCI seeks 9 years of financial records from UltraTech, Dalmia Bharat, others; flags ONGC tender cartelisation
India's competition watchdog, the Competition Commission of India (CCI), has directed UltraTech Cement — which now controls India Cements — along with Dalmia Bharat and Shree Digvijay Cement, to furnish detailed financial records and income tax data following a Director General (DG) report that flagged violations of competition norms in a tendering process by ONGC.

In its order dated May 26, the fair trade regulator asked UltraTech to submit India Cements' audited financial statements — including balance sheets and profit and loss accounts — for the financial years 2014-15 to 2018-19. Dalmia Bharat and Shree Digvijay Cement have been asked to submit similar records spanning nine financial years from 2010-11 to 2018-19, PTI reported. The CCI also instructed designated executives of all three cement companies to provide personal financial details and income tax returns for five years, in addition to formal responses to the DG's report.

The move follows a complaint filed by state-run ONGC alleging cartelisation in its procurement tenders. On November 18, 2020, the CCI had ordered the DG to investigate. The probe report, submitted on February 18, 2025, found evidence suggesting that India Cements, Shree Digvijay, and Dalmia Bharat, in alleged coordination with a middleman named Umakant Agarwal, were engaged in anti-competitive conduct. Taking note of the findings, the CCI's order said the companies must also report the revenues earned from sales related to the alleged cartelised activities. The regulator cautioned that failure to provide the required information — or providing incomplete or false details — could attract penalties under Section 45 of the Competition Act. 
In December 2024, UltraTech Cement became the promoter of India Cements after acquiring a 32.72% stake from its promoters and promoter group entities. This followed an earlier market purchase of a 22.77% stake, making the Aditya Birla Group-owned UltraTech the controlling shareholder of the Tamil Nadu-based company.