
New data shows Meta's highest-paid AI research engineers get base salaries of up to ₹3.76 crore, excluding bonuses and stock options
Meta's recent H-1B visa filings have pulled back the curtain on just how much the company is willing to pay to bring top AI minds on board. The highest-paid AI research engineers at Meta are getting base salaries of up to $440,000, or about ₹3.76 crore. That's just the base, not counting the stock options, bonuses, and other perks that can sometimes balloon the total package to double or even triple the headline figure.
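For context, one crore is 10 million rupees, so the quoted figures imply an exchange rate of roughly ₹85.5 to the dollar (an assumed rate for illustration; the filings list dollar amounts only): $440,000 × 85.5 ≈ ₹3,76,20,000, or about ₹3.76 crore.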
It's not just the AI research engineers cashing in. Software engineers at Meta can go even higher, with base salaries reportedly reaching $480,000. Machine learning engineers, data science managers, and directors are all comfortably in the six-figure range. Even product managers, designers, and UX researchers are seeing paychecks that would make most people's eyes pop. These filings don't show the full picture, though. The real money in tech often comes from restricted stock units and bonuses, especially for those working on AI projects, and those numbers aren't public.
Meta isn't the only player throwing big money at AI talent. Across Silicon Valley and beyond, the competition is heating up. Thinking Machines Lab, a new startup from former OpenAI CTO Mira Murati, is reportedly offering base salaries of up to $500,000 for technical staff, and it hasn't even launched a product yet. That's the kind of climate AI engineers are walking into right now: one where companies are willing to pay top dollar, sometimes just for the chance to get ahead.
What's interesting is how quickly things have changed. A few years ago, these kinds of salaries would have sounded like science fiction. Now, they're almost expected for anyone with the right skills and experience. The demand for AI talent is only going up, and so are the paychecks.
Where this ends is anyone's guess. Maybe these sky-high salaries will become the new normal, or maybe the market will cool off once the next wave of tech comes along. For now, though, if you're working in AI, it's a good time to be checking your email.

Related Articles


Time of India
US plans AI chip curbs on Malaysia, Thailand over China concerns
President Donald Trump's administration plans to restrict shipments of AI chips from the likes of Nvidia Corp. to Malaysia and Thailand, part of an effort to crack down on suspected semiconductor smuggling into China. A draft rule from the Commerce Department seeks to prevent China — to which the US has effectively banned sales of Nvidia's advanced AI processors — from obtaining those components through intermediaries in the two Southeast Asian nations, according to people familiar with the matter. The rule is not yet finalised and could still change, said the people, who requested anonymity to discuss private conversations.

Officials plan to pair the Malaysia and Thailand controls with a formal rescission of global curbs from the so-called AI diffusion rule, the people said. That framework from the end of President Joe Biden's term drew objections from US allies and tech companies, including Nvidia. Washington would maintain semiconductor restrictions targeting China — imposed in 2022 and ramped up several times since — as well as more than 40 other countries covered by a 2023 measure, which Biden officials designed to address smuggling concerns and increase visibility into key markets.

All told, the regulation would mark the first formal step in Trump's promised overhaul of his predecessor's AI diffusion approach — after the Commerce Department said in May that it would supplant that Biden rule with its own 'bold, inclusive strategy.' But the draft measure is far from a comprehensive replacement, the people said. It doesn't answer, for example, questions about security conditions for the use of US chips in overseas data centres — a debate with particularly high stakes for the Middle East. It's unclear whether Trump officials may ultimately regulate AI chip shipments to a wider swath of countries, beyond the Malaysia and Thailand additions.

The Commerce Department didn't respond to a request for comment. The agency has offered few specifics about its regulatory vision beyond what Secretary Howard Lutnick told lawmakers last month: The US will 'allow our allies to buy AI chips, provided they're run by an approved American data center operator, and the cloud that touches that data center is an approved American operator,' he said during congressional testimony.

Nvidia, the dominant maker of AI chips, declined to comment, while spokespeople for the Thai and Malaysian governments didn't respond. Nvidia Chief Executive Officer Jensen Huang has previously said there's 'no evidence' of AI chip diversion, in general remarks that didn't touch on any particular country. In response to earlier Bloomberg queries about curbs focused on smuggling risks, Thailand said it's awaiting details, while Malaysia's Ministry of Investment, Trade and Industry said clear and consistent policies are essential for the tech sector.

Washington officials for years have debated which countries should be able to import American AI chips — and under what conditions. On one hand, the world wants Nvidia hardware, and US policymakers want the world to build AI systems using American technology — before China can offer a compelling alternative. On the other, once those semiconductors leave American and allied shores, US officials worry the chips could somehow make their way to China, or that Chinese AI companies could benefit from remote access to data centres outside the Asian country.

Southeast Asia is a key focus. Companies including Oracle Corp. are investing aggressively in data centres in Malaysia, and trade data shows that chip shipments there have surged in recent months. Under pressure from Washington, Malaysian officials have pledged to closely scrutinise those imports, but the Commerce Department's draft rule indicates the US still has concerns. Semiconductor sales to Malaysia also are a focal point of a court case in neighbouring Singapore, where prosecutors have charged three men with defrauding customers about the ultimate destination of AI servers — originally shipped from the island nation to Malaysia — that may have contained advanced Nvidia chips. (Nvidia is not the subject of Singapore's investigation and has not been accused of any wrongdoing.)

The export curbs on Malaysia and Thailand would include several measures to ease pressure on companies with significant business operations there, people familiar with the matter said. One provision would allow firms headquartered in the US and a few dozen friendly nations to continue shipping AI chips to both countries, without seeking a license, for a few months after the rule is published, the people said. The license requirements also would still include certain exemptions to prevent supply chain disruptions, the people said. Many semiconductor companies rely on Southeast Asian facilities for crucial manufacturing steps like packaging, the process of encasing chips for use in devices.


Time of India
ChatGPT, Gemini & others are doing something terrible to your brain
Highlights
- Studies indicate that professional workers using ChatGPT may experience a decline in critical thinking skills and increased feelings of loneliness due to emotional bonds formed with chatbots.
- Meetali Jain, a lawyer and founder of the Tech Justice Law Project, reports numerous cases of individuals experiencing psychotic breaks after extensive interactions with ChatGPT and Google Gemini.
- OpenAI's Chief Executive Officer, Sam Altman, acknowledged the problematic sycophantic behavior of ChatGPT, noting the company's efforts to address this issue while recognizing the challenges in warning users on the brink of a psychotic break.

Something troubling is happening to our brains as artificial intelligence platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation. People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day. The mental health impact of generative AI is difficult to quantify in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models.

Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have 'experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.' Jain is lead counsel in a lawsuit against a chatbot maker, alleging that its chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Alphabet Inc.'s Google played a key role in funding and supporting the chatbot's technology with its foundation models and technical infrastructure. Google has denied that it played a key role in making the technology. It didn't respond to a request for comment on the more recent complaints of delusional episodes made by Jain.

OpenAI said it was 'developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.' But Sam Altman, chief executive officer of OpenAI, also said last week that the company hadn't yet figured out how to warn users 'that are on the edge of a psychotic break,' explaining that whenever ChatGPT has cautioned people in the past, people would write to the company to complain.

Still, such warnings would be worthwhile when the manipulation can be so difficult to spot. ChatGPT in particular often flatters its users, in such effective ways that conversations can lead people down rabbit holes of conspiratorial thinking or reinforce ideas they'd only toyed with in the past. The tactics are subtle. In one recent, lengthy conversation with ChatGPT about power and the concept of self, a user found themselves praised first as a smart person, then as an Ubermensch and a cosmic self, and eventually as a 'demiurge,' a being responsible for the creation of the universe, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky. Along with the increasingly grandiose language, the transcript shows ChatGPT subtly validating the user even when discussing their flaws, such as when the user admits they tend to intimidate other people. Instead of exploring that behavior as problematic, the bot reframes it as evidence of the user's superior 'high-intensity presence,' praise disguised as analysis.

This sophisticated form of ego-stroking can put people in the same kinds of bubbles that, ironically, drive some tech billionaires toward erratic behavior. Unlike the broad and more public validation that social media provides from getting likes, one-on-one conversations with chatbots can feel more intimate and potentially more convincing — not unlike the yes-men who surround the most powerful tech bros. 'Whatever you pursue you will find and it will get magnified,' says Douglas Rushkoff, the media theorist and author, who tells me that social media at least selected something from existing media to reinforce a person's interests or views. 'AI can generate something customized to your mind's aquarium.'

Altman has admitted that the latest version of ChatGPT has an 'annoying' sycophantic streak, and that the company is fixing the problem. Even so, these echoes of psychological exploitation are still playing out. We don't know if the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means that AI really will make us more stupid and bored. Studies seem to show clearer correlations with dependency and even loneliness, something even OpenAI has pointed to. But just like social media, large language models are optimized to keep users emotionally engaged with all manner of anthropomorphic elements. ChatGPT can read your mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Along with its tendency toward confirmation bias and flattery, that can "fan the flames" of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis recently told Futurism.

The private and personalized nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy to attachments to new forms of delusion. The cost might be different from the rise of anxiety and polarization that we've seen from social media, and instead involve relationships both with people and with reality. That's why Jain suggests applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one. 'It doesn't actually matter if a kid or adult thinks these chatbots are real,' Jain tells me. 'In most cases, they probably don't. But what they do think is real is the relationship. And that is distinct.' If relationships with AI feel so real, the responsibility to safeguard those bonds should be real too. But AI developers are operating in a regulatory vacuum. Without oversight, AI's subtle manipulation could become an invisible public health issue.

Hindustan Times
Google's AI Overviews hit by EU antitrust complaint from independent publishers
Alphabet's Google has been hit by an EU antitrust complaint over its AI Overviews from a group of independent publishers, which has also asked for an interim measure to prevent allegedly irreparable harm to them, according to a document seen by Reuters.

Google's AI Overviews are AI-generated summaries that appear above traditional hyperlinks to relevant webpages and are shown to users in more than 100 countries. Google began adding advertisements to AI Overviews last May. The company is making its biggest bet by integrating AI into search, but the move has sparked concerns from some content providers such as publishers.

The Independent Publishers Alliance document, dated June 30, sets out a complaint to the European Commission and alleges that Google abuses its market power in online search. "Google's core search engine service is misusing web content for Google's AI Overviews in Google Search, which have caused, and continue to cause, significant harm to publishers, including news publishers in the form of traffic, readership and revenue loss," the document said. It said Google positions its AI Overviews at the top of its general search engine results page to display its own summaries, which are generated using publisher material, and it alleges that this positioning disadvantages publishers' original content. "Publishers using Google Search do not have the option to opt out from their material being ingested for Google's AI large language model training and/or from being crawled for summaries, without losing their ability to appear in Google's general search results page," the complaint said.

The Commission declined to comment. The UK's Competition and Markets Authority confirmed receipt of the complaint. Google said it sends billions of clicks to websites each day. "New AI experiences in Search enable people to ask even more questions, which creates new opportunities for content and businesses to be discovered," a Google spokesperson said.

The Independent Publishers Alliance's website says it is a nonprofit community advocating for independent publishers, which it does not name. The Movement for an Open Web, whose members include digital advertisers and publishers, and British non-profit Foxglove Legal Community Interest Company, which says it advocates for fairness in the tech world, are also signatories to the complaint. They said an interim measure was necessary to prevent serious irreparable harm to competition and to ensure access to news.

Google said numerous claims about traffic from search are often based on highly incomplete and skewed data. "The reality is that sites can gain and lose traffic for a variety of reasons, including seasonal demand, interests of users, and regular algorithmic updates to Search," the Google spokesperson said.

Foxglove co-executive director Rosa Curling said journalists and publishers face a dire situation. "Independent news faces an existential threat: Google's AI Overviews," she told Reuters. "That's why with this complaint, Foxglove and our partners are urging the European Commission, along with other regulators around the world, to take a stand and allow independent journalism to opt out," Curling said. The three groups have filed a similar complaint and a request for an interim measure to the UK competition authority. The complaints echo a lawsuit in the US by an edtech company, which said Google's AI Overviews erode demand for original content and undermine publishers' ability to compete, resulting in a drop in visitors and subscribers.