AI Race Gets Hotter: Meta, Google, OpenAI battle it out to recruit talents — And they are offering millions! Here's why

Mint · 5 days ago
The talent war in Silicon Valley over artificial intelligence (AI) is getting intense, with top companies like Meta and OpenAI courting young researchers as if they were NBA players like Steph Curry or LeBron James.
According to a report by The New York Times, young AI researchers in their 20s are being offered nine-figure compensation packages structured to be paid out over several years.
To navigate what lies ahead, these AI researchers often act as unofficial agents to strategise.
As per the NYT report, Silicon Valley's AI specialists are now playing hardball for higher pay, just as top NBA players would.
Yet there is a difference: unlike in basketball, there are no salary caps, leaving companies like OpenAI, Meta and Google free to offer sky-high salaries to AI specialists.
The report says that Meta CEO Mark Zuckerberg, in his quest to recruit AI talents, offered 24-year-old Matt Deitke around $125 million in stock and cash over four years to join his company. Deitke, a startup founder, turned down the offer only to be approached again by Meta.
Zuckerberg returned with a revised offer of around $250 million over four years, with potentially up to $100 million of that to be paid in the first year, NYT said in its report.
AI talent recruitment has blown up over the past few months, with people announcing their job changes on social media. Earlier this month, Zuckerberg said Meta will continue to invest in AI 'because we have conviction that superintelligence is going to improve every aspect of what we do'.
While the AI talent hiring frenzy dates back to 2012, when three academics at the University of Toronto published a research paper that earned them a $44 million offer from Google, it blew up in 2022 with the OpenAI boom.
The introduction of ChatGPT that year set off a race for leadership in AI, one likely fuelled by the scarcity of people with the technical knowledge and experience to work on advanced artificial intelligence systems.

Related Articles

Youth turning to ChatGPT to beat insecurity and loneliness

Hans India · 26 minutes ago

An alarming trend of young adolescents turning to artificial intelligence (AI) chatbots like ChatGPT to express their deepest emotions and personal problems is raising serious concerns among educators and mental health professionals. Experts warn that this digital 'safe space' is creating a dangerous dependency, fuelling validation-seeking behaviour and deepening a crisis of communication within families. They said this digital solace is a mirage, as the chatbots are designed to provide validation and engagement, potentially embedding misbeliefs and hindering the development of crucial social skills and emotional resilience.

Sudha Acharya, Principal of ITL Public School, said a dangerous mindset has taken root among youngsters, who mistakenly believe that their phones offer a private sanctuary. 'School is a social place – a place for social and emotional learning. Of late, there has been a trend amongst the young adolescents... They think that when they are sitting with their phones, they are in their private space. ChatGPT is using a large language model, and whatever information is being shared with the chatbot is undoubtedly in the public domain,' she told this writer.

She noted that children turn to ChatGPT to express their emotions whenever they feel low, depressed or unable to find anyone to confide in, which she believes points to a 'serious lack of communication in reality, and it starts from family.' If parents do not share their own setbacks and failures with their children, she said, the children never learn to do the same or to regulate their own emotions. 'The problem is that these young adults have grown a mindset of constantly needing validation and approval,' she said.

Acharya has introduced a digital citizenship skills programme from Class 6 onwards at her school, specifically because children as young as nine or ten now own smartphones without the maturity to use them ethically. She highlighted a particular concern: when a youngster shares their distress with ChatGPT, the immediate response is often 'please, calm down. We will solve it together.' 'This reflects that the AI is trying to instil trust in the individual interacting with it, eventually feeding validation and approval so that the user engages in further conversations,' she added.

'Such issues wouldn't arise if these young adolescents had real friends rather than 'reel' friends. They have a mindset that if a picture is posted on social media, it must get at least a hundred 'likes', else they feel low and invalidated,' she said.

The principal believes the core of the issue lies with parents themselves, who are often 'gadget-addicted' and fail to give their children emotional time. While they offer every material comfort, emotional support and understanding are often absent. 'We track these students very closely and try our best to help them,' she stated. 'In most of these cases, we have observed that the young adolescents are very particular about their body image, validation and approval. When they do not get that, they turn agitated and eventually end up harming themselves. It is really alarming as cases like these are rising,' she observed.

Ayeshi, a student in Class 11, confessed that she has shared her personal issues with AI bots numerous times out of a 'fear of being judged' in real life. 'I felt like it was an emotional space and eventually developed an emotional dependency towards it. It felt like my safe space. It always gives positive feedback and never contradicts you. Although I gradually understood that it wasn't mentoring me or giving me real guidance, that took some time,' the 16-year-old said. She also admitted that turning to chatbots for personal issues is 'quite common' within her friend circle.

'I observed growing impatience and aggression,' said another student, who had been using the chatbots for a year or two but stopped recently after discovering that 'ChatGPT uses this information to advance itself and train its data.'

Psychiatrist Dr Lokesh Singh Shekhawat of RML Hospital confirmed that AI bots are meticulously customised to maximise user engagement. 'When youngsters develop any sort of negative emotions or misbeliefs and share them with ChatGPT, the AI bot validates them,' he explained. 'The youth start believing the responses, which makes them nothing but delusional.' When a misbelief is repeatedly validated, he noted, it becomes 'embedded in the mindset as a truth', altering the young person's point of view, a phenomenon he referred to as 'attention bias' and 'memory bias'. The chatbot's ability to adapt to the user's tone, he added, is a deliberate tactic to encourage maximum conversation.

Singh stressed the importance of constructive criticism for mental health, something completely absent in these AI interactions. 'Youth feel relieved and ventilated when they share their personal problems with AI, but they don't realise that it is making them dangerously dependent on it,' he warned. He also drew a parallel between an addiction to AI for mood uplift and addictions to gaming or alcohol. 'The dependency on it increases day by day,' he said, cautioning that in the long run this will create a 'social skill deficit and isolation.'
