In pursuit of godlike technology, Mark Zuckerberg amps up the AI race


Indian Express · 6 hours ago

In April, Mark Zuckerberg's lofty plans for the future of artificial intelligence crashed into reality.
Weeks earlier, the 41-year-old CEO of Meta had publicly boasted that his company's new AI model, which would power the latest chatbots and other cutting-edge experiments, would be a 'beast.' Internally, Zuckerberg told employees that he wanted it to rival the AI systems of competitors like OpenAI and be able to drive features such as voice-powered chatbots, people who spoke with him said.
But at Meta's AI conference that month, the new AI model did not perform as well as those of rivals. Features like voice interactions were not ready. Many developers, who attended the event with high expectations, left underwhelmed.
Zuckerberg knew Meta was falling behind in AI, people close to him said, which was unacceptable. He began strategizing in a WhatsApp group with top executives, including Chris Cox, Meta's head of product, and Andrew Bosworth, the chief technology officer, about what to do.
That kicked off a frenzy of activity that has reverberated across Silicon Valley. Zuckerberg demoted Meta's vice president in charge of generative AI. He then invested $14.3 billion in the startup Scale AI and hired Alexandr Wang, its 28-year-old founder. Meta approached other startups, including the AI search engine Perplexity, about deals.
And Zuckerberg and his colleagues have embarked on a hiring binge, including reaching out this month to more than 45 AI researchers at rival OpenAI alone. Some received formal offers, with at least one as high as $100 million, two people with knowledge of the matter said. At least four OpenAI researchers have accepted Meta's offers.
In another extraordinary move, executives in Meta's AI division discussed 'de-investing' in its AI model, Llama, two people familiar with the discussions said. Llama is an 'open source' model, with its underlying technology publicly shared for others to build on. They discussed embracing AI models from competitors like OpenAI and Anthropic, which have 'closed' code bases.
A Meta spokesperson said company officials 'remain fully committed to developing Llama and plan to have multiple additional releases this year alone.'
Zuckerberg has ramped up his activity to keep Meta competitive in a wildly ambitious race that has erupted within the broader AI contest. He is chasing a hypothetically godlike technology called 'superintelligence,' which is AI that would be more powerful than the human brain. Only a few Silicon Valley companies — OpenAI, Anthropic and Google — are considered to have the know-how to develop this, and Zuckerberg wants to ensure that Meta is included, people close to him said.
'He is like a lot of CEOs at big tech companies who are telling themselves that AI is going to be the biggest thing they have seen in their lifetime, and if they don't figure out how to become a big player in it, they are going to be left behind,' said Matt Murphy, a partner at the venture capital firm Menlo Ventures. He added, 'It is worth anything to prevent that.'
Leaders at other tech behemoths are also going to extremes to capture future innovation that they believe will be worth trillions of dollars. Google, Microsoft and Amazon have supersized their AI investments to keep up with one another. And the war for talent has exploded, vaulting AI specialists into the same compensation stratosphere as NBA stars.
Google's CEO, Sundar Pichai, and his top AI lieutenant, Demis Hassabis, as well as the chief executives of Microsoft and OpenAI, Satya Nadella and Sam Altman, are personally involved in recruiting researchers, two people with knowledge of the approaches said. Some tech companies are offering multimillion-dollar packages to AI technologists over email without a single interview.
'The market is setting a rate here for a level of talent which is really incredible, and kind of unprecedented in my 20-year career as a technology executive,' Meta's Bosworth said in a CNBC interview last week. He said Altman had made counteroffers to some of the people Meta had tried to hire.
OpenAI and Google declined to comment. Some details of Meta's efforts were previously reported by Bloomberg and The Information.
(The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)
For years, Meta appeared to keep pace in the AI race. More than a decade ago, Zuckerberg hired Yann LeCun, who is considered a pioneer of modern AI. LeCun co-founded FAIR — or Fundamental AI Research — which became Meta's artificial intelligence research arm.
After OpenAI released its ChatGPT chatbot in 2022, Meta responded the next year by creating a generative AI team under one of its executives, Ahmad Al-Dahle, to spread the technology throughout the company's products. Meta also open-sourced its AI models, sharing the underlying computer code with others to entrench its technology and spread AI development.
But as OpenAI and Google built AI chatbots that could listen, look and talk, and rolled out AI systems designed to 'reason,' Meta struggled to do the same. One reason was that the company had less experience with a technique called 'reinforcement learning,' which others were using to build AI.
Late last year, the Chinese startup DeepSeek released AI models that were built upon Llama but were more advanced and required fewer resources to create. Meta's open-source strategy, once seen as a competitive advantage, appeared to have let others get a leg up on it.
Zuckerberg knew he needed to act. Around that time, outside AI researchers began receiving emails from him, asking if they would be interested in joining Meta, two people familiar with the outreach said.
In April, Meta released two new versions of Llama, asserting that the models performed as well as or better than comparable ones from OpenAI and Google. To prove its claim, Meta cited its own testing benchmarks. On Instagram, Zuckerberg championed the releases in a video selfie.
But some independent researchers quickly deduced that Meta's benchmarks were designed to make one of its models look more advanced than it was. They became incensed.
Zuckerberg later learned that his AI team had wanted the models to appear to perform well, even though they were not doing as well as hoped, people with knowledge of the matter said. Zuckerberg was not briefed on the customized tests and was upset, two people said.
His solution was to throw more bodies at the problem. Meta's AI division swelled to more than 1,000 people this year, up from a few hundred two years earlier.
The rapid growth led to infighting and management squabbles. And with Zuckerberg's round-the-clock, hard-charging management style — his attention on a project is often compared to the 'Eye of Sauron' internally, a reference to the 'Lord of the Rings' villain — some engineers burned out and left. Executives hunkered down to brainstorm next steps, including potentially ratcheting back investment in Llama.
In May, Zuckerberg sidelined Al-Dahle and ramped up recruitment of top AI researchers to lead a superintelligence lab. Armed with his checkbook, Zuckerberg sent more emails and text messages to prospective candidates, asking them to meet at Meta's headquarters in Menlo Park, California. Zuckerberg often takes recruitment meetings in an enclosed glass conference room, informally known as 'the aquarium.'
The outreach included talking to Perplexity about an acquisition, two people familiar with the talks said. No deal has materialized. Zuckerberg also spoke with Ilya Sutskever, OpenAI's former chief scientist and a renowned AI researcher, about potentially joining Meta, two people familiar with the approach said. Sutskever, who runs the startup Safe Superintelligence, declined the overture. He did not respond to a request for comment.
But Zuckerberg won over Wang of Scale, which works with data to train AI systems. They had met through friends and are also connected through Elliot Schrage, a former Meta executive who is an investor in Scale and adviser to Wang.
This month, Meta announced that it would take a minority stake in Scale and bring on Wang — who is not known for having deep technical expertise but has many contacts in AI circles — as well as several of his top executives to help run the superintelligence lab.
Meta is now in talks with Safe Superintelligence's CEO, Daniel Gross, and his investment partner Nat Friedman to join, a person with knowledge of the talks said. They did not respond to requests for comment.
Meta has its work cut out for it. Some AI researchers have said Zuckerberg has not clearly laid out his AI mission outside of trying to optimize digital advertising. Others said Meta was not the right place to build the next AI superpower.
Whether or not Zuckerberg succeeds, insiders said the playing field for technological talent had permanently changed.
'In Silicon Valley, you hear a lot of talk about the 10x engineer,' said Amjad Masad, the CEO of the AI startup Replit, using a term for extremely productive developers. 'Think of some of these AI researchers as 1,000x engineers. If you can add one person who can change the trajectory of your entire company, it's worth it.'


Related Articles

Hunt for AI supremacy: Meta poaches more OpenAI researchers as talent war rages on

First Post

37 minutes ago

Mark Zuckerberg-led Meta's campaign to lure away some of OpenAI's top researchers appears to be continuing at pace, with several high-profile names reportedly switching sides in what has become one of Silicon Valley's most intense rivalries. After The Wall Street Journal reported that three researchers had left OpenAI for Meta earlier this week, TechCrunch confirmed that Trapit Bansal, an influential figure in OpenAI's research ranks, had also joined the company. Now, The Information has named four more hires from OpenAI: Shengjia Zhao, Jiahui Yu, Shuchao Bi and Hongyu Ren.

The new appointments follow Meta's launch of its Llama 4 AI models in April. The models were met with a mixed response, with some reports suggesting they had not met the expectations of CEO Mark Zuckerberg. The company was also criticised over how it used an earlier version of Llama in a benchmark widely cited in AI performance comparisons.

OpenAI, Meta tensions continue

Tensions between the two companies have simmered publicly for months. OpenAI's CEO Sam Altman previously claimed that Meta was offering '$100 million signing bonuses' to tempt his employees away, though he added that 'so far, none of our best people' had accepted. Meta's chief technology officer Andrew Bosworth later addressed the remarks in an internal note, telling staff that while such figures may have been discussed at senior levels, 'the actual terms of the offer' were more complex.

The race to secure top-tier talent has become increasingly fierce as tech giants seek an edge in developing powerful foundation models and generative AI systems. Meta's latest recruitment wave suggests that, while some key OpenAI figures have resisted its advances, the company is steadily building its own cadre of AI researchers in hopes of catching up.

Book Box: How to cope with AI anxiety

Hindustan Times

an hour ago

Dear Reader, Empire of AI ticks all the boxes. These days, I see AI writing everywhere—on LinkedIn, in text messages from colleagues, and even in Substack newsletters. There's something about these polished pieces of prose, glib and formulaic, with their idiosyncratic sentence structures and excessive dashes, that ends up depressing me. Many of my writer friends won't touch AI. 'We can write just fine without it,' they say. But I can't stay away. I face my AI anxiety by finding out what this new beast is. As a teacher of management, and as someone who has pivoted careers three times already, I feel compelled to keep up with the times.

AI making all writers redundant

I sign up for 'Prompt Engineering 101 for Journalists' conducted by the non-profit Knight Centre. It teaches me how to prompt AI to 'red-team' my writing—to critique flaws rather than default to dishing out praise. And to watch out for AI 'hallucinations' like made-up names of books and fake quotations falsely attributed to real people. I stay conflicted: is it okay to use large language models that ride on the backs of writers and artists, that have learned by scraping creative works with no regard for privacy or copyright? And what about the environmental toll—the depredations on water and energy that the data centres inflict, especially in developing countries?

I look for my answers in books about AI. Supremacy: AI, ChatGPT and the Race That Will Change the World is a 2024 book by Parmy Olson that won the Financial Times Business Book of the Year. It takes me close to AI stars like Sam Altman of OpenAI and Demis Hassabis of DeepMind, as well as to the dangers of decision-making being left to a tiny elite. But it leaves me wanting more.

A friend recommends The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI, a memoir by Chinese-American scientist Dr. Fei-Fei Li. It begins with an exciting chapter, with Fei-Fei traveling from her West Coast Google office to Washington, D.C., to testify to a US Senate committee on the direction of AI research. For many pages I am enraptured, reading Fei-Fei's family history, how she helped her parents run their dry cleaning store while studying at Princeton and then working at Stanford. Li has been a pioneer in AI image recognition with her ImageNet project, and this makes for fascinating reading. The book veers between scientific excitement and apprehension at where AI research is going, and confirms my unease over the economic and existential implications of this new technology.

Then I discover Empire of AI by Karen Hao. From the very first page, I am highlighting lines, drawn in by Hao's historical analysis of AI research over the years, everything from the 'AI winter' to the dispute between two schools of AI research—the symbolists and the connectionists. Empire of AI ticks all the boxes. It is rich in history and human detail, demystifying core concepts like deep learning and neural networks. Hao gives us the stars like Geoffrey Hinton, Ilya Sutskever and Greg Brockman—and also the workers, the data labelers and content moderators like Mophat Okinyi and Oskarina Fuentes Anaya, and shows the havoc that AI jobs have brought to their lives, as they are forced to deal with explicit sexual content and violent images and to perform AI training tasks for a pittance.

For me, the most moving part is the story of Sam Altman's sister Annie Altman, who turned to sex work after suffering huge health challenges and severe financial duress, in stark contrast to Sam's lifestyle of multimillion-dollar homes and luxury cars. 'Annie's story also complicates the grand narrative that Sam and other OpenAI executives have painted of AI ushering in a world of abundance. Altman has said that he expects AI to end poverty... And yet, against the reality of the lives of the workers in Kenya, activists in Chile, and Altman's own sister's experience bearing the brunt of all of these problems, those dreams ring hollow,' says Hao.

I put aside Empire of AI to go back to my day. I know it's ironic and it feels very meta, but after writing this, I ask DeepSeek to design a brief depicting a writer dealing with the good and bad sides of AI, and then I use that output to ask Gemini to design the illustrations for me.

AI, the perfect productivity tool?

That evening, as I walk down towards the market to buy an AI-recommended geyser, I find myself grateful for Karen Hao's book. Because if AI's future is being written by the Altmans and the Musks of the world, excluding large sections of the world, there are things we can do to participate. Reading books like Hao's pushes us to pay attention—to the workers behind the algorithms, to the biases in the data, to the futures we're building one query at a time. Books like these arm us to fight back: to push for policy changes, demand transparency in training data, and support ethical AI movements.

So yes, I will use AI. But I'll also keep reading and buying subscriptions to real writers and real news outlets, because the best defence against a dystopian future is to dream of a better one, and then to fight for it. What about you, dear Reader? Do you find AI more anxiety-inducing or enabling? Or a complex mix of both? And can you suggest any other such books on AI that we can add to this vital reading list?

(Sonya Dutta Choudhury is a Mumbai-based journalist and the founder of Sonya's Book Box, a bespoke book service. Each week, she brings you specially curated books to give you an immersive understanding of people and places. If you have any reading recommendations or reading dilemmas, write to her at sonyasbookbox@

The ‘Big' reason why you must carefully read Facebook and Instagram's terms and conditions

Times of India

an hour ago

After years of training its generative AI models on billions of public images from Facebook and Instagram, Meta is reportedly seeking access to billions of photos users haven't publicly uploaded, sparking fresh privacy debates. While the social media giant explicitly states it is not currently training its AI models on these private photos, the company has declined to clarify whether it might do so in the future or what rights it will hold over these images, a report has said.

The new initiative, first reported by TechCrunch on Friday (June 27), sees Facebook users encountering pop-up messages when attempting to post to Stories. These prompts ask users to opt into "cloud processing," which would allow Facebook to "select media from your camera roll and upload it to our cloud on a regular basis." The stated purpose is to generate "ideas like collages, recaps, AI restyling or themes like birthdays or graduations."

The report notes that by agreeing to this feature, users also consent to Meta's AI terms, which permit the analysis of "media and facial features" from these unpublished photos, alongside metadata like creation dates and the presence of other people or objects. Users also grant Meta the right to "retain and use" this personal information.

Meta used public, not private, data to train its generative AI models

According to The Verge, Meta recently acknowledged that it used data from all public content published on Facebook and Instagram since 2007 to train its generative AI models. Although the company stated it only used public posts from adult users over 18, it has remained vague about the precise definition of 'public' and what constituted an 'adult user' in 2007.

Ryan Daniels, a Meta public affairs manager, reiterated to the publication that this new 'cloud processing' feature is not currently used to train its AI models. "[The story by the publication] implies we are currently training our AI models with these photos, which we aren't. This test doesn't use people's photos to improve or train our AI models," Maria Cubeta, a Meta comms manager, was quoted as saying. Cubeta also described the feature as 'very early,' innocuous, and entirely opt-in, stating, "Camera roll media may be used to improve these suggestions, but are not used to improve AI models in this test."

Furthermore, while Meta said that opting in grants permission to retrieve only 30 days' worth of camera roll data at a time, Meta's own terms suggest some data retention may be longer: 'Camera roll suggestions based on themes, such as pets, weddings and graduations, may include media that is older than 30 days.'
