
Why Mark Zuckerberg Spent $14 Billion To Get Alexandr Wang To Meta
Meta relied on open source to attract developers, but now seeks a visionary leader to shape its AI future, prompting the $14B bet on Alexandr Wang to lead the charge
Mark Zuckerberg is reportedly under pressure as Meta struggles to keep pace in the rapidly advancing world of artificial intelligence. In a bold move to change course, Meta has made a massive investment aimed at strengthening its AI capabilities.
The tech giant has reportedly poured $14 billion into Scale AI, a leading data-labelling startup, in exchange for a 49% stake. That price implies a valuation of roughly $29 billion (14 ÷ 0.49 ≈ 29), about double what the company was worth before the deal, and is said to give Meta a strategic edge in the AI race.
Despite the substantial investment, Scale AI remains an independent entity with no changes to its board. Nevertheless, Meta now wields considerable influence over the company's operations.
Alexandr Wang, Scale AI's founder and CEO, plays a pivotal role in this arrangement. Although Wang retains his position on Scale's board, his partnership with Meta means the tech giant effectively steers Scale AI's decisions.
The deal was substantial enough to create the impression that Meta had acquired Scale AI outright. In reality, a significant portion of the money went to Scale AI's employees, who received substantial payouts for their shares while retaining some equity. This arrangement, reportedly Wang's own idea, ensured that his team could profit from the company's growth.
Why Is Meta Interested In Scale AI's Business?
Meta's interest in Scale AI is particularly noteworthy, given that the latter's primary business involves data labelling for machine learning, a service with minimal technological innovation. Scale AI caters to clients such as Toyota, General Motors, Etsy, and various governments, providing data preparation services for those keen on adopting AI but lacking the in-house capability to develop it.
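To make "data labelling" concrete: customers send raw examples out for annotation, several humans label each one, and a consensus check decides whether the label is trustworthy enough to train on. Below is a minimal sketch of that loop; the field names and agreement threshold are illustrative assumptions, not Scale AI's actual schema or API.

```python
from collections import Counter

# One labelling task: a raw example annotated by several humans.
# Field names are illustrative, not Scale AI's real schema.
task = {
    "item_id": "img_00042",
    "payload": "dashcam photo of an intersection",
    "labels": ["pedestrian", "pedestrian", "cyclist"],  # one per annotator
}

def consensus_label(labels, min_agreement=0.66):
    """Return the majority label if enough annotators agree, else None."""
    top, count = Counter(labels).most_common(1)[0]
    return top if count / len(labels) >= min_agreement else None

print(consensus_label(task["labels"]))  # -> "pedestrian" (2 of 3 agree)
```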
This investment in Scale AI does not align with Meta's core business interests, as Meta is not looking to become a B2B data-services company. The primary objective of the deal was to bring Alexandr Wang into Meta's fold, a strategy similar to Google's licensing deal with Character.AI and Microsoft's talent acquisition from Inflection AI.
The Race To Build The Best LLM
In today's AI-driven world, the company that builds the best Large Language Model (LLM) will dominate. It's a battle for market leadership, where knowing how to build models isn't enough. Without the right data, massive computing power, and the ability to scale, survival is unlikely.
Meta is currently trailing in the AI race. OpenAI has dominated the consumer space with ChatGPT, while Google and Anthropic hold strong positions in the developer ecosystem. Although Meta has released successive generations of its open Llama models, it has yet to secure the top spot in the LLM race.
Meta's core strategy so far has focused on open-sourcing its models, which helped attract developers and researchers to its ecosystem. However, the company now believes that open-source alone isn't enough. What it needs is a visionary leader to steer its AI future—and that's where Wang comes in. He is seen as the ideal choice to take Meta's AI ambitions to the next level.
First Published: July 01, 2025, 18:55 IST
Related Articles

United News of India · an hour ago
Engineering Progress—Prasad Boraskar's Enduring Legacy in Embedded Systems and Firmware Innovation
In the intricate world of embedded systems and firmware engineering, the bar for technical achievement is set exceptionally high. This domain, which serves as the foundation for advancements in augmented reality, smart security devices, and connected consumer electronics, demands not only advanced academic training but also a rare capacity for innovation and leadership. The professionals who excel here are those whose expertise spans system architecture, design verification, and the automation of complex hardware-software interactions—a profile that is exemplified by Prasad Boraskar.

With over 17 years of experience across some of the world's most influential technology companies, Boraskar has built a reputation as a senior embedded systems and firmware engineer whose work consistently drives industry progress. His academic credentials—a Master's degree in Electrical Engineering from the University of Southern California and a Bachelor's degree in Electronics and Telecommunications from Mumbai University—provide a solid foundation for his technical achievements. Throughout his career, Boraskar has focused on the design and verification of firmware for AR devices, security cameras, and embedded platforms, demonstrating a unique ability to bridge theoretical research with practical application.

At Meta, Boraskar led verification efforts for AR wearables, developing tools such as the Sensor Synchronization Fixture and the Streaming Data Analyzer. These innovations have reduced manual testing, accelerated product timelines, and set new benchmarks for efficiency and accuracy in the verification of sensor-driven systems. By standardizing timing measurements between sensors and devices, Boraskar's tools have improved data accuracy and enabled teams to deliver higher quality products in less time. The widespread adoption of these methodologies by leading technology firms underscores their originality and significance within the field.

Boraskar's impact extends to his work at Apple, where he built an end-to-end firmware update system for Systems on Chip (SoCs), enhancing update reliability and device security for millions of users. His contributions to wireless video transmission and video quality optimization at Netgear have helped improve home security systems that are now widely used in the United States. These achievements reflect a consistent pattern of innovation, where Boraskar's expertise supports robust product development and drives the evolution of the tech industry.

The significance of Boraskar's contributions is amplified by the context in which they occur. As embedded systems become increasingly central to the functionality of modern devices, the need for secure, scalable, and efficient solutions grows ever more critical. Boraskar's work in system architecture, firmware design verification, and automation addresses these needs directly, ensuring that products meet the highest standards of reliability and performance. His ability to lead cross-functional teams, manage complex projects, and deliver solutions that align with strategic business objectives further distinguishes him as a leader in his field.
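The article does not describe how the Sensor Synchronization Fixture works internally, but the measurement it standardizes, the timing offset between two sensor streams observing the same events, can be illustrated with a small sketch. The function name and the median-offset approach below are illustrative assumptions, not a description of Meta's actual tooling.

```python
def estimate_clock_offset(ts_a, ts_b):
    """Estimate the constant clock offset between two sensors that
    observed the same events; timestamps in seconds, index-aligned."""
    assert len(ts_a) == len(ts_b), "streams must cover the same events"
    diffs = sorted(b - a for a, b in zip(ts_a, ts_b))
    return diffs[len(diffs) // 2]  # median is robust to jitter and outliers

# Example: the camera clock runs about 5 ms behind the IMU clock.
imu    = [0.000, 0.010, 0.020, 0.030]
camera = [0.005, 0.015, 0.0251, 0.035]
print(f"offset: {estimate_clock_offset(imu, camera) * 1e3:.2f} ms")
```

A verification fixture would run a check like this across many captures and flag a failure whenever the offset drifts outside a set tolerance.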

Hindustan Times · an hour ago
Google techie welcomes younger brother to IT giant, internet says 'do bhai, dono tabahi'
A software engineer at Google has revealed that his younger brother will soon be joining the tech behemoth too, marking a milestone moment for the family, given how few applicants manage to land a job at the company. When Priyam Agarwal announced his job switch on the social media platform X (formerly Twitter), it elicited a proud reaction from his elder brother, Priyansh Agarwal.

What the Agarwal brothers posted

Priyam shared a screenshot of the 'Onboarding' portal at Google on July 5, which informed him that he had nine days left until he joined the search giant as a software engineer. 'Less than 10 days before I start a new journey. Super excited and a little nervous,' wrote the Delhi-based techie. His brother Priyansh, who is already working at Google, reposted his post with a proud message. 'Younger brother coming to Google as well. Super proud of him,' wrote Bengaluru-based Priyansh Agarwal.

Internet celebrates

The post was flooded with congratulatory messages. Many people also shared their surprise at two brothers landing jobs at a company with a famously low acceptance rate. 'Do bhai dono tabahi (Two brothers, both awesome),' wrote several X users in the comments section. 'Congratulations to you both,' read one comment. 'Wow, both the brothers working at Google. Congratulations sir, Google is my dream company,' another person said.

Google acceptance rate

Google does not publish data on how many applicants it accepts every year. However, industry estimates suggest that Google's acceptance rate sits between 0.2% and 0.5%, which is lower than the acceptance rate of Harvard. The company has also carried out several rounds of layoffs since 2023 in a bid to streamline operations and reduce costs. According to an AP report, Google has been periodically reducing its headcount since 2023 as the industry began to backtrack from the hiring spree triggered during pandemic lockdowns, which spurred feverish demand for digital services. Google began its post-pandemic retrenchment by laying off 12,000 workers in early 2023 and has since been trimming some divisions to help bolster its profits while ramping up its spending on artificial intelligence, a technology driving an upheaval that is starting to transform its search engine into a more conversational answer engine. (With inputs from AP)

Business Standard · 2 hours ago
AI may now match humans in spotting emotion, sarcasm in online chats
When we write something to another person, over email or perhaps on social media, we may not state things directly; our words may instead convey a latent meaning, an underlying subtext. We also often hope that this meaning will come through to the reader. But what happens if an artificial intelligence (AI) system is at the other end, rather than a person? Can AI, especially conversational AI, understand the latent meaning in our text? And if so, what does this mean for us?

Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone. Understanding how intense someone's emotions are, or whether they're being sarcastic, can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level. These are only some examples; we can imagine benefits in other areas of life, like social science research, policy-making and business.

Given how important these tasks are, and how quickly conversational AI is improving, it's essential to explore what these technologies can (and can't) do in this regard. Work on this issue is only just starting. Current work shows that ChatGPT has had limited success in detecting political leanings on news websites. Another study, focused on differences in sarcasm detection between large language models (LLMs), the technology behind AI chatbots such as ChatGPT, showed that some are better than others. Finally, a study showed that LLMs can guess the emotional 'valence' of words, the inherent positive or negative 'feeling' associated with them.

Our new study, published in Scientific Reports, tested whether conversational AI, including GPT-4, a relatively recent version of ChatGPT, can read between the lines of human-written texts. The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm, thus encompassing multiple latent meanings in one study. The study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8x7B. It involved 33 human subjects and assessed 100 curated items of text, and we found that these LLMs are about as good as humans at analysing sentiment, political leaning, emotional intensity and sarcasm.

For spotting political leanings, GPT-4 was more consistent than humans. That matters in fields like journalism, political science, or public health, where inconsistent judgement can skew findings or miss patterns. GPT-4 also proved capable of picking up on emotional intensity and especially valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell, although someone still had to confirm whether the AI was correct in its assessment, because AI tends to downplay emotions. Sarcasm remained a stumbling block for both humans and machines; the study found no clear winner there, so adding human raters doesn't help much with sarcasm detection.

Why does this matter? For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends.
GPT-4, on the other hand, opens the door to faster, more responsive research, which is especially important during crises, elections or public health emergencies. Journalists and fact-checkers might also benefit: tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start.

There are still concerns. Transparency, fairness and political leanings in AI remain issues. However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast, and may soon be valuable teammates rather than mere tools. Although this work doesn't claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance.

Our study's findings do raise follow-up questions. If a user asks the same question of AI in multiple ways, perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided, will the model's underlying judgements and ratings remain consistent? Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.
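The consistency probe the authors call for is straightforward to set up: ask a model the same rating question phrased several ways and measure the spread of its answers. Below is a minimal sketch of that idea; the query_llm stub, the paraphrases, and the -3 to +3 rating scale are illustrative assumptions rather than the study's actual protocol.

```python
import statistics

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-model call; wire up any real LLM client here."""
    raise NotImplementedError("connect your LLM provider")

# Several paraphrases of the same question, to probe rating stability.
PARAPHRASES = [
    "Rate the sentiment of this text from -3 (very negative) to +3 (very positive): {text}",
    "On a -3 to +3 scale, how positive or negative is the following? {text}",
    "Score this passage's sentiment, -3 to +3. Reply with one integer: {text}",
]

def sentiment_ratings(text: str) -> list[int]:
    """Collect one integer rating per paraphrase of the question."""
    return [int(query_llm(p.format(text=text)).strip()) for p in PARAPHRASES]

def stability(ratings: list[int]) -> float:
    """Standard deviation across paraphrases; lower means a more consistent model."""
    return statistics.pstdev(ratings)
```

Run over a corpus of test items, a check like this would quantify how much a model's judgements drift when only the wording of the prompt changes.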