Meta invests $14.3B in AI firm Scale and recruits its CEO for 'superintelligence' team
The deal announced Thursday reflects a push by Meta CEO Mark Zuckerberg to revive AI efforts at the parent company of Facebook and Instagram as it faces tough competition from rivals such as Google and OpenAI.
Meta announced what it called a 'strategic partnership and investment' with Scale late Thursday. Scale said the $14.3 billion investment puts its market value at over $29 billion.
Scale said it will remain an independent company but the agreement will 'substantially expand Scale and Meta's commercial relationship.' Meta will hold a 49% stake in the startup.
Scale CEO Alexandr Wang, though leaving for Meta with a small group of other Scale employees, will remain on Scale's board of directors. Replacing him as interim CEO is Jason Droege, previously the company's chief strategy officer, who has held executive roles at Uber Eats and Axon.
Zuckerberg's increasing focus on the abstract idea of 'superintelligence' — which rival companies call artificial general intelligence, or AGI — is the latest pivot for a tech leader who in 2021 went all-in on the idea of the metaverse, changing the company's name and investing billions into advancing virtual reality and related technology.
It won't be the first time since ChatGPT's 2022 debut sparked an AI arms race that a big tech company has gobbled up talent and products from innovative AI startups without formally acquiring them. Microsoft hired key staff from startup Inflection AI, including co-founder and CEO Mustafa Suleyman, who now runs Microsoft's AI division.
Google pulled in the leaders of AI chatbot company Character.AI, while Amazon made a deal with San Francisco-based Adept that sent its CEO and key employees to the e-commerce giant. Amazon also got a license to Adept's AI systems and datasets.
Wang was a 19-year-old student at the Massachusetts Institute of Technology when he and co-founder Lucy Guo started Scale in 2016.
They won influential backing that summer from the startup incubator Y Combinator, which was led at the time by Sam Altman, now the CEO of OpenAI. Wang dropped out of MIT, following a trajectory similar to that of Zuckerberg, who quit Harvard University to start Facebook more than a decade earlier.
Scale's pitch was to supply the human labor needed to improve AI systems, hiring workers to draw boxes around a pedestrian or a dog in a street photo so that self-driving cars could better predict what's in front of them. General Motors and Toyota have been among Scale's customers.
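For readers unfamiliar with annotation work, the sketch below shows, in Python, what a single bounding-box label of the kind described above might look like; the field names, coordinates and labeler ID are hypothetical illustrations, not Scale's actual data format.

```python
# Hypothetical example of a bounding-box annotation a human labeler might
# produce for one street photo; the schema is illustrative, not Scale's own.
annotation = {
    "image": "street_scene_0042.jpg",
    "labels": [
        # Each box is (x_min, y_min, x_max, y_max) in pixel coordinates.
        {"class": "pedestrian", "box": (312, 140, 398, 360)},
        {"class": "dog",        "box": (505, 290, 580, 355)},
    ],
    "labeler_id": "worker_1187",  # the human who drew the boxes
}

# A perception model for a self-driving car is trained to reproduce boxes
# like these from raw pixels, so it can predict what is in front of it.
for obj in annotation["labels"]:
    print(obj["class"], obj["box"])
```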
What Scale offered to AI developers was a more tailored version of Amazon's Mechanical Turk, which had long been a go-to service for matching freelance workers with temporary online jobs.
More recently, the growing commercialization of AI large language models — the technology behind OpenAI's ChatGPT, Google's Gemini and Meta's Llama — brought a new market for Scale's annotation teams. The company claims to service 'every leading large language model,' including those from Anthropic, OpenAI, Meta and Microsoft, by helping to fine-tune their training data and test their performance. It's not clear what the Meta deal will mean for Scale's other customers.
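As a rough illustration of the fine-tuning and evaluation work described above, here is a hedged sketch of a human preference record of the kind widely used to refine large language models; the structure and the small scoring helper are generic assumptions, not any particular customer's format.

```python
# Hypothetical preference record used in human-feedback fine-tuning:
# an annotator reads two candidate model responses and marks the better one.
preference_record = {
    "prompt": "Explain why the sky is blue in one sentence.",
    "response_a": "Because sunlight scatters off air molecules, and blue "
                  "light scatters the most.",
    "response_b": "The sky reflects the ocean.",
    "chosen": "response_a",  # the annotator's judgment
    "reason": "Response B is a common misconception.",
}

# Aggregated over many such records, these judgments both steer a model
# toward answers humans prefer and serve as a benchmark for new versions.
def accuracy(records, model_picks):
    """Fraction of records where the model's pick matches the annotator's."""
    agree = sum(1 for r, pick in zip(records, model_picks) if pick == r["chosen"])
    return agree / len(records)

print(accuracy([preference_record], ["response_a"]))  # 1.0
```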
Wang has also sought to build close relationships with the U.S. government, winning military contracts to supply AI tools to the Pentagon and attending President Donald Trump's inauguration. The head of Trump's science and technology office, Michael Kratsios, was an executive at Scale for the four years between Trump's first and second terms. Meta has also begun providing AI services to the federal government.
Meta has taken a different approach to AI than many of its rivals, releasing its flagship Llama system for free as an open-source product that enables people to use and modify some of its key components. Meta says more than a billion people use its AI products each month, but it's also widely seen as lagging behind competitors such as OpenAI and Google in encouraging consumer use of large language models, also known as LLMs.
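To show what "use and modify" means in practice for an open-weight model, the sketch below loads a Llama-family checkpoint with the Hugging Face transformers library; the specific model ID is only an example, and the code assumes the weights have already been downloaded under Meta's license.

```python
# Minimal sketch of running an open-weight Llama model locally with the
# Hugging Face transformers library (requires transformers, torch and
# accelerate). The model ID shown is an illustrative example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example open-weight checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What is Scale AI known for?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```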
It hasn't yet released its purportedly most advanced model, Llama 4 Behemoth, despite previewing it in April as 'one of the smartest LLMs in the world and our most powerful yet.'
Meta's chief AI scientist Yann LeCun, who in 2019 was a winner of computer science's top prize for his pioneering AI work, has expressed skepticism about the tech industry's current focus on large language models.
'How do we build AI systems that understand the physical world, that have persistent memory, that can reason and can plan?' LeCun asked at a French tech conference last year.
These are all characteristics of intelligent behavior that large language models 'basically cannot do, or they can only do them in a very superficial, approximate way,' LeCun said.
Instead, he emphasized Meta's interest in 'tracing a path towards human-level AI systems, or perhaps even superhuman.' When he returned to France's annual VivaTech conference on Wednesday, LeCun dodged a question about the pending Scale deal but said his AI research team's plan has 'always been to reach human intelligence and go beyond it.'
'It's just that now we have a clearer vision for how to accomplish this,' he said.
LeCun co-founded Meta's AI research division more than a decade ago with Rob Fergus, a fellow professor at New York University. Fergus later left for Google but returned to Meta last month after a five-year absence to run the research lab, replacing longtime director Joelle Pineau.
Fergus wrote on LinkedIn last month that Meta's commitment to long-term AI research 'remains unwavering' and described the work as 'building human-level experiences that transform the way we interact with technology.'
O'Brien writes for the Associated Press.