US govt likely collateral damage in Zuckerberg's talent raid at OpenAI
Meta CEO Mark Zuckerberg is shaking up the artificial intelligence industry with an unprecedented recruitment drive that is not only targeting OpenAI's top talent but also making it even harder for the US government to build its own tech bench.
Zuckerberg is offering mind-boggling compensation packages, sometimes exceeding $100 million in the first year alone, to lure leading AI researchers from OpenAI and other companies. Over four years, total payouts could soar to $300 million, as reported by WIRED. These are not just high salaries; they rival the kind of money usually reserved for star athletes or major start-up valuations.
The campaign culminated this week with Zuckerberg unveiling Meta Superintelligence Labs (MSL), his new elite AI division. The Meta founder has personally courted potential hires at his residences in Palo Alto and Lake Tahoe. His most high-profile recruit so far is Alex Wang, co-founder of Scale AI, who will serve as Meta's Chief AI Officer. Former GitHub CEO Nat Friedman will lead product and applied AI development. Eleven other top-tier hires were listed in an internal memo.
Why it matters
Zuckerberg's all-out raid is dramatically inflating AI compensation and intensifying a talent war already underway in Silicon Valley. The ripple effect is particularly damaging for the US government, which was already struggling to compete for AI expertise. With tens of millions now easily attainable in private industry, public service becomes a much harder sell.
Meanwhile, Chinese tech firms are quickly gaining ground, supported by their government's ability to direct top talent into state projects. A recent Wall Street Journal report warned that China's AI models, from companies like DeepSeek and Alibaba, are rapidly gaining traction across Asia, Europe, and Africa.
Backdrop and implications
Zuckerberg's hiring spree is part of a larger strategic pivot, reminiscent of his earlier move to shift Facebook's focus to mobile. Then, he bought Instagram and WhatsApp to catch up. Now, instead of acquiring companies, he's betting on individuals.
It's a bold move—but not without risk. Meta has spent heavily developing its large language model, Llama, to catch up with ChatGPT, Claude, and Gemini. But The Wall Street Journal notes that Meta's track record in generative AI has made some recruits hesitant.
Still, Zuckerberg sees the opportunity clearly. OpenAI has projected massive growth: $10 billion in annual revenue already, with targets of $125 billion by 2029 and $174 billion by 2030. Anthropic, an OpenAI spinoff, is on a $4 billion annual revenue pace. For Meta, the payoff of dominating this sector could be trillions in long-term gains.
Altman's response
OpenAI CEO Sam Altman acknowledged the aggressive poaching attempt, telling employees that Meta did manage to hire 'a few great people' but largely missed out on OpenAI's top talent. In a Slack message, he commented, 'Missionaries will beat mercenaries,' stressing that OpenAI's strength lies in its mission-driven culture.
He also pointed out on a recent podcast that OpenAI's financial model rewards success with strong long-term incentives, aligning innovation with economic gain.
The broader concern
This highly public bidding war reflects an underlying AI arms race that's now impacting national interests. For government agencies, the challenge is existential. They're increasingly priced out of a market where the world's biggest corporations treat top researchers like venture-backed unicorns. And without major reforms or incentives, Uncle Sam may be left watching from the sidelines.