
OpenAI starts using Google AI chips to cut costs and rely less on Microsoft, Nvidia: Report
Despite the new partnership, Google has reportedly chosen not to offer OpenAI access to its most advanced TPUs. A Google Cloud employee reportedly told The Information that the company is limiting OpenAI's use of its highest-end chips, likely to maintain a competitive edge.

The shift to Google's hardware comes at a time when OpenAI's relationship with Microsoft is showing signs of strain. Microsoft has been OpenAI's biggest investor: the company put in $1 billion in 2019 and integrated OpenAI's technology into its products such as Microsoft 365 and GitHub Copilot.

Meanwhile, The Wall Street Journal recently reported that OpenAI executives had discussed whether to accuse Microsoft of anticompetitive conduct and potentially seek a regulatory review of their contract. Though both companies issued a joint statement expressing optimism about their continued collaboration, behind-the-scenes negotiations over financial terms and operational control reportedly remain unresolved.

One of the biggest sticking points is apparently revenue sharing. OpenAI is reportedly planning to reduce the share of revenue it pays Microsoft from the current 20 percent to just 10 percent by 2030. Additionally, the two firms have not yet agreed on how to structure Microsoft's equity stake in a reconfigured unit of OpenAI, nor on the rights to future profits.

OpenAI has also asked to revisit an exclusivity clause that gives Microsoft sole rights to host its models on the Azure cloud. With its growing computing needs and new reliance on Google Cloud's TPUs, the company is looking to create more flexibility in its cloud infrastructure strategy.

Another reported source of friction is OpenAI's $3 billion acquisition of Windsurf, a startup focused on AI-powered coding. Microsoft is said to be pushing for access to Windsurf's intellectual property, potentially to strengthen its own GitHub Copilot product. OpenAI, however, has apparently resisted these efforts.

Related Articles

The Hindu
Meta poaches top AI talent from OpenAI and DeepMind as Zuckerberg escalates AI push
The minds that brought you the conversational magic of ChatGPT and the multimodal power of Google's Gemini now have a new home: Meta. In a stunning talent exodus, captured in a single tweet by Meta's new AI chief, Alexandr Wang, the architects of the current AI revolution have been poached from OpenAI, Google DeepMind, and Anthropic. Mr. Wang's tweet is not merely a hiring announcement; it is a declaration of intent. By announcing his role as Chief AI Officer at Meta alongside Nat Friedman and a veritable 'who's who' of top-tier AI researchers, Mr. Wang and Meta were signaling a seismic shift in the technology landscape. This mass talent acquisition from rivals like OpenAI, Google DeepMind, and Anthropic is CEO Mark Zuckerberg's most audacious move yet to dominate the next technological frontier. It represents a calculated effort to bolster Meta's AI venture by poaching the very minds that built its competitors' greatest successes. However, this aggressive pivot towards superintelligence cannot be viewed in a vacuum. It is haunted by the ghosts of Meta's past — from the Cambridge Analytica scandal to the Instagram teen mental health crisis — forcing a critical examination of whether Zuckerberg has evolved from a disruptive force into a responsible steward for the age of AI.

Why Meta's AI talent acquisition signals a new era

The list of new hires is a strategic masterstroke: it's not just about adding headcount; it's about acquiring institutional knowledge and simultaneously weakening the competition. According to reports, Mr. Zuckerberg has personally handled these AI hires, carefully picking top talent from all his rivals. From OpenAI, Meta has poached the creators behind GPT-4o's groundbreaking voice and multimodal capabilities, as well as foundational model builders. Shengjia Zhao, the co-creator of ChatGPT and GPT-4, is now part of Meta, a significant loss for Sam Altman's AI company. From Google DeepMind, Mr. Zuckerberg has poached Jack Rae, the pre-training tech lead for Gemini 2.5, and other experts in text-to-image generation. From Anthropic, Meta has poached Joel Pobar, the AI firm's inference expert.

This talent raid provides Meta with some immediate advantages. First, it gives the company instant credibility that it is serious about its AI bet, as the new team has direct, hands-on experience building and training the world's most advanced models. Second, it disrupts the roadmaps of its competitors, forcing them to regroup and replace key personnel. Third, it creates a powerful gravitational pull for future talent, signaling that Meta is now the premier destination for ambitious AI work, backed by near-limitless computational resources and a direct path to impacting billions of users.

Can Zuckerberg be trusted with the future of AI?

This aggressive push into AI stands in stark contrast to the defining scandals of Zuckerberg's career. The Cambridge Analytica affair revealed a fundamental flaw in Facebook's DNA: a platform architecture that prioritized growth and data collection over user privacy and security, which was then exploited for political manipulation. The company's response was slow, defensive, and ultimately insufficient to repair the deep chasm of public trust. Then, 'The Facebook Files' exposé by The Wall Street Journal detailed internal research showing that Meta knew Instagram was toxic for the mental health of teenage girls. The company's leadership chose to downplay the findings and continue with product strategies that exacerbated these harms. Both incidents stem from the same root philosophy: 'move fast and break things,' a mantra that prioritizes scale and engagement above all else, with societal consequences treated as unfortunate but acceptable collateral damage. Applying this ethos to AI, a technology with far greater potential for both good and harm, is a terrifying prospect.
If a social feed algorithm could destabilize democracies and harm teen self-esteem, what could a superintelligent agent, deployed to three billion users with the same growth-at-all-costs mindset, be capable of? Mr. Zuckerberg's past misadventures are not just historical footnotes; they are the core reason for public skepticism towards Meta's AI ambitions.

How Zuckerberg has evolved from social media to superintelligence

Mr. Zuckerberg's character, as observed through his actions over two decades, is one of relentless, almost singular, ambition. He has consistently demonstrated a willingness to be ruthless in competition (cloning Snapchat's features into Instagram Stories), a visionary eye for long-term bets (acquiring Instagram and WhatsApp, pivoting to the Metaverse), and an ability to withstand immense public and regulatory pressure. His critics would argue he is a leader who lacks a deep-seated ethical framework, often optimizing for power and market dominance while retroactively applying ethical patches only when forced by public outcry. His defenders might say he is a pragmatic engineer who is learning and adapting. The Cambridge Analytica scandal arguably forced him to mature from a hoodie-wearing coder into a global CEO who must at least speak the language of governance and responsibility.

How Meta's AI super-team challenges OpenAI and Google

The crucial question is whether this change is superficial or substantive. His current strategy with AI suggests a potential evolution. The open-sourcing of the Llama models can be interpreted in two ways. On one hand, it's a shrewd business move to commoditise the layer of the stack where OpenAI and Google have a strong lead, fostering an ecosystem dependent on Meta's architecture. On the other, it can be framed as a commitment to transparency and democratisation, a direct response to the 'black box' criticism leveled at his past operations. This new 'super-team' will be the ultimate test.
Will they be fire-walled by a new ethical charter, or will the immense pressure from Mr. Zuckerberg to 'win' the AI race override all other considerations?

How is Meta positioning itself for the AI age?

Against the closed, API-first models of OpenAI and the integrated-but-cautious approach of Google, Meta is carving out a unique strategic position. It is fighting the war on two fronts: by making Llama an open-source alternative, Meta is making itself the default foundation for thousands of startups, researchers, and developers, disrupting the business models of its rivals. Mr. Zuckerberg hasn't stopped there; he has also publicly committed to acquiring hundreds of thousands of high-end NVIDIA GPUs, signaling that his company will not be outspent on compute. With the addition of this new team, Meta completes the trifecta: massive data, unparalleled compute, and now, world-leading human talent. The goal is no longer just to build a chatbot for Messenger or an image generator for Instagram. As Mr. Wang's tweet boldly states, the aim is 'Towards superintelligence.' This is a direct challenge to the stated missions of DeepMind and OpenAI. The formation of this AI super-team is the culmination of Mr. Zuckerberg's pivot from social media king to aspiring AI emperor. It is an act of immense strategic importance, one that immediately elevates Meta to the top tier of AI development. Yet, the success of this venture will not be measured solely by the capability of the models it produces. It will be measured by whether Mr. Zuckerberg can build an organization that has learned from the profound societal failures of its past. This is a defining gambit for the Meta founder: a chance to redefine his legacy not as the creator of a divisive social network, but as the leader who responsibly ushered in the age of artificial intelligence.

The Hindu
FTC seeks more information about SoftBank's Ampere deal: Report
The U.S. Federal Trade Commission is seeking more details about SoftBank Group Corp's planned $6.5 billion purchase of semiconductor designer Ampere Computing, Bloomberg reported on Tuesday. The inquiry, known formally as a second request for information, suggests the acquisition may undergo an extended government review, the report said. SoftBank announced the purchase of the startup in March, part of its efforts to ramp up its investments in artificial intelligence infrastructure. The report did not state the reasoning for the FTC request. SoftBank, Ampere and the FTC did not immediately respond to a request for comment. SoftBank is an active investor in U.S. tech. It is leading the financing for the $500 billion Stargate data centre project and has agreed to invest $32 billion in ChatGPT-maker OpenAI.


Time of India
No AI without immigration: All 11 of Zuckerberg's new superintelligence hires are immigrants who completed their undergraduate degrees abroad
Whether one likes it or not, America was built by immigrants, the huddled masses that the Statue of Liberty's inscription welcomes with open arms. From Albert Einstein to Elon Musk, immigrants made America a great country, and now it looks like they are making America great again (MAGA, as the slogan goes), this time in artificial intelligence. America's edge isn't just its innovation; it's the global brainpower that has given the country an edge for centuries. And nothing makes that clearer than the news out of Meta's new Superintelligence Lab: 11 top AI researchers hired, and every single one of them is an immigrant. That's right: not one US undergrad among them. These are elite engineers, model architects, and machine learning pioneers who began their academic journeys in India, China, South Africa, the UK, and Australia. They trained in America's best labs, built some of the most advanced AI systems at OpenAI, DeepMind, and Anthropic, and now they've been snapped up by Zuckerberg to build the next wave of intelligence. If you want to understand why the US leads the world in AI, you don't need to look at GPUs or model benchmarks. Just look at the passports of the people writing the code.

Immigrants have long been the silent engine of Silicon Valley. From coding at Google to architecting cloud infrastructure at Microsoft, they form the backbone of America's biggest tech empires. A significant number of these engineers hail from India, whose talent pipeline has become essential to the innovation output of the U.S. tech sector. They don't just fill jobs; they define the direction of the industry. Now, as AI becomes the next great technological frontier, these same minds are shaping the models, algorithms, and safety protocols that will govern its future. Meta's new superintelligence team is just the latest proof that without immigration, there is no AI revolution.
Top AI talent hired by Meta to power its superintelligence lab

Trapit Bansal
Background: Previously at OpenAI
Education: BS, Indian Institute of Technology (IIT) Kanpur, India; PhD, University of Massachusetts Amherst, USA
Profile: Bansal is recognized for his groundbreaking work in reinforcement learning applied to chain-of-thought reasoning, a technique that helps AI models improve logical step-by-step problem-solving. As a co-creator of the o-series models, he has set new standards in large language model performance and reliability. Bansal's research is widely cited and has influenced both academic and industrial AI systems.

Shuchao Bi
Background: Previously at OpenAI
Education: BS, Zhejiang University, China; PhD, University of California, Berkeley, USA
Profile: Bi is a leader in multimodal AI, having co-developed the voice mode for GPT-4o and the o4-mini model. His expertise lies in integrating speech and text, enabling AI to understand and generate content across different formats. Bi's work has been crucial in advancing conversational AI and making digital assistants more natural and accessible.

Huiwen Chang
Background: Previously at Google Research
Education: BS, Tsinghua University, China; PhD, Princeton University, USA
Profile: Chang is known for inventing the MaskGIT and Muse architectures, which have become foundational in generative image models. She led the GPT-4o image generation team, driving innovations that allow AI to create high-quality, realistic images from text prompts. Her research bridges the gap between computer vision and creative AI applications.
Ji Lin
Background: Previously at OpenAI
Education: BEng, Tsinghua University, China; PhD, Massachusetts Institute of Technology (MIT), USA
Profile: Lin has played a key role in optimizing and scaling large language models, including GPT-4o and the o4 family. His work focuses on model efficiency and image generation, making state-of-the-art AI more accessible and cost-effective for real-world deployment. Lin's contributions are highly regarded in both academia and industry.

Joel Pobar
Background: Previously at Anthropic & Meta
Education: Bachelor of Information Technology (Honours), Queensland University of Technology (QUT), Australia
Profile: Pobar is a veteran infrastructure specialist with over a decade of experience building scalable AI systems. He has contributed to major projects like HHVM, Hack, and PyTorch, which are widely used in the tech industry. Pobar's expertise ensures that AI models run efficiently at scale, supporting rapid innovation and deployment.

Jack Rae
Background: Previously at Google DeepMind
Education: BS, University of Bristol, UK; MS, Carnegie Mellon University (CMU), USA; PhD, University College London (UCL), UK
Profile: Rae is a leading figure in large-scale language model research, having led pre-training for Gemini 2.5 and developed Google's Gopher and Chinchilla models. His work has advanced the understanding of model scaling laws and data efficiency, shaping the next generation of AI systems.

Hongyu Ren
Background: Previously at OpenAI
Education: BS, Peking University, China; PhD, Stanford University, USA
Profile: Ren has contributed to the development and post-training of GPT-4o and the o1/3/4o-mini models, focusing on improving robustness and reliability. His research addresses key challenges in making AI models safer and more trustworthy for widespread use.
Johan Schalkwyk
Background: Previously at Google
Education: Bachelor of Science in Engineering, University of Pretoria, South Africa
Profile: Schalkwyk is a Google Fellow and a pioneer in speech recognition, having led the Maya team and contributed to the early development of Google's Sesame project. His innovations have set industry standards in voice technology, powering products used by millions worldwide.

Pei Sun
Background: Previously at Google DeepMind
Education: BS, Tsinghua University, China; MS, Carnegie Mellon University (CMU), USA
Profile: Sun specializes in post-training and reasoning for advanced AI models, including Gemini, and has developed perception systems for Waymo's self-driving cars. His work bridges AI research and real-world applications, particularly in autonomous systems and robotics.

Jiahuai Yu
Background: Previously at OpenAI & Gemini
Education: BS, University of Science and Technology of China (USTC), China; PhD, University of Illinois Urbana-Champaign (UIUC), USA
Profile: Yu has made significant contributions to perception and multimodal AI, working on o3/4o-mini and GPT-4/4o. His research enables AI to interpret and generate information across text, images, and other modalities, expanding the possibilities of human-AI interaction.

Shengjia Zhao
Background: Previously at OpenAI
Education: BS, Tsinghua University, China; PhD, Stanford University, USA
Profile: Zhao is a co-creator of ChatGPT, GPT-4, and o4-mini, and has led research on data synthesis and AI safety. His work has had a direct impact on the reliability and ethical deployment of generative AI, making him a respected voice in the field.