
Microsoft and OpenAI forged a close bond. Why it's now too big to last.
When Microsoft and OpenAI first got together in 2019, the most powerful artificial intelligence in the world was literally playing games. AlphaGo from Google's DeepMind lab was the first machine to beat human Go champions, but that's all it did. AI as we know it now was still in its research phase.
Venture capital's focus was on cloud and cryptocurrency start-ups, but Microsoft saw something in the nonprofit AI lab called OpenAI, which had just come off a bruising leadership battle that saw Sam Altman prevail over Elon Musk. Without Musk's billions of dollars, OpenAI adopted a bespoke structure in which a for-profit AI lab is controlled by a nonprofit board. Investors' returns were capped at 100 times their stake; a $10 million investment, for example, could return at most $1 billion.
The reorganization cleared the way for Microsoft to invest $1 billion in OpenAI in 2019. Those funds fueled the release of ChatGPT in November 2022—the spark to the AI prairie fire that is still spreading. Soon thereafter, Microsoft invested another $10 billion, which supported OpenAI's rapid expansion. Since then, the bills have added up, given the high cost of scaling AI.
At first the two companies were symbiotic. All of OpenAI's AI computing ran on Microsoft's Azure cloud. Microsoft has access to all of OpenAI's intellectual property, including the catalog of models that underpins a range of AI services Microsoft offers through its Copilot products. When the OpenAI nonprofit board ousted Altman in a November 2023 coup, Microsoft CEO Satya Nadella backed Altman, a key endorsement that helped restore him to his post.
But the partnership that made so much sense from 2019 to 2023 has now made each company too dependent on the other. OpenAI has grand ambitions, and Sam Altman believes realizing them will require unprecedented computing power, more than Microsoft can provide. He would also like more control over the data-center buildout. And OpenAI is increasingly equipped to go it alone: it says subscriptions and licenses to ChatGPT are on track to bring in $10 billion a year.
For its part, Microsoft now relies on OpenAI as both a major customer and supplier. That's the kind of concentration risk that should make Microsoft executives nervous.
"OpenAI has become a significant new competitor in the technology industry," Microsoft President Brad Smith said in a February 2024 blog post. This was the first public indication that the relationship may not have been as cozy as some supposed. Microsoft began working on its own AI models that year, and in October 2024, it declined to participate in a $6.6 billion OpenAI funding round.
In January, Microsoft and OpenAI modified their agreement so that Microsoft would no longer be OpenAI's exclusive cloud provider but would retain a right of first refusal on all new business. Microsoft hasn't exercised that right to any great degree; OpenAI subsequently signed new cloud deals with CoreWeave and Alphabet's Google Cloud, two Microsoft competitors.
On the same January day as the deal modification, Altman stood in the Oval Office with President Donald Trump, Oracle Chairman Larry Ellison, and SoftBank Group CEO Masayoshi Son to announce Project Stargate, an ambitious plan to raise $500 billion for a massive cluster of AI data centers controlled by Altman. The partnership and the high-profile event made clear that OpenAI had new friends and was moving beyond its reliance on Microsoft.
The alliance on display in the Oval Office led to a $40 billion funding round in March, led by SoftBank. But it came with a string attached: $20 billion of it is contingent on OpenAI completing another reorganization, into a public-benefit corporation, by the end of the year, which would give SoftBank and other new investors more conventional shareholder rights.
There are key hurdles in the way of that restructuring and the $20 billion, including a lawsuit from Elon Musk and regulatory approvals from California, Delaware, and the federal government. The biggest obstruction, though, is that Microsoft has a large stake in the current OpenAI. To convert corporate structures, OpenAI will have to negotiate new terms with Microsoft, and in a ticking-clock scenario like this, Microsoft holds the leverage, and it grows each day.
According to The Wall Street Journal, negotiations are getting testy. The main point of contention is how much of the new OpenAI Microsoft will own. There is also the matter of OpenAI's acquisition of Windsurf, an advanced AI coding tool. Under the current arrangement, Microsoft has access to all of OpenAI's IP, which would include Windsurf. OpenAI doesn't want that, because Microsoft has its own coding assistant, GitHub Copilot, which puts the companies on yet another axis of competition.
In a joint statement, Microsoft and OpenAI told Barron's: "We have a long-term, productive partnership that has delivered amazing AI tools for everyone. Talks are ongoing and we are optimistic we will continue to build together for years to come."
According to the Journal, OpenAI thinks it could deter Microsoft from dragging out negotiations by keeping open the possibility of publicly accusing Microsoft of antitrust violations and lobbying the White House to open an investigation. Since the Stargate announcement, Altman has had a close relationship with Trump. In this regard, the Journal article is a message from OpenAI: We aren't powerless here.
This is how the divorce could get ugly. Microsoft could slow-walk the talks, and as the end of the year approaches, the pressure on OpenAI to settle or lose $20 billion in funding would grow. OpenAI, meanwhile, could start pushing its White House levers to encourage some type of investigation into Microsoft, what the WSJ called its "nuclear option." But as in any nuclear exchange, no one would emerge victorious: Microsoft would be tarred, and OpenAI would still miss its $20 billion deadline.
Since the launch of ChatGPT, AI in the U.S. has been dominated by the Microsoft-OpenAI alliance. The now inevitable breakup has everyone scrambling to fill the void.
Write to Adam Levine at adam.levine@barrons.com
