
Microsoft in advanced talks for continued access to OpenAI tech, Bloomberg News reports
The companies have discussed new terms that would allow Microsoft to use OpenAI's latest models and technology even if the ChatGPT maker declares it has achieved artificial general intelligence (AGI), or AI that surpasses human intelligence, the report said.
A clause in OpenAI's current contract with Microsoft would shut the software giant out of some rights to the startup's advanced technology once OpenAI achieves AGI.
Negotiators have been meeting regularly, and an agreement could come together in a matter of weeks, Bloomberg News reported.
Microsoft and OpenAI did not immediately respond to a Reuters request for comment.
OpenAI needs Microsoft's approval to complete its transition into a public-benefit corporation. The two have been in negotiations for months to revise the terms of their investment, including the future equity stake Microsoft will hold in OpenAI.
Last month, The Information reported that Microsoft and OpenAI were at odds over the AGI clause.
OpenAI is also facing a lawsuit from Elon Musk, who co-founded the company with Sam Altman in 2015 but left before it surged in popularity, accusing OpenAI of straying from its founding mission — to develop AI for the good of humanity, not corporate profit.
Microsoft is set to report June quarter earnings on Wednesday, with its relationship with OpenAI in the spotlight, as the startup turns to rivals Google (GOOGL.O), Oracle and CoreWeave (CRWV.O) for cloud capacity.

Related Articles


Reuters
US agency approves OpenAI, Google, Anthropic for federal AI vendor list
WASHINGTON, Aug 5 (Reuters) - The U.S. government's central purchasing arm on Tuesday added OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude to a list of approved artificial intelligence vendors to speed their use by government agencies. The move by the General Services Administration allows the federal government to accelerate adoption of AI tools by making them available to agencies through a platform with contract terms already in place. GSA said approved AI providers "are committed to responsible use and compliance with federal standards."


Reuters
Wells Fargo downgrades US small-cap equities to 'unfavourable'
Aug 5 (Reuters) - Wells Fargo Investment Institute on Tuesday downgraded U.S. small-cap equities to "unfavorable" from "neutral", citing the segment's heavy tariff exposure and weak earnings.


Telegraph
ChatGPT to stop advising users if they should break up with their boyfriend
ChatGPT is to stop telling people they should break up with their boyfriend or girlfriend. OpenAI, the Silicon Valley company that owns the tool, said the artificial intelligence (AI) chatbot would stop giving clear-cut answers when users type in questions about 'personal challenges'. The company said ChatGPT had given wayward advice when asked questions such as 'should I break up with my boyfriend?'.

'ChatGPT shouldn't give you an answer. It should help you think it through – asking questions, weighing pros and cons,' OpenAI said. The company also admitted that its technology 'fell short' when it came to recognising signs of 'delusion or emotional dependency'.

OpenAI has been battling claims that its technology makes symptoms of mental illnesses such as psychosis worse. Chatbots have been hailed as offering an alternative to therapy and counselling, but experts have questioned the quality of the advice provided by AI psychotherapists. Research from NHS doctors and academics last month warned that the tool may be 'fuelling' delusions in vulnerable people, a phenomenon dubbed 'ChatGPT psychosis'. The experts said AI chatbots had a tendency to 'mirror, validate or amplify delusional or grandiose content', which could lead mentally ill people to lose touch with reality.

OpenAI has already been forced to tweak its technology after the chatbot became overly sycophantic, heaping praise and encouragement on users. The company added that it would begin prompting users who spend excessive amounts of time talking to ChatGPT to take a break, amid concerns that heavy AI use could be linked to higher levels of loneliness.

In March, a study published by the Massachusetts Institute of Technology's Media Lab and researchers from OpenAI found that obsessive users of ChatGPT – who relied on it for emotional conversations – reported higher levels of loneliness.
'Higher daily usage – across all modalities and conversation types – correlated with higher loneliness, dependence and problematic use, and lower socialisation,' the researchers said. 'Those with stronger emotional attachment tendencies and higher trust in the AI chatbot tended to experience greater loneliness and emotional dependence.'