OpenAI Seeks Additional Capital From Investors as Part of Its $40 Billion Round

WIRED
Jul 22, 2025, 2:23 PM

OpenAI, which recently announced a $40 billion round of financing, is seeking funding from new and existing investors to fulfill the deal.

Photo caption: OpenAI CEO Sam Altman speaks to members of the media as he arrives at the Sun Valley lodge for the Allen & Company Conference on July 8, 2025.

OpenAI is seeking capital from new and existing investors, two people familiar with the company's plans tell WIRED. The fundraising effort is part of a $40 billion round announced in March. The round will reopen on Monday, July 28, according to one of the sources, who has direct knowledge of the fundraising effort.
The $40 billion round announced earlier this year brought OpenAI's valuation up to $300 billion, making it one of the most highly valued private startups in history. The round was led by Japanese investment conglomerate SoftBank, which committed to contributing 75 percent of the total funding. The initial tranche was $10 billion, with $7.5 billion from SoftBank and another $2.5 billion from a syndicate of other investors. OpenAI is currently raising the final $30 billion, with $22.5 billion from SoftBank and $7.5 billion from a syndicate of other investors.
SoftBank's commitment could be slashed to $10 billion if OpenAI does not restructure by the end of the year, WIRED confirmed.
OpenAI declined to comment on the record.
OpenAI has raised a total of $63.92 billion since the company was founded in 2015, according to PitchBook. Its backers include a wide range of institutional and individual investors, including Microsoft, Andreessen Horowitz, Sequoia Capital, Founders Fund, Thrive Capital, Coatue Management, Nvidia, and Reid Hoffman. Microsoft and OpenAI's relationship is closely intertwined, with Microsoft providing OpenAI with massive amounts of cloud computing resources and OpenAI giving Microsoft exclusive access to its best models—though it was recently reported that their relationship has complications.
OpenAI has also partnered with SoftBank, among others, on Stargate, a four-year AI data center project in which upwards of $500 billion is projected to be invested. The Wall Street Journal reported earlier this week that the two entities have been at odds over certain aspects of the partnership, including where to build the data centers, and that OpenAI CEO Sam Altman has been making moves to sign deals for Stargate-aligned data centers without the Japanese firm.
SoftBank declined to comment on the record.
OpenAI's company structure has also been a point of contention, and has rankled Elon Musk, who helped launch the research lab with a mission to safeguard humanity against artificial general intelligence, or AGI. After Musk left the company's board in early 2018, OpenAI created a for-profit arm, in part to make it easier to fundraise. Last year Musk sued OpenAI for allegedly abandoning its original mission, saying the company is "not just developing but is refining an AGI [Artificial General Intelligence] to maximize profits for Microsoft, rather than for the benefit of humanity."
In May, OpenAI proposed a new structure that keeps the non-profit in control of the company and turns its current for-profit subsidiary into a public benefit corporation (PBC). The non-profit would hold shares in the PBC, and the PBC would in theory be designed to prioritize returns for shareholders while also pursuing projects with clear public benefits. SoftBank's investment in OpenAI is contingent on this new structure being approved by attorneys general in California and in Delaware by early next year.
Additional reporting by Kylie Robison and Zoë Schiffer.

Related Articles

Why Do Some AI Models Hide Information From Users?

Time Business News

In today's fast-evolving AI landscape, questions around transparency, safety, and the ethical use of AI models are growing louder. One particularly puzzling question stands out: why do some AI models hide information from users? Building trust, maintaining compliance, and driving responsible innovation all depend on understanding this dynamic, which is not merely academic for an AI solutions or product engineering company. Drawing on research, professional experience, and the practical difficulties of large-scale AI deployment, this article examines the causes of this behavior.

AI is a powerful instrument. It can help with decision-making, task automation, content creation, and even conversation replication. But great power carries great responsibility, and that responsibility sometimes includes intentionally withholding information from users.

Consider the figures. According to OpenAI's 2023 Transparency Report, GPT-based models declined over 4.2 million requests for breaking safety rules, such as requests involving violence, hate speech, or self-harm. A Stanford study on large language models (LLMs) found that more than 12% of filtered queries were not intrinsically harmful but were caught by overly aggressive filters, raising concerns about "over-blocking" and its effect on user experience. Research from the AI Incident Database shows that in 2022 alone there were almost 30 cases where private, sensitive, or confidential information was inadvertently shared or made public by AI models.

At its core, the goal of any AI model, especially an LLM, is to assist, inform, and solve problems. But that doesn't always mean full transparency. AI models are trained on large-scale datasets drawn from books, websites, forums, and more, and this training data can contain harmful, misleading, or outright dangerous content. So AI models are designed to:

- Avoid sharing dangerous information, like how to build weapons or commit crimes.
- Reject offensive content, including hate speech or harassment.
- Protect privacy by refusing to share personal or sensitive data.
- Comply with ethical standards, avoiding controversial or harmful topics.

As an AI product engineering company, we often embed guardrails (automatic filters and safety protocols) into AI systems. They are not arbitrary; they are required to prevent misuse and follow rules. A simplified sketch of such a filter appears after this section.

Expert insight: in projects where we developed NLP models for legal tech, we had to implement multi-tiered moderation systems that auto-redacted sensitive terms. This is not over-caution; it is compliance in action.

In AI, compliance is not optional. Companies building and deploying AI must align with local and international laws, including:

- GDPR and CCPA, privacy regulations requiring data protection.
- COPPA, which prevents AI from sharing adult content with children.
- HIPAA, which safeguards health data in medical applications.

These legal boundaries shape how much an AI model can reveal. A model trained for healthcare diagnostics, for example, cannot disclose medical information unless authorized. This is where AI solutions companies come in, designing systems that comply with complex regulatory environments.
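To make the guardrail idea concrete, here is a minimal, hypothetical Python sketch of the kind of pre-processing filter described above: a keyword-based category block combined with regex auto-redaction of sensitive terms. The category lists, patterns, and function names are illustrative assumptions, not any vendor's actual moderation stack; production systems typically rely on trained classifiers and policy engines rather than keyword matching.

```python
import re

# Hypothetical category keyword lists; a real system would use trained
# classifiers and policy engines rather than simple phrase matching.
BLOCKED_CATEGORIES = {
    "violence": ["build a weapon", "make a bomb"],
    "privacy": ["social security number", "home address"],
}

# Patterns for terms that get redacted rather than blocked outright,
# roughly mirroring the "auto-redaction" idea described above.
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]


def apply_guardrails(prompt: str) -> dict:
    """Return a decision: block the prompt with a reason, or pass the
    (possibly redacted) prompt through to the model."""
    lowered = prompt.lower()
    for category, phrases in BLOCKED_CATEGORIES.items():
        if any(phrase in lowered for phrase in phrases):
            return {
                "allowed": False,
                "reason": f"Request matches the '{category}' policy category.",
            }

    redacted = prompt
    for pattern, replacement in REDACTION_PATTERNS:
        redacted = pattern.sub(replacement, redacted)

    return {"allowed": True, "prompt": redacted}


if __name__ == "__main__":
    print(apply_guardrails("My SSN is 123-45-6789, can you store it?"))
    print(apply_guardrails("Explain how to build a weapon at home."))
```

In practice, a filter like this would sit in front of the model call, so disallowed requests are refused with a stated reason before any text is generated, and sensitive fragments never reach the model at all.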
Some users attempt to jailbreak AI models to make them say or do things they shouldn't. To counter this, models may:

- Refuse to answer certain prompts.
- Deny requests that seem manipulative.
- Mask internal logic to avoid reverse engineering.

As AI becomes more integrated into cybersecurity, finance, and policy applications, hiding certain operational details becomes a security feature, not a bug.

Although the intentions are usually good, there are consequences. Many users, including academic researchers, find that AI models:

- Avoid legitimate topics under the guise of safety.
- Respond vaguely, creating unproductive interactions.
- Fail to explain why an answer is withheld.

For educators or policymakers relying on AI for insight, this lack of transparency can create friction and reduce trust in the technology.

Industry observation: in an AI-driven content analysis project for an edtech firm, over-filtering prevented the model from discussing important historical events. We had to fine-tune it carefully to balance educational value and safety.

If an AI model consistently refuses to respond to a certain type of question, users may begin to suspect bias in the training data, censorship, or opaque decision-making. This fuels skepticism about how the model is built, trained, and governed. For AI solutions companies, this is where transparent communication and explainable AI (XAI) become crucial.

So, how can we make AI more transparent while keeping users safe?

First, models should not just say, "I can't answer that." They should explain why, with context. For instance: "This question may involve sensitive information related to personal identity. To protect user privacy, I've been trained to avoid this topic." This builds trust and makes AI systems feel cooperative rather than authoritarian.

Second, instead of blanket bans, modern models use multi-level safety filters. Emerging techniques include:

- SOFAI multi-agent architecture, where different AI components manage safety, reasoning, and user intent independently.
- Adaptive filtering, which considers user role (researcher vs. child) and intent.
- Deliberate reasoning engines, which use ethical frameworks to decide what can be shared.

As an AI product engineering company, incorporating these layers is vital in product design, especially in domains like finance, defense, or education. A sketch of how explained refusals and role-aware filtering fit together follows this section.

Third, AI developers and companies must communicate what data was used for training, what filtering rules exist, and what users can (and cannot) expect. Transparency helps policymakers, educators, and researchers feel confident using AI tools in meaningful ways.

Recent work, like DeepSeek's efficiency breakthrough, shows how rethinking distributed systems for AI can improve not just speed but transparency. DeepSeek used Mixture-of-Experts (MoE) architectures to cut down on unnecessary communication, which also means less noise in the model's decision-making path, making its logic easier to audit and interpret. Traditional systems often fail because they try to fit AI workloads into outdated paradigms. Future models should focus on asynchronous communication, hierarchical attention patterns, and energy-efficient design. These changes improve not just performance but also trustworthiness and reliability, which are key to information transparency.
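The explained-refusal and adaptive-filtering ideas above can be sketched together. The snippet below is a hypothetical illustration that assumes a separate classifier has already labeled the request with a topic; the policy table, role names, and explanation strings are invented for the example and are not taken from SOFAI or any other named architecture.

```python
from dataclasses import dataclass

# Hypothetical policy table: how strictly each topic is filtered depends on
# the user's role, echoing the "adaptive filtering" idea above.
POLICY = {
    "medical_details": {"researcher": "allow", "general": "summarize", "child": "refuse"},
    "personal_identity": {"researcher": "refuse", "general": "refuse", "child": "refuse"},
}

# Context returned alongside a limited answer, so the system explains *why*
# instead of giving a bare "I can't answer that."
REFUSAL_EXPLANATIONS = {
    "medical_details": "This topic involves health information that I can only discuss in general terms.",
    "personal_identity": "This question may involve sensitive personal information, which I avoid to protect privacy.",
}


@dataclass
class Decision:
    action: str          # "allow", "summarize", or "refuse"
    explanation: str = ""


def decide(topic: str, user_role: str) -> Decision:
    """Choose a filtering action for a classified topic and user role, and
    attach a human-readable explanation whenever the answer is limited."""
    action = POLICY.get(topic, {}).get(user_role, "refuse")
    if action == "allow":
        return Decision("allow")
    return Decision(action, REFUSAL_EXPLANATIONS.get(
        topic, "This topic is restricted by the system's safety policy."))


if __name__ == "__main__":
    print(decide("medical_details", "researcher"))   # allowed, no caveat needed
    print(decide("personal_identity", "general"))    # refusal with an explanation
```

The point of the design is that every limited answer carries a human-readable reason, which directly addresses the "fails to explain why" complaint raised earlier.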
If you're in academia, policy, or industry, understanding the "why" behind AI information hiding allows you to ask better questions, choose the right AI partner, design ethical systems, and build user trust.

As an AI solutions company, we integrate explainability, compliance, and ethical design into every AI project. Whether it's conversational agents, AI assistants, or complex analytics engines, we help organizations build models that are powerful, compliant, and responsible.

In conclusion, AI models hide information for safety, compliance, and security reasons. However, trust can only be established through transparency, clear explainability, and a strong commitment to ethical engineering. Whether you're building products, crafting policy, or doing research, understanding this behavior can help you make smarter decisions and leverage AI more effectively.

If you're a policymaker, researcher, or business leader looking to harness responsible AI, partner with an AI product engineering company that prioritizes transparency, compliance, and performance. Get in touch with our AI solutions experts, and let's build smarter, safer AI together. Transform your ideas into intelligent, compliant AI solutions today.

A new type of dealmaking is unnerving startup employees. Here are the questions to ask to make sure you don't get left out.

Business Insider

A new kind of dealmaking is sweeping Silicon Valley, forcing employees to be vigilant about how much trust they are willing to put in startup founders.

Over the past two years, instead of acquiring AI startups outright, Big Tech companies have been licensing their technology or making deals for top talent, with startup employees sometimes getting divided into separate camps of haves and have-nots. Those with the most desirable AI skills reap a windfall, while those who remain are shrouded in uncertainty.

That recently happened to Windsurf employees after the AI coding company, which had been on the verge of being acquired by OpenAI for $3 billion, was instead split in half. Google paid billions to hire Windsurf's CEO and top talent, and the hundreds of employees who remained were bought by another startup, Cognition.

Unfortunately for startup employees, many investors expect these kinds of novel transactions to continue, as the velocity of developments in AI makes companies unlikely to want to wait months or years for regulatory approval.

Candidates need to ask tough questions about the founder

Given that traditional M&A has mostly gone out the window, it is more important than ever for startup employees to do their homework, advises Steve Brotman, managing partner at Alpha Partners.

"In light of what we just saw with Windsurf, it's crucial to understand the ownership dynamics," Brotman said. "You don't want to be working 100-hour weeks only to realize your options are underwater or your exit upside is capped. And remember: companies that are transparent and deliberate about governance tend to be better long-term bets, both for your career and your equity."

"Ask hard questions about runway, revenue, burn, and investor syndicate quality," Brotman continued. "Who's on the board? Are they structured for long-term growth or a quick flip?"

The most important thing candidates should assess is how much they trust the founder, according to Deedy Das, an investor at Menlo Ventures. "Nobody wants to talk about the fact that founders control almost everything that happens in a company, including how you get paid, when you get paid, how the equity vests, and when you can sell the equity," said Das. "It's everything, so having trust in your founder to do the right thing by the team is extremely important."

Just as investors would typically research a founder before writing a check to one of their many portfolio companies, prospective employees should ask around about founders whom they could be tied to for years, said Hari Raghavan, cofounder and CEO of Autograph. "They should be doing diligence on whether this is a standup person," said Raghavan. "Do your best to suss out, 'Are these guys going to take care of me?'"

Raghavan suggests that founders should sign a written pledge agreeing to treat employees well in terms of stock options and exit scenarios. "These are things that any good founder should be doing, and the vast majority of good ones do, but I think even just establishing that set of rules is a good idea," he said.

Prospective employees should not be afraid to "interrogate" a founder on how they are thinking about an exit, according to Jake Saper, a general partner at Emergence Capital. "Ask founders how they would weigh staying independent, a classic acquisition, or a licensing deal that carves out key people," Saper said. "Their answer tells you a lot about the journey you're signing up for."

Scrutinizing the fine print has also become more important, said Saper.
"Make sure offer letters and stock agreements spell out vesting acceleration, treatment of options, and retention bonuses if only 'substantially all' of the team moves," Saper said. "Those clauses mattered at Inflection and Windsurf, and they will matter again." In 2024, Microsoft hired the founder of Inflection AI, Mustafa Suleyman, and some of the startup's staff to help lead its AI efforts. In June, Meta paid $14 billion for a 49 percent stake in the data labeling company Scale AI and hired its founder, Alexandr Wang, to run its Superintelligence group. Meta also hired some of the startup's researchers. Last week, Scale AI laid off 14% of its workforce, or 200 employees, and revealed it is unprofitable. Finally, Saper says to take a hard look at the underlying business model of a startup to make sure it can last. "Startups with unique data feeds, embedded distribution or clear recurring revenue have leverage to stay independent," Saper said. "If a company's main asset is a brilliant but portable research team, you should assume Big Tech will come knocking."

OpenAI's GPT-5 Launch Is Just Days Away, Claims Report

Business Insider

Artificial intelligence pioneer OpenAI is set to launch its groundbreaking new GPT-5 model in the next few weeks.

Powerful System

According to a report in The Verge, OpenAI's chief executive Sam Altman plans to launch the model early next month. The model, which is understood to combine OpenAI's o-series and GPT-series into a single, powerful system, was expected to be launched this summer. It is understood that the delay was due to the need for additional testing.

The new model will reportedly be 'positioned as an AI system that incorporates distinct models and can perform different functions as opposed to just a single AI model.' The Microsoft (MSFT)-backed startup did not comment, although, as reported by TipRanks earlier this year, the company is keen to simplify its offerings.

Altman has said that this isn't just about making an AI that can handle more tasks; it's about creating one that thinks more deeply before it responds. This means we can expect future versions not only to be more efficient but also to offer richer, more thoughtful interactions.

Reasoning Skills

In addition, GPT-5 is more than a simple update. It brings in the reasoning skills of the o-series, particularly the o3 model, which means it will be smarter and more aware of the context it is working in. This should help cut down on errors and boost performance. The essence of the system is that it should be a smarter version of what came before, easier and more useful for users, from employees to students to ordinary folk.

In fact, Altman wants to make a free copy of GPT-5 available to everybody. 'I am very interested in what it means to give everybody on Earth a free copy of GPT-5, running for them all the time,' he said.

But let's not get too carried away. The Verge report did have some caveats. 'While GPT-5 looks likely to debut in early August, OpenAI's planned release dates often shift to respond to development challenges, server capacity issues, or even rival AI model announcements and leaks,' the report said.
