Stargate partner Crusoe lands $750 million credit line for AI buildout


CNBC | 11-06-2025

Cloud infrastructure startup Crusoe, which is helping to build OpenAI's Stargate data center project in Texas, said Wednesday that it has secured a $750 million credit line from Brookfield Asset Management.
The new debt will go toward data centers, Nvidia chips and electrical and power generation infrastructure, Crusoe CEO Chase Lochmiller said in an interview.
"We're in a very capex-heavy business, which requires having significant and very deep pools of capital to be able to build both what the world needs and what our customers are demanding from us," Lochmiller said.
Crusoe is collaborating on Stargate, one of the largest hubs for running artificial intelligence models, which President Donald Trump announced in January. Over the course of four years, OpenAI and partners including Oracle plan to invest up to $500 billion to construct AI infrastructure.
In December, Crusoe touted $600 million in funding with participation from Fidelity, Mubadala and Nvidia. In March it announced a $225 million credit line from Upper90 Capital Management. Last month Crusoe, Blue Owl Capital and Primary Digital Infrastructure disclosed the second phase of a $15 billion joint venture to develop a data center in Abilene, Texas, that will host 50,000 Nvidia graphics processing units.
Crusoe is following the lead of other cloud providers focused on AI. Last year CoreWeave, which supplies OpenAI and its partner Microsoft, announced a $650 million credit line, in addition to billions in debt. CoreWeave went public earlier this year and is currently valued at about $71 billion, having almost quadrupled in value since its IPO.
Unlike CoreWeave, Crusoe builds its own data centers rather than leasing them. Investors valued Crusoe at $2.8 billion in the company's funding round announced in December.
"A lot has happened since then," Lochmiller said.
In May CoreWeave reported financial results for the first time as a public company, revealing new business from Google and OpenAI and revenue growth of 420%. The company's net loss more than doubled to about $315 million.
"Just kind of seeing how public markets reacting to that has been quite positive, and I think that's very encouraging to us in terms of the equity value of our business," Lochmiller said.
In addition to CoreWeave, Crusoe faces competition from top cloud providers such as Amazon and smaller players like Lambda and Nebius.
Demand for ChatGPT and other OpenAI services remains feverish. On Monday OpenAI said it had reached $10 billion in annualized revenue when excluding one-time deals, up from $5.5 billion in 2024.
Founded in 2018, Crusoe is based in Denver, with about 800 employees. Clients include the Massachusetts Institute of Technology, Together AI and Windsurf. In March, Crusoe said NYDIG plans to acquire the startup's bitcoin mining business.


Related Articles

I turned off all AI features on my Pixel phone — and instantly regretted it

Android Authority | an hour ago

I had this realization — epiphany of sorts — that while we've become more conscious of generative AI tools like ChatGPT and Gemini, we often use AI much more than we actively perceive. Every app you touch on your phone has some kind of smarts and automation baked in. It's constantly learning from your patterns and improving in the background. That nudged me to experiment with becoming more intentional about these AI additions and disable them for a cleaner look and feel. No smart suggestions I mindlessly use, no Assistant to speak to, and no on-device smarts. All turned off.

I enthusiastically planned to do this for a week, but I soon realized I was being too optimistic. What sounded like a solid digital detox plan turned into a quiet reckoning: my phone is a well-oiled system with subtle automations I don't think I can live without anymore.

This is the most digitally impaired I've felt

I imagined turning off smart features across all my main apps would feel like going back to the good-old Nokia bar phone days. Nostalgia made that seem enticing — something I thought I'd actually want — but practically, it was far from rosy.

The most frustrated I got during my time off AI was with Gboard. Without swipe typing, predictive text, and autocorrect — the very features we all love to meme about — my entire phone felt broken. The number and variety of misspellings I could come up with made me question my self-worth as a writer. And fixing each one of them made me a painfully slow typist. Group chats would often move on from a topic by the time I'd finished typing my take — total Internet Explorer–style late blooming.

In Google Photos, edits became much more manual. While I enjoy playing with contrast and tone and whatnot myself, I really missed the one-tap fixes that helped with lighting and gave me a quick, clean version to share on Instagram or at least build on. More importantly, I couldn't use any of the smart editing features you get a Pixel for — Magic Editor, Photo Unblur, Best Take. Without them, it was like going back to the cave days of modern tech (2010, I mean).

Oh, and I had to completely disable Gemini/Google Assistant. I honestly felt like Joaquin Phoenix in Her, sorely missing his AI companion. I couldn't ask it to control smart home devices or help with Android Auto — everything became manual. I had to now type out my reminders, and changing music in the car turned into a dangerously distracting chore. That's when I noticed how often I absentmindedly said 'Ok Google' while walking around the house. I guess we've all been in the Her era all along without even realizing it.

Quality inferiority of life

Beyond the big-ticket features I lost, I found myself stumbling without all the little ones, too. Without Pixel's Live Captions, I couldn't watch videos in noisy places and ended up saving them for later — not to consume more intentionally, but out of frustration. Gmail and Google Messages no longer suggested quick replies or helped finish my sentences. I had to type out full messages and emails like it was 2015.
Maps stopped telling me when to leave home based on traffic, and it didn't remember my parking spot either. Once, I forgot where I'd parked because I didn't save the location manually. Google Photos stopped resurfacing old memories during the day — no surprise moments with friends, family, or random mountain dogs I clicked a decade ago. Not getting to see dog photos randomly is the lowest kind of inferiority in life.

The good side of un-intelligification

Besides sparing me time to coin my own words, the lack of AI on my phone did help in a few ways. You must've already guessed the first one — battery life benefits. I couldn't track it rigorously since I had limited time with this setup, but the gains were in the 10–15% range, which was noticeably better than usual. More importantly, the phone just felt quieter. No unnecessary alerts, no screen lighting up every half hour with nudges I didn't need. It felt more analog — like a tool I controlled, not something that subconsciously controlled me. I picked it up when I needed to, not because I was tempted to see what was waiting for me. But was it enough to keep me on this routine? You already know the answer to this, too.

I want all the AI magic back — right now

That was me last weekend, soon after I started the experiment. The lack of AI smarts was annoying at first, then it got frustrating enough to slow down my regular day. Simple things took twice the time, especially without Gboard's assistive typing. And that's when it hit me that AI isn't just Gemini or the ChatGPT app. It's ambient. It works in the background, often silently, making tiny decisions and smoothing over rough edges without drawing attention to itself. Quiet enough to fade into the background — until you turn it all off.

Hopefully, this little try-out gives you a good idea of why it's not worth trying for yourself. Convenience is the point of AI, and I'm all for it. Like I said, I lasted far fewer days than I'd planned. I remembered the exact sequence in which I turned everything off and flicked it all back on just as quickly. I want Photos to clean up distracting objects in my shots. I want the Assistant to find my playlist while I'm driving. And I absolutely cannot live without Gboard's smarts. So yes, I'm back to using my smart-phone the way it was meant to be — smartly.

Your member of Congress might be using ChatGPT

Business Insider | an hour ago

In December, Rep. Thomas Massie used an analogy for foreign aid that was an instant hit among his libertarian and America-First Republican fans. "US foreign aid spending is like watering the neighbor's yard while your house is on fire," the Kentucky Republican posted on X, adding a fire emoji. Fox News wrote an article about it, and two months later, the libertarian student group Young Americans for Liberty turned it into an Instagram post.

As it turns out, Massie didn't come up with the line himself. Grok did. Massie told BI this month that he ripped the phrase from a speech he asked the xAI-developed chatbot to generate using his voice. He said he's done this more than once. "Out of five paragraphs, I'll find one sentence that's good," Massie said. "But it makes it worth doing."

Leaning on AI for speechwriting is an apparently bipartisan affair on Capitol Hill. "I'll type in some phrases and say, can we make this more punchy?" Democratic Rep. Ro Khanna of California told BI, adding that he began using ChatGPT "almost like an editor" in the last year. "There was some speech I gave where it edited in a couple of lines that people thought, 'Wow, that's really good,'" Khanna said.

Congress has developed a reputation for lagging behind the public when it comes to adopting new technology. Plenty of lawmakers told BI that they have yet to get into using AI, either because they're skeptical that it will be useful for them or because they just haven't gotten around to it. But several lawmakers have begun to casually adopt the technology, most often as a search engine and research tool. Khanna said he uses both ChatGPT and Grok, turning to the technology "two to three times per day." Massie, who uses Grok because of its convenient placement within the X app, said he uses the chatbot for "anything."

'Impressively good at certain things and pretty miserable at some things'

As Sen. Ron Johnson of Wisconsin has waged a fight to make deeper cuts to federal spending as part of the "Big Beautiful Bill," he's been consulting with Grok. "I got up at 3 o'clock in the morning with an idea to use it," the Wisconsin Republican told BI in early June. He said the technology's been useful for running the numbers on the bill's impact on the deficit and for finding documents that support his arguments. "It's really great at identifying sources without me having to crawl around in government forms."

In some ways, members of Congress are just doing what other Americans are doing. More and more people are using AI at work, according to a recent Gallup poll, with 40% of employees saying they use it a few times per year. Another 19% say they use it frequently, while 8% say they use it on a daily basis.

Sen. Ted Cruz of Texas, a champion of a controversial provision in the "Big Beautiful Bill" that would restrict states' ability to regulate AI for 10 years, told BI that while he "would not claim to be a sophisticated AI user," he's been using ChatGPT as an "enhanced search engine." Cruz said he recently asked an AI chatbot about his own record, when he "could not remember when I had first taken a public position" on a particular policy area. "It gave a very thorough answer, going back to an interview I'd done in 2012 and a comment I'd made in 2014," Cruz said. "That research previously would have required some staff assistance, spending hours and hours, and you still wouldn't have found anything."

Large language models like ChatGPT and Grok are known to sometimes present false information as fact — a phenomenon known as "hallucinating."
For Democratic Sen. Elizabeth Warren of Massachusetts, that's enough to discourage her from using it. "It lies," Warren told BI. "I've tried using it, and it gets things wrong that I already know the answer to. So when I see that, I've lost all confidence."

Democratic Sen. Chris Murphy of Connecticut said he's tried ChatGPT and has been disappointed by its apparent limitations, even when carrying out more basic tasks. In one instance, Murphy said he asked ChatGPT to generate a list of his Democratic colleagues ordered alphabetically by first name, only for it to include retired senators. "It seems to be impressively good at certain things and pretty miserable at some things," Murphy said.

Even those who are otherwise fans of the technology said they're aware that they could be getting fed incorrect information. "My chief of staff has astutely warned me that AI is often confidently wrong," Johnson said. "So you really have to be careful in how you phrase your questions."

"It definitely hallucinates on you," Massie said. "It told me there was a Total Wine and More in Ashland, Kentucky, and no such thing exists."

The AI Mental Health Market Is Booming — But Can The Next Wave Deliver Results?

Forbes | an hour ago

AI tools promise scalable mental health support, but can they actually deliver real care, or just simulate it?

In April of 2025, Amanda Caswell found herself on the edge of a panic attack one midnight. With no one to call and the walls closing in, she opened ChatGPT. As she wrote in her piece for Tom's Guide, the AI chatbot calmly responded, guiding her through a series of breathing techniques and mental grounding exercises. It worked, at least in that moment.

Caswell isn't alone. Business Insider reported earlier that an increasing number of Americans are turning to AI chatbots like ChatGPT for emotional support, not as a novelty, but as a lifeline. A recent survey of Reddit users found many people report using ChatGPT and similar tools to cope with emotional stress. These stats paint a hopeful picture: AI stepping in where traditional mental health care can't. But they also raise a deeper question about whether these tools are actually helping.

A Billion-Dollar Bet On Mental Health AI

AI-powered mental health tools are everywhere — some embedded in employee assistance programs, others packaged as standalone apps or productivity companions. In the first half of 2024 alone, investors poured nearly $700 million into AI mental health startups globally, the most for any digital healthcare segment, according to Rock Health.

The demand is real. Mental health conditions like depression and anxiety cost the global economy more than $1 trillion each year in lost productivity, according to the World Health Organization. And per data from the CDC, over one in five U.S. adults under 45 reported symptoms of anxiety or depression in 2022. Yet many couldn't afford therapy or were stuck on waitlists for weeks — leaving a care gap that AI tools increasingly aim to fill.

Companies like Blissbot are trying to do just that. Founded by Sarah Wang — a former Meta and TikTok tech leader who built AI systems for core product and global mental health initiatives — Blissbot blends neuroscience, emotional resilience training and AI to deliver what she calls 'scalable healing systems.'

'Mental health is the greatest unmet need of our generation,' Wang explained. 'AI gives us the first real shot at making healing scalable, personalized and accessible to all.'

She said Blissbot was designed from scratch as an AI-native platform, a contrast to existing tools that retrofit mental health models into general-purpose assistants. Internally, the company is exploring the use of quantum-inspired algorithms to optimize mental health diagnostics, though these early claims have not yet been peer-reviewed. It also employs privacy-by-design principles, giving users control over their sensitive data.

'We've scaled commerce and content with AI,' Wang added. 'It's time we scale healing.'

Blissbot isn't alone in this shift. Other companies, like Wysa, Woebot Health and Innerworld, are also integrating evidence-based psychological frameworks into their platforms. While each takes a different approach, they share the common goal of delivering meaningful mental health outcomes.

Why Outcomes Still Lag Behind

Despite the flurry of innovation, mental health experts caution that much of the AI being deployed today still isn't as effective as claimed. 'Many AI mental health tools create the illusion of support,' said Funso Richard, an information security expert with a background in psychology.
'But if they aren't adaptive, clinically grounded and [don't] offer context-aware support, they risk leaving users worse off — especially in moments of real vulnerability.'

Even when AI platforms show promise, Richard cautioned that outcomes remain elusive, noting that AI's perceived authority could mislead vulnerable users into trusting flawed advice, especially when platforms aren't transparent about their limitations or aren't overseen by licensed professionals.

Wang echoed these concerns, citing a recent Journal of Medical Internet Research study that pointed out limitations in the scope and safety features of AI-powered mental health tools.

The regulatory landscape is also catching up. In early 2025, the European Union's AI Act classified mental health-related AI as 'high risk,' requiring stringent transparency and safety measures. While the U.S. has yet to implement equivalent guardrails, legal experts warn that liability questions are inevitable if systems offer therapeutic guidance without clinical validation.

For companies rolling out AI mental health benefits as part of diversity, equity, inclusion (DEI) and retention strategies, the stakes are high. If tools don't drive outcomes, they risk becoming optics-driven solutions that fail to support real well-being.

However, it's not all gloom and doom. Used thoughtfully, AI tools can help free up clinicians to focus on deeper, more complex care by handling structured, day-to-day support — a hybrid model that many in the field see as both scalable and safe.

What To Ask Before Buying Into The Hype

For business leaders, the allure of AI-powered mental health tools is clear: lower costs, instant availability and a sleek, data-friendly interface. But adopting these tools without a clear framework for evaluating their impact can backfire. So what should companies be asking?

Before deploying these tools, Wang explained, companies should interrogate the evidence behind them. 'Are they built on validated frameworks like cognitive behavioral therapy (CBT) or acceptance and commitment therapy (ACT), or are they simply rebranding wellness trends with an AI veneer?' she questioned. 'Do the platforms measure success based on actual outcomes — like symptom reduction or long-term behavior change — or just logins? And perhaps most critically, how do these systems protect privacy, escalate crisis scenarios and adapt across different cultures, languages, and neurodiverse communities?'

Richard agreed, adding that 'there's a fine line between offering supportive tools and creating false assurances. If the system doesn't know when to escalate — or assumes cultural universality — it's not just ineffective. It's dangerous.'

Wang also emphasized that engagement shouldn't be the metric of success. 'The goal isn't constant use,' she said. 'It's building resilience strong enough that people can eventually stand on their own.' She added that the true economics of AI in mental health don't come from engagement stats. Rather, she said, the costs show up later — in the price we pay for shallow interactions, missed signals and tools that mimic care without ever delivering it.

The Bottom Line

Back in that quiet moment when Caswell consulted ChatGPT during a panic attack, the AI didn't falter. It guided her through that moment like a human therapist would. However, it also didn't diagnose, treat, or follow up. It helped someone get through the night — and that matters. But as these tools become part of the infrastructure of care, the bar has to be higher.
As Caswell noted, 'although AI can be used by therapists to seek out diagnostic or therapeutic suggestions for their patients, providers must be mindful of not revealing protected health information due to HIPAA requirements.' That's especially true because scaling empathy isn't just a UX challenge. It's a test of whether AI can truly understand — not just mimic — the emotional complexity of being human.

For companies investing in the future of well-being, the question isn't just whether AI can soothe a moment of crisis, but whether it can do so responsibly, repeatedly and at scale. 'That's where the next wave of mental health innovation will be judged,' Wang said. 'Not on simulations of empathy, but on real and measurable human outcomes.'
