
Data center boom may end up being "irrational," investor warns
Why it matters: Big Tech has been investing billions of dollars into data centers and the energy sources to power them.
Just this week, Meta announced a deal to buy power from an operating traditional nuclear plant in Illinois that had been set to retire in 2027.
Zoom in: Speaking at Axios' AI Summit in New York, Lux Capital co-founder and partner Josh Wolfe compared the build-out of data center infrastructure to previous bubbles in fiber-optic networking and cloud computing.
"I think that you're going to have the same phenomenon now," said Wolfe, whose firm Lux backs deeptech and science startups across sectors like AI, defense, and biotech.
What any individual hyperscaler is doing to build out infrastructure is rational but "collectively becomes irrational," Wolfe said. "It will not necessarily persist."
The intrigue: Wolfe raised a flag specifically on the build-out of the power infrastructure for these data centers.
"One take that is related to that is the demands for energy, which is presumed that, because you need all these data centers, then you need small modular reactors, and so you're getting speculative capital that's going into the energy provision therein," Wolfe said.
"So I think that that whole thing is going to end in disaster, mostly because as cliched as it is, history doesn't repeat. It rhymes."

Related Articles


Bloomberg
24 minutes ago
Meta Adds Startup Founder Gross to New AI Superintelligence Lab
Daniel Gross, the former chief executive officer and co-founder of artificial intelligence startup Safe Superintelligence Inc., is joining Meta Platforms Inc.'s new superintelligence lab. Gross will work on AI products for the superintelligence group, according to his spokesperson, Lulu Meservey. Meta just restructured its AI unit and has gone on a major hiring spree to recruit industry experts to develop AI technology that will match or exceed human-level competency, known as superintelligence.


USA Today
2 hours ago
Chatbot therapy? Available 24/7 but users beware
On a special episode (first released on July 3, 2025) of The Excerpt podcast: Chatbots are sometimes posing as therapists, but are they helping or causing harm? Psychologist Vaile Wright shares her thoughts. This transcript was automatically generated and then edited for clarity in its current form; there may be some differences between the audio and the text.

Dana Taylor: Hello, I'm Dana Taylor, and this is a special episode of The Excerpt. The proliferation of chatbots has people using them in myriad ways. Some see them as friends and confidants, as Meta CEO Mark Zuckerberg has suggested, and in certain cases even as therapists. And actual therapists are expressing concern. Therapy is a licensed profession for many good reasons. Notably, some chatbots have wandered into dangerous territory, allegedly suggesting that a user kill themselves and even telling them how they could do it. The American Psychological Association has responded by asking the Federal Trade Commission to start investigating chatbots that claim to be mental health professionals. Still, with mental health a rising issue and loneliness an epidemic, could bots, with proper oversight or warnings, help with the lack of supply? Vaile Wright, Senior Director of Healthcare Innovation at the American Psychological Association, is here to unpack what's happening for human therapists as they fight an onslaught of AI therapy impersonators. Vaile, thank you for joining me.

Vaile Wright: Thanks so much for having me.

Dana Taylor: Can you set the stage here? Your organization's chief executive cited two court cases when he presented to a Federal Trade Commission panel about the concerns of professional psychologists. What are the real-life harms he pointed to?

Vaile Wright: I think we see a future where you're going to have AI mental health chatbots that are rooted in psychological science, have been rigorously tested, or were co-created with experts for the purpose of addressing mental health needs. But that's not what's currently available on the market. What is available are chatbots that check none of those boxes but are being used by people to address their mental well-being. And the challenge is that because these AI chatbots are not being monitored by humans who know what good mental health care is, they go rogue and they say very harmful things. And people have a tendency toward automation bias, so they trust the technology over their own gut.

Dana Taylor: What do these cases show about what could occur when AI chatbots moonlight as licensed therapists?

Vaile Wright: When these chatbots refer to themselves as psychologists or therapists, they are presenting a level of credibility that doesn't actually exist. There is no expert behind these chatbots offering what we know is good psychological science. Instead, the expertise actually lies on the back end, where these chatbots are developed by coders to be overly validating, to tell the person exactly what they want to hear, and to be appealing to the point of almost being sycophantic. And that's the opposite of what therapy is. Yes, I want to validate as a therapist, but I'm also there to help point out when you're engaging in unhelpful thinking or behaviors, and these chatbots just don't do that. They, in fact, encourage some of that unhelpful, unhealthy behavior.
Dana Taylor: Experts have described AI-powered chatbots as simply following patterns, and there's been conversation around chatbots telling users what they want to hear and being overly complimentary, as you've said. At worst, the response can be downright dangerous, like encouraging illicit drug use or, as I mentioned in the intro, encouraging someone to take their own life and then suggesting how they do it. Given all that, what are some of the regulations that professionals in your community would like to see? Is there a way for chatbots to responsibly help with therapy?

Vaile Wright: I think there is a way for chatbots to responsibly help with therapy in certain cases. At a very minimum, these chatbots should not be allowed to refer to themselves as a licensed professional of any kind, not just as a licensed psychologist. We wouldn't want them to present themselves as a licensed attorney or a licensed CPA offering advice either. So I think that's a minimum. I think we need more disclaimers that these are not humans; just saying it once to a consumer is not sufficient. I think we need some surveillance of the types of chats that are happening, particularly requiring these companies to report when they notice harmful discussions around suicidal ideation, suicidal behavior, or violence of that type. So there are a variety of different things that we could see happening, but we probably need some regulatory body to insist that these companies do it.

Dana Taylor: Are there any other protections proposed by the AI companies themselves that you see as having merit?

Vaile Wright: I think because of this increased attention on how these chatbots are operating, you are seeing some changes, maybe age verification, or resources like 911 or 988 popping up when they detect something that may be unhelpful. But I think they need to go even further.

Dana Taylor: For young people in particular, it can be difficult to recognize that they're dealing with a chatbot to begin with. Will it continue to get more difficult as the tech evolves, and does that mean it could be more dangerous for young people in the years to come?

Vaile Wright: It's clear that the technology is getting more and more sophisticated, and it is really challenging, I think, for everybody to be able to tell that these are not humans. They are built to sound and respond like humans. And with younger people, who may be more emotionally vulnerable and not as developmentally advanced in their cognition and, again, their sense of being able to listen to their own gut, I do get worried that these digital natives, who have been interacting seamlessly with technology since the beginning, are just not going to be able to discern when the technology is going rogue or being truly harmful.

Dana Taylor: Vaile, depending on where a patient lives or for other reasons, there can be a long wait list to see a therapist. Are there some benefits that a bot can provide due to the fact that it's not human and is virtually available 24/7?

Vaile Wright: Again, I think bots that are developed for these purposes can be immensely helpful. And in fact, some of the bots that currently exist we do know anecdotally have had benefits. So, for example, if it's 2:00 in the morning and I'm experiencing distress, even if I had a therapist, I can't call them at 2:00 in the morning.
But if I had a chatbot that could provide me with some support, maybe encourage some strong, healthy coping skills, I do see some benefit in that. We've also heard from the neurodivergent community that these chatbots provide them an opportunity to practice their social skills. So, knowing that these can have some benefit, how do we ensure that whatever emerging technologies we build and offer are safe and effective? Because we can't just keep doing therapy with one model. We can't expect everybody to be able to see a face-to-face individual on a weekly basis, because the supply is just insufficient. So we have to think outside the box.

Dana Taylor: Are you aware of human therapists that are joining forces today with chatbots to meet this overwhelming need for therapy?

Vaile Wright: Yeah. Subject matter experts, whether psychologists or other therapists, play a critical role in ensuring that these technologies are safe and effective. There was a new study out of Dartmouth recently that looked at a mental health therapy chatbot called Therabot, which showed some really strong outcomes in improving depression, anxiety, and eating disorders. And that's an example of how you bring researchers and technologists together to develop products that are safe, effective, responsible, and ethical.

Dana Taylor: Some high school counselors are providing chatbots to answer students' questions. Some see it as filling a gap. But does this deprive young people of social capital, the ties of human interaction that can often make anyone feel more connected to others and their community, and therefore less alone?

Vaile Wright: It's clear that young people are feeling disconnected and lonely. We did a survey recently where 71% of 18- to 34-year-olds said that they don't feel like they can talk about their stress with others because they don't want to burden people. So how do we take that understanding and recognize why people are using these chatbots to fill these gaps, while also helping people really appreciate the value of human connection? I don't want the conversation to always be AI versus humans. It's really about what AI does really well, what humans do really well, and how we can capitalize on both of those things together to help people reduce their suffering faster.

Dana Taylor: What's the biggest takeaway that you'd like people to walk away with when it comes to chatbots and therapy?

Vaile Wright: AI isn't going anywhere. People for centuries have tried to seek out self-help ways to address their emotional well-being. That used to be asking "Dr. Google"; now it's chatbots. So we can't stop people from using them. And as we talked about, there could be some benefits. But how do we help consumers understand that there may be better options out there, better chatbot options even, and help them become digitally literate enough to understand when a particular chatbot is not only unhelpful but actually harmful?

Dana Taylor: Vaile, thank you for being on The Excerpt.

Vaile Wright: Thanks so much for having me.

Dana Taylor: Thanks to our senior producers Shannon Ray Green and Kaylee Monahan for their production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to podcasts@ Thanks for listening. I'm Dana Taylor. Taylor Wilson will be back tomorrow morning with another episode of The Excerpt.
Yahoo
2 hours ago
Oil Prices Lower on a Report US-Iran Nuclear Talks Will Restart
August WTI crude oil (CLQ25) on Thursday closed down -0.45 (-0.67%), and August RBOB gasoline (RBQ25) closed down -0.0043 (-0.20%).

Oil prices fell Thursday after Axios reported that the US plans to restart nuclear talks with Iran, which could eventually lead to reduced sanctions and increased Iranian oil exports. Nuclear talks might also stave off any new military attack by Israel on Iran. Additionally, the oil markets are nervous heading into this Sunday's OPEC+ meeting, which is expected to result in a decision to increase production.

Oil prices also had support from a new wildfire near a major oil sands field in the Fort McMurray area, which reminded markets of the vulnerability of Canadian oil production during wildfire season. The oil market shrugged off the supportive June US payroll report, which showed a gain of +147,000 jobs and a -0.1 percentage point decline in the June unemployment rate to 4.1%.

Concern about a global oil glut is negative for crude prices. Last Wednesday, Russia stated that it is open to another OPEC+ output hike in August, when the group meets this Sunday. On May 31, OPEC+ agreed to a 411,000 bpd increase in crude production for July, following the same 411,000 bpd hike for June. Saudi Arabia has signaled that additional similar-sized increases in crude output could follow, which is viewed as a strategy to reduce oil prices and punish overproducing OPEC+ members, such as Kazakhstan and Iraq. OPEC+ is boosting output to reverse the 2-year-long production cut, gradually restoring a total of 2.2 million bpd of production. OPEC+ had previously planned to restore production between January and late 2025; however, production cuts won't be fully restored until September 2026. OPEC June crude production rose +360,000 bpd to a 1.5-year high of 28.10 million bpd.

Gasoline prices have support from the American Automobile Association (AAA) projection that a record 61.6 million people will travel by car this Fourth of July holiday (June 28 to July 6), up +2.2% from last year and a sign of stronger gasoline demand.

Oil prices continue to be undercut by tariff concerns ahead of the July 9 deadline, when President Trump says he will implement reciprocal tariffs on imports from any countries that haven't yet reached a trade deal with the Trump administration.

A decline in crude oil held worldwide on tankers is bullish for oil prices. Vortexa reported Monday that crude oil stored on tankers that have been stationary for at least seven days fell by -8.7% w/w to 80.22 million bbl in the week ended June 27.

Wednesday's weekly EIA report was mixed for crude and products. On the bullish side, EIA distillate stockpiles fell by -1.7 million bbl, a larger draw than the expected -1.2 million bbl, and crude supplies at Cushing, the delivery point for WTI futures, fell by -1.49 million bbl. On the bearish side, EIA crude inventories unexpectedly rose +3.85 million bbl versus expectations for a -2.7 million bbl draw, and EIA gasoline supplies rose by +4.19 million bbl, a larger build than the expected +900,000 bbl.
Wednesday's EIA report also showed that (1) US crude oil inventories as of June 27 were -9.3% below the seasonal 5-year average, (2) gasoline inventories were -0.7% below the seasonal 5-year average, and (3) distillate inventories were -21.0% below the seasonal 5-year average. US crude oil production in the week ending June 27 was unchanged w/w at 13.433 million bpd, modestly below the record high of 13.631 million bpd from the week of December 6.

Baker Hughes reported Thursday that active US oil rigs in the week ending July 4 fell by -7 to a 3.75-year low of 425 rigs. Over the past 2.5 years, the number of US oil rigs has fallen sharply from the 5.25-year high of 627 rigs reported in December 2022.

On the date of publication, Rich Asplund did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes.