Latest news with #JoshWolfe
Yahoo
3 days ago
- Business
- Yahoo
Could The Data Center Bubble Be About To Pop? Lux Capital Heavyweight Sees Warning Signs
Data centers have been one of the most pleasant surprises in the real estate sector, generating strong returns for real estate investment trust investors over the last several years. The massive facilities are mission-critical pieces of AI infrastructure, which explains why many of the world's biggest tech companies have multi-billion-dollar data center investments. However, Lux Capital partner Josh Wolfe is concerned that the data center sector is showing signs of a bubble ready to pop. Data center spending is on pace to exceed $405 billion in 2025, a 23% increase over 2024. The sector used to be dominated by data center REITs like Equinix (NASDAQ: EQIX) and Digital Realty Trust (NYSE: DLR), but they now face competition from tech titans like Microsoft (NASDAQ: MSFT), Amazon (NASDAQ: AMZN), and Meta (NASDAQ: META), which would rather own and operate facilities than rent them in perpetuity.

Runaway data center construction drives AI's ever-expanding capabilities, but Wolfe believes it may also be creating "irrational" demand. Speaking at the Axios AI Summit, he said he sees parallels to tech booms of the past. Cloud computing and fiber-optic network investments created plenty of millionaires in the 1990s and early 2000s, but Wolfe remembers that many of those investors were left holding the bag when production outpaced demand. "I think that you're going to have the same phenomenon now," Wolfe said. He noted the potential danger of groupthink in the tech sector, where building data centers seems prudent for any individual company; his fear is that multiple companies building hyperscale data centers simultaneously "collectively becomes irrational." "It will not necessarily persist," he warned.

Wolfe also thinks the potential fallout from the bubble popping extends to other sectors. Data centers consume incredible amounts of power, which has driven rapid growth in the nuclear energy sector. "One take that is related to that is the demands for energy, which is presumed that, because you need all these data centers," Wolfe said. "Then you need small modular reactors, and so you're getting speculative capital that's going into the energy provision therein."

Wolfe's warning is reminiscent of then-Federal Reserve Chair Alan Greenspan's worry over "irrational exuberance" in the marketplace. It's a cycle that has played itself out for almost as long as people have been investing: a sector gets hot, which causes more investor capital to flow in until, suddenly, the sector is oversaturated. During these hot cycles, money continues flowing in long after the best deals have been snapped up. When overheated markets correct, what looked like a "can't miss" investment a few months ago suddenly becomes a white elephant, and capital floods out as all the investors head for the exit at the same time. Wolfe has seen these cycles come and go throughout his career. "I think that that whole (data center) thing is going to end in disaster, mostly because, as clichéd as it is, history doesn't repeat. It rhymes," he said at the Axios AI Summit.


The Guardian
10-06-2025
- Science
- The Guardian
When billion-dollar AIs break down over puzzles a child can do, it's time to rethink the hype
A research paper by Apple has taken the tech world by storm, all but eviscerating the popular notion that large language models (LLMs, and their newest variant, LRMs, large reasoning models) are able to reason reliably. Some are shocked by it, some are not. The well-known venture capitalist Josh Wolfe went so far as to post on X that 'Apple [had] just GaryMarcus'd LLM reasoning ability' – coining a new verb (and a compliment to me), referring to 'the act of critically exposing or debunking the overhyped capabilities of artificial intelligence … by highlighting their limitations in reasoning, understanding, or general intelligence'. Apple did this by showing that leading models such as ChatGPT, Claude and DeepSeek may 'look smart – but when complexity rises, they collapse'. In short, these models are very good at a kind of pattern recognition, but often fail when they encounter novelty that forces them beyond the limits of their training, despite being, as the paper notes, 'explicitly designed for reasoning tasks'. As discussed later, there is a loose end that the paper doesn't tie up, but on the whole its force is undeniable. So much so that LLM advocates are already partly conceding the blow while hinting at, or at least hoping for, happier futures ahead.

In many ways the paper echoes and amplifies an argument that I have been making since 1998: neural networks of various kinds can generalise within a distribution of data they are exposed to, but their generalisations tend to break down beyond that distribution. A simple example of this is that I once trained an older model to solve a very basic mathematical equation using only even-numbered training data. The model was able to generalise a little: it could solve for even numbers it hadn't seen before, but it was unable to do so for problems where the answer was an odd number. More than a quarter of a century later, when a task is close to the training data, these systems work pretty well. But as they stray further from that data, they often break down, as they did in the Apple paper's more stringent tests. Such limits arguably remain the single most serious weakness in LLMs.

The hope, as always, has been that 'scaling' the models by making them bigger would solve these problems. The new Apple paper resoundingly rebuts these hopes. Its authors challenged some of the latest, greatest, most expensive models with classic puzzles, such as the Tower of Hanoi – and found that deep problems lingered. Combined with numerous hugely expensive failures in efforts to build GPT-5-level systems, this is very bad news.

The Tower of Hanoi is a classic game with three pegs and multiple discs, in which you need to move all the discs from the left peg to the right peg, never stacking a larger disc on top of a smaller one. With practice, a bright (and patient) seven-year-old can do it. What Apple found was that leading generative models could barely handle seven discs, getting less than 80% accuracy, and could hardly get scenarios with eight discs correct at all. It is truly embarrassing that LLMs cannot reliably solve Hanoi. And, as the paper's co-lead-author Iman Mirzadeh told me via DM, 'it's not just about 'solving' the puzzle. We have an experiment where we give the solution algorithm to the model, and [the model still failed] … based on what we observe from their thoughts, their process is not logical and intelligent'.
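To see why Marcus calls this embarrassing: the Tower of Hanoi has a textbook recursive solution that a conventional program executes flawlessly, and an n-disc puzzle takes exactly 2^n - 1 moves, so eight discs means only 255 moves. Here is a minimal sketch in Python; the function name and the eight-disc example are illustrative, not the exact algorithm text or prompt format used in the Apple paper, which is not reproduced here.

```python
def hanoi(n, source, target, spare, moves):
    """Append the optimal move sequence for an n-disc Tower of Hanoi to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # park the n-1 smaller discs on the spare peg
    moves.append((source, target))              # move the largest remaining disc
    hanoi(n - 1, spare, target, source, moves)  # re-stack the smaller discs on top of it

moves = []
hanoi(8, "left", "right", "middle", moves)
print(len(moves))  # 255, i.e. 2**8 - 1: every move generated without error
```

Something along these lines is presumably what Mirzadeh means by giving 'the solution algorithm to the model': a procedure short enough to fit in a prompt, which an ordinary computer has been able to execute perfectly for decades.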
The new paper also echoes and amplifies several arguments that Arizona State University computer scientist Subbarao Kambhampati has been making about the newly popular LRMs. He has observed that people tend to anthropomorphise these systems, to assume they use something resembling 'steps a human might take when solving a challenging problem'. And he has previously shown that in fact they have the same kind of problem that Apple documents. If you can't use a billion-dollar AI system to solve a problem that Herb Simon (one of the actual godfathers of AI) solved with classical (but out of fashion) AI techniques in 1957, the chances that models such as Claude or o3 are going to reach artificial general intelligence (AGI) seem truly remote.

So what's the loose thread that I warned you about? Well, humans aren't perfect either. On a puzzle like Hanoi, ordinary humans actually have a bunch of (well-known) limits that somewhat parallel what the Apple team discovered. Many (not all) humans screw up on versions of the Tower of Hanoi with eight discs. But look, that's why we invented computers, and for that matter calculators: to reliably compute solutions to large, tedious problems. AGI shouldn't be about perfectly replicating a human; it should be about combining the best of both worlds: human adaptiveness with computational brute force and reliability. We don't want an AGI that fails to 'carry the one' in basic arithmetic just because humans sometimes do.

Whenever people ask me why I actually like AI (contrary to the widespread myth that I am against it), and think that future forms of AI (though not necessarily generative AI systems such as LLMs) may ultimately be of great benefit to humanity, I point to the advances in science and technology we might make if we could combine the causal reasoning abilities of our best scientists with the sheer compute power of modern digital computers.

What the Apple paper shows, most fundamentally, regardless of how you define AGI, is that the LLMs that have generated so much hype are no substitute for good, well-specified conventional algorithms. (They also can't play chess as well as conventional algorithms, can't fold proteins like special-purpose neurosymbolic hybrids, can't run databases as well as conventional databases, etc.) What this means for business is that you can't simply drop o3 or Claude into some complex problem and expect them to work reliably. What it means for society is that we can never fully trust generative AI; its outputs are just too hit-or-miss. One of the most striking findings in the new paper was that an LLM may well work on an easy test set (such as Hanoi with four discs) and seduce you into thinking it has built a proper, generalisable solution when it has not.

To be sure, LLMs will continue to have their uses, especially for coding and brainstorming and writing, with humans in the loop. But anybody who thinks LLMs are a direct route to the sort of AGI that could fundamentally transform society for the good is kidding themselves.

This essay was adapted from Gary Marcus's newsletter, Marcus on AI. Gary Marcus is a professor emeritus at New York University, the founder of two AI companies, and the author of six books, including Taming Silicon Valley.


India Today
09-06-2025
- India Today
Apple researchers say models like ChatGPT o3 look smart but collapse when faced with real complexity
They may talk the talk, but can they truly think it through? A new study by Apple researchers suggests that even the most advanced AI models like ChatGPT o3, Claude, and DeepSeek start to unravel when the going gets tough. These so-called 'reasoning' models may impress with confident answers and detailed explanations, but when faced with genuinely complex problems, they stumble – and sometimes fall flat.

Apple researchers have found that the most advanced large language models today may not be reasoning in the way many believe. In a recently released paper titled The Illusion of Thinking, researchers at Apple show that while these models appear intelligent on the surface, their performance dramatically collapses when they are faced with truly complex problems.

The study looked at a class of models now referred to as Large Reasoning Models (LRMs), which are designed to "think" through complex tasks using a series of internal steps, often called a 'chain of thought.' This includes models like OpenAI's o3, DeepSeek-R1, and Claude 3.7 Sonnet Thinking. Apple's researchers tested how these models handle problems of increasing difficulty – not just whether they arrive at the correct answer, but how they reason their way there.

The findings were striking. As problem complexity rose, the models' performance did not degrade gracefully – it collapsed completely. 'They think more up to a point,' tweeted venture capitalist Josh Wolfe, referring to the findings. 'Then they give up early, even when they have plenty of compute left.'

Apple's team built custom puzzle environments such as the Tower of Hanoi, River Crossing, and Blocks World to carefully control complexity levels. These setups allowed them to observe not only whether the models found the right answer, but how they tried to get there. They found that:
- At low complexity, traditional LLMs (without reasoning chains) performed better and were more efficient
- At medium complexity, reasoning models briefly took the lead
- At high complexity, both types failed completely

Even when given a step-by-step algorithm for solving a problem, so that they only needed to follow instructions, the models still made critical mistakes. This suggests that they struggle not only with creativity or problem-solving, but with basic logical execution.

The models also showed odd behaviour when it came to how much effort they put in. Initially, they 'thought' more as the problems got harder, using more tokens for reasoning steps. But once a certain threshold was reached, they abruptly started thinking less. This happened even when they hadn't hit any computational limits, highlighting what Apple calls a 'fundamental inference time scaling limitation.'

Cognitive scientist Gary Marcus said the paper supports what he's been arguing for decades: these systems don't generalise well beyond their training data. 'Neural networks can generalise within a training distribution of data they are exposed to, but their generalisation tends to break down outside that distribution,' Marcus wrote on Substack. He also noted that the models' 'reasoning traces' – the steps they take to reach an answer – can look convincing, but often don't reflect what the models actually did to reach a conclusion.

Marcus also points out that Apple's findings echo the work of Arizona State University's Subbarao (Rao) Kambhampati, whose previous research has critiqued so-called reasoning models. Rao has shown that models often appear to think logically but actually produce answers that don't match their thought process. Apple's experiments back this up by showing models generate long reasoning paths that still lead to the wrong answer, particularly as problems get harder.

But the most damning evidence came when Apple tested whether models could follow exact instructions. In one test, they were handed the algorithm to solve the Tower of Hanoi puzzle and asked to just execute it. The models still failed once the puzzle complexity passed a certain point.

The conclusion is blunt: today's top models are 'super expensive pattern matchers' that can mimic reasoning only within familiar settings. The moment they're faced with novel problems – ones just outside their training data – they fall apart. These findings have serious implications for claims that AI is becoming capable of human-like reasoning. As the paper puts it, the current approach may be hitting a wall, and overcoming it could require an entirely different way of thinking about how we build intelligent systems. In short, we are still leaps away from AGI.


Axios
05-06-2025
- Business
- Axios
Venture capital firm seeks to offset some Trump research cuts
Many venture capitalists are alarmed by the Trump administration's cuts to academic research, at Harvard and beyond, given that such funding has helped forge foundational technologies like the internet and gene editing. Lux Capital wants to help fill the void.

Driving the news: Lux, whose portfolio includes Anduril and Databricks, last month carved $100 million out of existing funds to back stranded scientists. It refers to the effort as a "helpline" for researchers at a crossroads, including those who haven't viewed themselves as entrepreneurs or who feel their work is too early for commercialization. In some cases this could mean forming and funding a de novo startup. In others, it could mean having an existing Lux portfolio company sponsor, license, or acquire ongoing research.

What they're saying: "You look at so many of our publicly traded companies that come from academically derived science, from Genentech ... to Google," explained Lux co-founder Josh Wolfe, during an on-stage conversation in New York City at the Axios AI+ Summit. "It's absolutely critical — probably 10 or 15% of our portfolio are things totally derived from university research." "The sort of sledgehammer approach as opposed to a surgical approach is hurting American science ... [and] national security," he added.

Behind the scenes: Lux's endeavor began when its partners started getting inundated with calls for advice from friends and peers. Eventually, a decision was made to take it national.

Look ahead: The big question now is whether other VC firms will do something similar, either in partnership with Lux or on their own, particularly the small cohort of investors who have cheered the cuts on social media, saying private industry can make up the difference. At the very least, they could put their money where their mouths are.


Axios
04-06-2025
- Business
- Axios
Data center boom may end up being "irrational," investor warns
A rush by large tech players to build so many data centers for AI may end up being "irrational," and there's particular risk in building small nuclear reactors to run them, a prominent tech investor warned Wednesday.

Why it matters: Big Tech has been investing billions of dollars into data centers and energy sources to power them. Just this week, Meta announced a deal to buy the power from an operating traditional nuclear station in Illinois that was set to retire in 2027.

Zoom in: Speaking at Axios' AI Summit in New York, Lux Capital co-founder and partner Josh Wolfe compared the build-out of data center infrastructure to previous bubbles in fiber-optic networking and cloud computing. "I think that you're going to have the same phenomenon now," said Wolfe, whose firm Lux backs deeptech and science startups across sectors like AI, defense, and biotech. What any one individual hyperscaler is doing to build out infrastructure is rational, but "collectively becomes irrational," said Wolfe. "It will not necessarily persist."

The intrigue: Wolfe raised a flag specifically on the build-out of the power infrastructure for these data centers. "One take that is related to that is the demands for energy, which is presumed that, because you need all these data centers, then you need small modular reactors, and so you're getting speculative capital that's going into the energy provision therein," Wolfe said. "So I think that that whole thing is going to end in disaster, mostly because as cliched as it is, history doesn't repeat. It rhymes."