
IBM's Vision For A Large-Scale Fault-Tolerant Quantum Computer By 2029
IBM's vision for its large-scale fault-tolerant Starling quantum computer
IBM has just made a major announcement about its plans to achieve large-scale quantum fault tolerance before the end of this decade. Based on the company's new quantum roadmap, by 2029 IBM expects to be able to run accurate quantum circuits with hundreds of logical qubits and hundreds of millions of gate operations. If all goes according to plan, this stands to be an accomplishment with sweeping effects across the quantum market — and potentially for computing as a whole.
In advance of this announcement, I received a private briefing from IBM and engaged in detailed correspondence with some of its quantum researchers for more context. (Note: IBM is an advisory client of my firm, Moor Insights & Strategy.) The release of the new roadmap offers a good opportunity to review what IBM has already accomplished in quantum, how it has adapted its technical approach to achieve large-scale fault tolerance and how it intends to implement the milestones of its revised roadmap across the next several years.
Let's dig in.
First, we need some background on why fault tolerance is so important. Today's quantum computers have the potential, but not yet the broader capability, to solve complex problems beyond the reach of our most powerful classical supercomputers. The current generation of quantum computers is fundamentally limited by high error rates that are difficult to correct and that prevent complex quantum algorithms from running at scale. While there are numerous challenges being tackled by quantum researchers around the world, there is broad agreement that these error rates are a major hurdle to be cleared.
In this context, it is important to understand the difference between fault tolerance and quantum error correction. QEC uses specialized measurements to detect errors in encoded qubits. Although QEC is also a core mechanism within fault tolerance, it can only go so far on its own. Without fault-tolerant circuit designs in place, errors that occur during operations or even in the correction process can spread and accumulate, making it exponentially more difficult for QEC alone to maintain logical qubit integrity.
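To make the detection idea concrete, here is a toy illustration in plain Python rather than on quantum hardware: a classical three-bit repetition code whose parity checks flag a single bit flip without reading out the encoded value. It is a deliberately simplified analogue of QEC's "specialized measurements," not IBM's codes.

```python
# Toy classical analogue of error detection: a 3-bit repetition code.
# Parity checks ("syndrome measurements") reveal where a single flip occurred
# without exposing the encoded value itself.
import random

def encode(bit):
    return [bit, bit, bit]                        # one logical bit -> three physical bits

def apply_noise(codeword, p=0.1):
    return [b ^ (random.random() < p) for b in codeword]   # random bit flips

def syndrome(codeword):
    # Check parities of pairs (0,1) and (1,2); (0,0) means "no error detected".
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def correct(codeword):
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(codeword))
    if flip is not None:
        codeword[flip] ^= 1                       # undo the single detected flip
    return codeword

noisy = apply_noise(encode(1))
print("syndrome:", syndrome(noisy))
corrected = correct(noisy)
print("decoded bit (majority vote):", max(set(corrected), key=corrected.count))
```

Real QEC operates on quantum states and must handle phase flips as well as bit flips, which is exactly where the fault-tolerant circuit designs discussed next become essential.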
Reaching well beyond QEC, fault-tolerant quantum computing is a very large and complex engineering challenge that takes a system-wide approach to errors. FTQC not only protects individual computational qubits from errors, but also systematically prevents errors from spreading. It achieves this by employing clever fault-tolerant circuit designs and by operating below the system's noise threshold: the maximum level of errors the system can handle and still function correctly. Achieving the reliability of FTQC also requires more qubits.
FTQC can lower logical error rates far more efficiently than QEC alone. Once a system operates below its noise threshold, each further reduction in the logical error rate requires only a modest polynomial increase in the number of qubits and gates needed for the overall computation. Despite its complexity, this favorable scaling makes fault tolerance an appealing and important method for improving quantum error rates.
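A quick numerical sketch helps show why that polynomial overhead is so attractive. The figures below are a generic illustration of below-threshold scaling; the heuristic formula, the assumed physical error rate and the assumed threshold are mine, not IBM's. The logical error rate falls off exponentially as the code distance grows, while the qubit count grows only polynomially.

```python
# Illustrative below-threshold scaling, not IBM data.
# Assumptions: physical error rate p = 1e-3, threshold p_th = 1e-2, and a
# surface-code-style layout where qubit count grows roughly as d^2.
p, p_th = 1e-3, 1e-2

for d in (3, 5, 7, 9, 11):                                # code distance
    logical_error = 0.1 * (p / p_th) ** ((d + 1) // 2)    # common heuristic scaling law
    physical_qubits = 2 * d * d                            # order-of-magnitude qubit count
    print(f"d={d:2d}  ~{physical_qubits:4d} physical qubits  logical error ~ {logical_error:.0e}")
```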
IBM's first quantum roadmap, released in 2020
Research on fault tolerance goes back several decades. IBM began a serious effort to build a quantum computer in the late 1990s, collaborating with several leading universities on a two-qubit machine capable of running a small quantum algorithm. Continuing fundamental research eventually led to the 2016 launch of the IBM Quantum Experience, featuring a five-qubit superconducting quantum computer accessible via the cloud.
IBM's first quantum roadmap, released in 2020 (see the image above), detailed the availability of the company's 27-qubit Falcon processor in 2019 and outlined plans for processors with a growing number of qubits in each of the subsequent years. The roadmap concluded with the projected development in 2023 of a research-focused processor, the 1,121-qubit Condor, that was never made available to the public.
However, as IBM continued to scale its qubit counts and explore error correction and error mitigation, it became clear to its researchers that monolithic processors would be insufficient to achieve the long-term goal of fault-tolerant quantum computing. To achieve fault tolerance with quantum low-density parity-check codes (much more on qLDPC below), IBM knew it had to overcome three major engineering issues.
This helps explain why fault tolerance is such a large and complex endeavor, and why monolithic processors were not enough. Achieving all of this would require that modularity be designed into the system.
IBM's shift to modular architecture first appeared in its 2022 roadmap, which introduced the multi-chip processors Crossbill and Flamingo for 2024. Crossbill, a 408-qubit processor, demonstrated the first application of short-range coupling, while Flamingo, a 1,386-qubit processor, was the first to use long-range coupling.
For more background on couplers, I previously wrote a detailed Forbes.com article explaining why IBM needed modular processors and tunable couplers. Couplers play an important role in IBM's current and future fault-tolerant quantum computers. They allow qubit counts to be scaled without the difficulty, expense and additional time required to fabricate larger chips. Couplers also provide architectural and design flexibility. Short-range couplers provide chip-to-chip parallelization by extending IBM's heavy-hex lattice across multiple chips, while long-range couplers use cables to connect modules so that quantum information can be shared between processors.
A year later, in 2023, IBM scientists made an important breakthrough by developing a more reliable way to store quantum information using qLDPC codes. These are also called bivariate bicycle codes, and you will often hear IBM's version referred to as the gross code because it encodes 12 logical qubits into a gross (144) of physical code qubits plus 144 ancilla qubits, for a total of 288 physical qubits devoted to error correction.
Previously, surface code was the go-to error-correction code for superconducting qubits because it tolerates relatively high error rates, scales well, requires only nearest-neighbor connectivity and protects qubits against both bit-flip and phase-flip errors. IBM has verified that its qLDPC code performs error correction just as effectively and efficiently as surface code, but with one significant advantage: it needs only about one-tenth as many physical qubits. (More details on that below.)
This brings us to today's state of the art for IBM quantum. Currently, IBM has a fleet of quantum computers available over the cloud and at client sites, many of which are equipped with 156-qubit Heron processors. According to IBM, Heron is its highest-performing quantum processor to date; it powers the IBM Quantum System Two and is available in other systems as well.
IBM 2025 quantum innovation roadmap, showing developments from 2016 to 2033 and beyond
IBM's new quantum roadmap shows several major developments on the horizon. In 2029, after many years of research and experimentation, IBM expects to be the first organization to deliver what has long been the elusive goal of the entire quantum industry: a large-scale fault-tolerant quantum computer. By 2033, IBM believes it will be able to build a quantum-centric supercomputer running thousands of logical qubits and on the order of a billion gates.
Before we go further into specifics about the milestones that IBM projects for this new roadmap, let's dig a little deeper into the technical breakthroughs enabling this work.
As mentioned earlier, one key breakthrough IBM has made comes in its use of gross code (qLDPC) for error correction, which is much more efficient than surface code.
Comparison of surface code versus qLDPC error rates
The above chart shows the qLDPC physical and logical error rates (diamonds) compared to two different surface code error rates (stars). The qLDPC code uses a total of 288 physical qubits (144 physical code qubits and 144 check qubits) to create 12 logical qubits (red diamond). As illustrated in the chart, one instance of surface code requires 2,892 physical qubits to create 12 logical qubits (green star) and the other version of surface code requires 4,044 physical qubits to create 12 logical qubits (blue star). It can be easily seen that qLDPC code uses far fewer qubits than surface code yet produces a comparable error rate.
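Working out the overhead per logical qubit from those quoted figures makes the gap obvious. The short sketch below simply divides the physical-qubit counts in the chart by the 12 logical qubits each code produces.

```python
# Physical-qubit overhead per logical qubit, using the counts quoted above.
codes = {
    "qLDPC (gross code)": 288,
    "surface code (variant A)": 2892,
    "surface code (variant B)": 4044,
}
for name, physical in codes.items():
    print(f"{name:26s} {physical:5d} physical qubits -> {physical / 12:5.0f} per logical qubit")
# 24 versus roughly 241-337 physical qubits per logical qubit: about a tenfold saving.
```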
Connectivity between the gross code and the LPU
Producing a large number of logical and physical qubits with low error rates is impressive; indeed, as explained earlier, large numbers of physical qubits with low error rates are necessary to encode and scale logical qubits. But what really matters is the ability to successfully run gates. Gates are necessary to manipulate qubits and create superpositions, entanglement and operational sequences for quantum algorithms. So, let's take a closer look at that technology.
Running gates with qLDPC codes requires an additional set of physical qubits known as a logical processing unit. The LPU has approximately 100 physical qubits and adds about 35% of ancilla overhead per logical qubit to the overall code. (If you're curious, a similar low to moderate qubit overhead would also be required for surface code to run gates.) LPUs are physically attached to qLDPC quantum memory (gross code) to allow encoded information to be monitored. LPUs can also be used to stabilize logical computations such as Clifford gates (explained below), state preparations and measurements. It is worth mentioning that the LPU itself is fault-tolerant, so it can continue to operate reliably even with component failures or errors.
IBM already understands the detailed connectivity required between the LPU and gross code. For simplification, the drawing of the gross code on the left above has been transformed into a symbolic torus (doughnut) in the drawing on the right; that torus has 12 logical qubits consisting of approximately 288 physical qubits, accompanied by the LPU. (As you look at the drawings, remember that 'gross code' and 'bivariate bicycle code' are two terms for the same thing.) The drawing on the right appears repeatedly in the diagrams below, and it will likely appear in future IBM documents and discussions about fault tolerance.
The narrow rectangle at the top of the right-hand configuration is called a 'bridge' in IBM research papers. Its function is to couple one unit to a neighboring unit with 'L-couplers.' It makes the circuits fault-tolerant inside the LPU, and it acts as a natural connecting point between modules. These long-distance couplers, about a meter in length, are used for Bell pair generation, a method that entangles logical qubits across modules.
So what happens when several of these units are coupled together?
IBM fault-tolerant quantum architecture
Above is a generalized configuration of IBM's future fault-tolerant architecture. As mentioned earlier, each torus contains 12 logical qubits created by the gross code through the use of approximately 288 physical qubits. So, for instance, if a quantum computer were designed to run 96 logical qubits, it would be equipped with eight torus code blocks (8 x 12 = 96), which would require a total of approximately 2,304 physical qubits (8 x 288) plus eight LPUs.
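The same arithmetic generalizes to any target machine size. Here is a back-of-the-envelope resource estimator built only from the per-block numbers used in this article (12 logical qubits and roughly 288 physical qubits per gross-code block, plus an LPU of about 100 physical qubits per block); it is my own sketch, not an IBM sizing tool.

```python
import math

# Approximate per-block figures quoted in the article.
LOGICAL_PER_BLOCK = 12
PHYSICAL_PER_BLOCK = 288
LPU_QUBITS_PER_BLOCK = 100

def estimate(target_logical_qubits):
    blocks = math.ceil(target_logical_qubits / LOGICAL_PER_BLOCK)
    return {
        "gross-code blocks": blocks,
        "physical code qubits": blocks * PHYSICAL_PER_BLOCK,
        "physical LPU qubits (approx.)": blocks * LPU_QUBITS_PER_BLOCK,
    }

print(estimate(96))   # 8 blocks, ~2,304 code qubits, ~800 LPU qubits
```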
Two special classes of quantum operations are needed for quantum computers to run all the necessary algorithms and perform error correction: Clifford gates and non-Clifford gates. Clifford gates, named after the 19th-century British mathematician William Kingdon Clifford, keep errors in a form that the error-correction code can track and fix, and they limit how far errors can spread, which makes them well suited for FTQC. Reliability is critical for practical fault-tolerant quantum systems, so running Clifford gates helps ensure accurate computations. The other necessary class of operations is non-Clifford gates, particularly T-gates.
A quantum computer needs both categories of gates to perform universal tasks such as chemistry simulations, factoring large numbers and other complex algorithms. However, there is a trick to using both of these operations together. Even though T-gates are important, they break the symmetry that Clifford-based error correction relies on. That's where the 'magic state factory' comes in: it implements non-Clifford operations (T-gates) by producing a stream of so-called magic states that are consumed alongside Clifford gates. In that way, the quantum computer can maintain its computational power and its fault tolerance.
IBM's research has proven it can run fault-tolerant logic within the stabilizer (Clifford) framework. However, without the extra non‑Clifford gates, a quantum computer would not be able to execute the full spectrum of quantum algorithms.
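One way to picture the division of labor is to sort a circuit's gates into the two categories. The toy example below uses standard gate names, but the circuit itself is invented for illustration; in a real fault-tolerant machine, each gate in the second list would consume a magic state produced by the factory.

```python
# Sorting an (invented) circuit into Clifford and non-Clifford gates.
CLIFFORD_GATES = {"h", "s", "sdg", "x", "y", "z", "cx", "cz", "swap"}

circuit = ["h", "cx", "t", "s", "cx", "tdg", "h", "t"]

clifford = [g for g in circuit if g in CLIFFORD_GATES]
non_clifford = [g for g in circuit if g not in CLIFFORD_GATES]

print("Clifford gates (run natively, fault-tolerantly):", clifford)
print("non-Clifford gates (each needs a magic state):  ", non_clifford)
```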
IBM fault-tolerant quantum roadmap
Now let's take a closer look at the specific milestones in IBM's new roadmap that will take advantage of the breakthroughs explained above, and how the company plans to create a large-scale fault-tolerant quantum computer within this decade.
IBM expects to begin fabricating and testing the Loon processor sometime this year. The Loon will use two logical qubits and approximately 100 physical qubits. Although the Loon will not use the gross code, it will use a smaller code with similar hardware requirements.
IBM has drawn on its past four-way coupler research to develop and test a six-way coupler using a central qubit connected through tunable couplers to six neighboring qubits, a setup that demonstrates low crosstalk and high fidelity between connections. IBM also intends to demonstrate the use of 'c-couplers' to connect Loon qubits to non-local qubits. Couplers up to 16 mm in length have been tested, with a goal of increasing that length to 20 mm. Longer couplers allow connections to be made across more areas of the chip. So far, the longer couplers have also maintained low error rates and acceptable coherence times, in the range of several hundred microseconds.
In this phase of the roadmap, IBM plans to test one full unit of the gross code, long c-couplers and real-time decoding. IBM also plans a demonstration of quantum advantage in 2026 via its Nighthawk platform (the successor to Heron) combined with HPC.
The Cockatoo design employs two blocks of gross code connected to LPUs to create 24 logical qubits, using approximately 288 physical qubits per block (roughly 576 in total). In this phase, IBM aims to test L-couplers and module-to-module communications capability. IBM also plans to test Clifford gates between the two code blocks, giving it the ability to perform computations, though not yet universal computations.
A year later, the Starling processor should be equipped with approximately 200 logical qubits. Required components, including magic state distillation, will be tested. Although only two blocks of gross code are shown in the illustrative diagram above, the Starling will in fact require about 17 blocks of gross code, with each block connected to an LPU.
The estimated size of IBM's 2029 large-scale fault-tolerant Starling quantum computer in a datacenter setting, with human figures included for size comparison
This is the year IBM plans to deliver the industry's first large‑scale, fault‑tolerant quantum computer — equipped with approximately 200 logical qubits and able to execute 100 million gate operations. A processor of this size will have approximately 17 gross code blocks equipped with LPUs and magic state distillation.
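Those Starling figures are consistent with the per-block numbers used throughout this article, as a quick sanity check shows (the LPU qubit count is my approximation).

```python
# Cross-checking the Starling estimate: 17 gross-code blocks.
blocks = 17
print("logical qubits      :", blocks * 12)    # 204, i.e. "approximately 200"
print("physical code qubits:", blocks * 288)   # roughly 4,900
print("LPU qubits (approx.):", blocks * 100)   # roughly 1,700 more for logical processing
```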
IBM expects that quantum computers during this period will run billions of gates on several thousand circuits to demonstrate the full power and potential of quantum computing.
IBM milestones in its roadmap for large-scale, fault-tolerant quantum computers
Although there have been a number of significant quantum computing advancements in recent years, building practical, fault-tolerant quantum systems has been — and still remains — a significant challenge. Up until now, this has largely been due to a lack of a suitable method for error correction. Traditional methods such as surface code have important benefits, but limitations, too. Surface code, for instance, is still not a practical solution because of the large numbers of qubits required to scale it.
IBM has overcome surface code's scaling limitation through the development of its qLDPC codes, which require only a tenth of the physical qubits needed by surface code. The qLDPC approach has allowed IBM to develop a workable architecture for a near-term, fully fault-tolerant quantum computer. IBM has also achieved other important milestones such as creating additional layers in existing chips to allow qubit connections to be made on different chip planes. Tests have shown that gates using the new layers are able to maintain high quality and low error rates in the range of existing devices.
Still, there are a few areas in need of improvement. Existing error rates are around 3x10^-3, which will need to improve to accommodate advanced applications. IBM is also working on extending coherence times. Using isolated test devices, IBM has determined that coherence is running between one and two milliseconds, and up to four milliseconds in some cases. Since it appears to me that future utility-scale algorithms and magic state factories will need between 50,000 and 100,000 gates between resets, further improvement in coherence may be required.
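A rough feasibility check illustrates why. The per-gate times below are my own assumptions for superconducting hardware, not IBM figures, but even the optimistic case puts the required run length well beyond today's one-to-four-millisecond coherence window.

```python
# Rough check: circuit time for 50,000-100,000 gates at assumed per-gate times.
for gates in (50_000, 100_000):
    for gate_time_us in (0.1, 1.0):            # assumed physical gate/cycle time, microseconds
        runtime_ms = gates * gate_time_us / 1000
        print(f"{gates:,} gates at {gate_time_us} us/gate -> ~{runtime_ms:.0f} ms of circuit time")
# Compare with the 1-4 ms coherence times measured on isolated test devices.
```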
As stated earlier, IBM's core strategy relies on modularity and scalability. The incremental improvement of its processors through the years has allowed IBM to progressively develop and test its designs to incrementally increase the number of logical qubits and quantum operations — and, ultimately, expand quantum computing's practical utility. Without IBM's extensive prior research and its development of qLDPC for error correction, estimating IBM's chance for success would largely be guesswork. With it, IBM's plan to release a large-scale fault-tolerant quantum computer in 2029 looks aggressive but achievable.