Latest news with #GB200


Globe and Mail
20 hours ago
- Business
- Globe and Mail
NVIDIA Leads in Data Center GPU Market: Will Blackwell Keep It Ahead?
NVIDIA Corporation NVDA continues to dominate the data center market, driven by its latest Blackwell graphics processing unit (GPU) architecture. In the first quarter of fiscal 2026, the company generated $39.1 billion in revenues from the data center market, a 73% year-over-year increase. On the last-quarter earnings call, NVDA stated that Blackwell now contributes nearly 70% of the data center segment's compute revenues, with demand led by artificial intelligence (AI) factories and the rise of advanced reasoning models.

NVIDIA's Blackwell platform, particularly the GB200, is built for large-scale AI inference. On the last earnings call, management stated that major cloud players are rapidly deploying Blackwell GPUs, with each hyperscaler deploying nearly 72,000 GPUs per week. With stronger manufacturing yields and expanded availability, Blackwell has become NVIDIA's fastest product ramp in history.

NVIDIA is also preparing to ship its next-gen GB300 in the calendar third quarter of 2025. With increased high-bandwidth memory and an efficient drop-in design, the GB300 chip promises a 50% performance boost over the GB200. Early sampling has already begun at major cloud service providers.

It's not only hardware that has contributed to Blackwell's success. NVIDIA's software ecosystem, including CUDA, NeMo and its inference microservices, lets developers fully utilize Blackwell's potential. This deep integration makes switching away from NVIDIA harder for customers.

As the AI wave grows and more companies build AI factories globally, NVIDIA's lead could strengthen. If Blackwell maintains its current pace and NVIDIA continues to support it with a strong ecosystem, the company's leadership in data centers is likely to continue. Our model estimates indicate that the company's revenues from the data center end-market will grow at a CAGR of 30.3% from fiscal 2025 through fiscal 2028.
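The cited 30.3% CAGR can be turned into an implied revenue multiplier with the standard compound-growth formula; the function and base value below are illustrative sketches, not Zacks' actual model.

```python
# Implied growth multiplier from a 30.3% CAGR over the three fiscal
# years FY2025 -> FY2028 (illustrative, not Zacks' model output).
def project(base: float, cagr: float, years: int) -> float:
    """Compound a base value forward at a constant annual growth rate."""
    return base * (1 + cagr) ** years

multiplier = project(1.0, 0.303, 3)
print(round(multiplier, 2))  # -> 2.21, i.e. revenue roughly 2.2x over three years
```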
NVIDIA Rivals AMD and Intel Up Their Game in AI Data Centers

NVIDIA's major competitors, Advanced Micro Devices AMD and Intel INTC, are also stepping up their capabilities in the data center AI chip market. Advanced Micro Devices' MI300X GPUs are gaining attention for their high memory capacity and power efficiency. Several hyperscalers are testing AMD's solutions as alternatives to NVIDIA's Blackwell, especially for cost-sensitive or specialized workloads. Advanced Micro Devices is also building a strong software stack to win more customers.

Intel is focusing on both CPUs and AI accelerators to gain market share in the data center space. The company is promoting its Gaudi 3 AI chips as a low-cost option for training and inference, and is working with major cloud providers to expand adoption of its AI hardware.

NVIDIA's Price Performance, Valuation and Estimates

Shares of NVIDIA have risen around 31.6% year to date, against the Zacks Computer and Technology sector's gain of 10.9%. From a valuation standpoint, NVDA trades at a forward price-to-earnings ratio of 35.84, higher than the sector's average of 27.86.

The Zacks Consensus Estimate for NVIDIA's fiscal 2026 and 2027 earnings implies year-over-year increases of approximately 42.5% and 32.2%, respectively. Estimates for fiscal 2026 and 2027 have been revised upward in the past 30 days. NVIDIA currently carries a Zacks Rank #3 (Hold).
Yahoo
6 days ago
- Business
- Yahoo
OpenAI and Oracle ink deal to build massive Stargate data center, total project will power 2 million AI chips — Stargate partner SoftBank not involved in the project
Among the concerns raised about the Stargate project, which involves partnerships among OpenAI, Oracle, and SoftBank, was the scarcity of details about infrastructure support. Little by little, the companies have disclosed their intentions, and on Tuesday OpenAI and Oracle announced plans to build an additional 4.5 gigawatts (GW) of Stargate data center infrastructure in the U.S., pushing OpenAI's total planned capacity beyond 5 GW. Interestingly, SoftBank is not involved in financing this buildout, despite being part of the Stargate project.

Under the terms of the plan announced in January, OpenAI, Oracle, and SoftBank intend to build 20 data centers, each measuring 500,000 square feet (46,450 square meters). However, it was unclear how they intended to power them, as U.S. infrastructure does not appear to have enough spare capacity for the additional AI servers, cooling systems, and networking equipment used in AI data centers unless new infrastructure is built. The announced 4.5 GW figure indeed refers primarily to electrical power availability, which is among the limiting factors for AI development these days.

OpenAI claims that the expanded 5 GW of infrastructure will enable its data centers to power over two million AI processors, though it does not disclose whether that capacity is meant to support 1.4 kW Blackwell Ultra processors or 3.6 kW Rubin Ultra processors. If a 5 GW infrastructure were to power only AI GPUs, it could feed 3.571 million Blackwell Ultra or 1.388 million Rubin Ultra GPUs. However, AI accelerators typically consume only about half of a data center's power, even before accounting for power usage effectiveness (PUE), so the actual number of supported GPUs would be lower.
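The back-of-envelope GPU counts above follow directly from dividing the power budget by per-chip draw; the wattages (1.4 kW Blackwell Ultra, 3.6 kW Rubin Ultra) and the roughly-half non-GPU overhead are the article's working assumptions, not official specifications.

```python
# Rough ceiling on GPU count for a given facility power budget,
# using the article's assumed per-chip wattages (not official specs).
def max_gpus(budget_watts: float, gpu_watts: float, gpu_share: float = 1.0) -> int:
    """gpu_share: fraction of facility power that actually reaches the GPUs."""
    return int(budget_watts * gpu_share // gpu_watts)

BUDGET = 5e9  # 5 GW of planned Stargate capacity

# If GPUs alone drew the full budget:
print(max_gpus(BUDGET, 1400))       # Blackwell Ultra @ 1.4 kW -> 3571428 (~3.571M)
print(max_gpus(BUDGET, 3600))       # Rubin Ultra @ 3.6 kW     -> 1388888 (~1.388M)

# If GPUs draw only half the facility's power (before PUE losses):
print(max_gpus(BUDGET, 1400, 0.5))  # -> 1785714
```

The half-budget case shows why the "over two million processors" claim is plausible only with the lighter Blackwell-class chips once overhead is counted.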
The new 4.5 GW-capable facilities may be built in states such as Texas, Michigan, Wisconsin, and Wyoming, though exact locations are still being finalized. This is in addition to an existing site under construction in Abilene, Texas, which OpenAI considers a proof-of-concept facility to ensure its ability to deploy infrastructure at scale and speed. OpenAI believes that lessons learned from Abilene will help with the execution of subsequent sites. Parts of the Abilene facility — Stargate I — are now active as Oracle began installing server racks based on Nvidia's GB200 platform last month. OpenAI has begun utilizing this infrastructure to conduct early-stage AI training and inference tasks as part of its next-generation research initiatives.
Yahoo
7 days ago
- Business
- Yahoo
Elon Musk: 1M Nvidia GPUs? Nah, My Supercomputers Need the Power of 50M
Elon Musk isn't stopping at acquiring 1 million Nvidia GPUs for AI training. The billionaire wants millions more as his startup xAI races to beat the competition on next-generation AI systems. Musk today tweeted that xAI aims for compute power on par with 50 million Nvidia H100 GPUs, the enterprise-grade graphics chip widely used for AI training and running chatbots. "The xAI goal is 50 million in units of H100 equivalent-AI compute (but much better power-efficiency) online within 5 years," he said.

Musk's tweet comes a day after rival Sam Altman, the CEO of OpenAI, wrote in his own post about plans to run "well over 1 million GPUs by the end of this year," with the goal of scaling up compute power by "100x." Meta CEO Mark Zuckerberg, meanwhile, has a similar goal; he wants mega data centers devoted to developing AI superintelligence. These growing AI investments underscore how expensive it is to scale up (and attract top talent).

Musk's tweet doesn't mean he'll try to buy 50 million GPUs, though. The H100 was introduced in 2022, before Nvidia began offering more powerful models, including the GB200, which can reportedly deliver up to a 2.5 times performance boost. Nvidia has also released a roadmap outlining two additional GPU architectures, Rubin and Feynman, which promise more powerful AI chips in the coming years with improved power efficiency. Still, Musk's xAI will likely need to buy millions of Nvidia GPUs to reach his goal.

In the meantime, Musk said in another tweet that xAI's Colossus supercomputer in Memphis, Tennessee, has grown to 230,000 GPUs, including 30,000 Nvidia GB200s. His company is also building a second Colossus data center that will host 550,000 GPUs made up of Nvidia's GB200s and more advanced GB300 chips. This compute power requires enormous amounts of electricity; xAI is using gas turbines at the Colossus site, which environmental groups say are worsening air pollution in Memphis.
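The "H100-equivalent" framing means the 50 million target shrinks as per-chip performance rises; a quick sketch, assuming the article's "reportedly up to 2.5x" GB200 figure (a press number, not a measured benchmark):

```python
import math

# How many newer GPUs deliver a target amount of H100-equivalent compute,
# given a simple per-chip speedup factor. The 2.5x GB200 number is the
# article's "reportedly up to" figure, not an official benchmark.
def gpus_needed(h100_equivalents: int, speedup_vs_h100: float) -> int:
    return math.ceil(h100_equivalents / speedup_vs_h100)

TARGET = 50_000_000  # xAI's stated 5-year goal in H100 equivalents

print(gpus_needed(TARGET, 1.0))  # all H100s           -> 50000000
print(gpus_needed(TARGET, 2.5))  # GB200-class @ ~2.5x -> 20000000
```

Even at the optimistic 2.5x factor, the goal still implies tens of millions of current-generation chips, which is why the article expects Musk to lean on the faster Rubin and Feynman generations.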


Time of India
7 days ago
- Business
- Time of India
Elon Musk's xAI eyes $12 billion debt deal to train Grok on Nvidia chips, launch AI supercluster with 550,000 GPUs; details here
Elon Musk's artificial intelligence venture, xAI, is reportedly planning to raise up to $12 billion in debt as part of an ambitious expansion strategy. The funds will primarily be used to lease Nvidia's advanced chips for developing a large-scale AI data center, according to the Wall Street Journal, which cited sources familiar with the discussions. The financing deal is being spearheaded by Valor Equity Partners, an investment firm led by Musk's close associate Antonio Gracias. Talks are ongoing with multiple lenders to support the chip acquisition. Some of the lenders are seeking repayment within three years and want limits placed on the borrowed amount to reduce their financial risk.

Grok training on 230,000 GPUs and counting

The funds will primarily be used to buy Nvidia chips, which will then be leased to xAI to power the infrastructure for Grok, xAI's AI chatbot. Musk posted on X that Grok is currently being trained on 230,000 graphics processing units (GPUs), including 30,000 of Nvidia's GB200 chips, while inference operations are handled by cloud service providers. "Cable pr0n of @xAI GB200 servers at Colossus 2," Musk tweeted. He also announced the development of a second supercomputing cluster, to be launched soon, which will initially operate with a batch of 550,000 Nvidia GB200 and GB300 chips, marking a significant leap in AI infrastructure.

The $13 billion question: What xAI will spend in 2025

Estimates in the trade press suggest xAI may spend around $13 billion in 2025, indicating the financial scale required to stay competitive in the AI race. The company's efforts to scale AI hardware and computing capabilities put it in direct competition with firms like OpenAI, Alphabet, and China's DeepSeek.
"The @xAI goal is 50 million in units of H100 equivalent-AI compute (but much better power-efficiency) online within 5 years," Musk tweeted. In July, the Financial Times reported that xAI was seeking new funding at a valuation between $170 billion and $200 billion. Musk denied this at the time, stating, "We have more than enough capital."

Musk's AI hardware vision anchored on Nvidia chips

Musk has also stated that xAI's long-term vision involves deploying the equivalent of 50 million Nvidia H100 GPUs over the next five years. The H100 is currently an industry standard for AI training and inference. Musk's plan also calls for better power efficiency than Nvidia's flagship chips, suggesting an effort to both scale and innovate. While Nvidia remains central to AI compute globally, Musk's entry and investment reinforce just how high the stakes are in the AI infrastructure race.