
Latest news with #GPUs

Nvidia Stock: Buy at the Current High?

Globe and Mail

5 hours ago



Nvidia (NASDAQ: NVDA) has proven itself to be at the center of the artificial intelligence (AI) revolution. The company designs the most sought-after AI chips to power the performance of AI models and has expanded into a full range of AI products and services, from networking to enterprise software and even a new compute marketplace offering. All of these efforts have helped Nvidia's earnings roar higher, and the company ended the latest fiscal year at a record revenue level of $130 billion. To further illustrate the pace of growth, investors only have to look back two years, when Nvidia's annual revenue totaled $27 billion. Nvidia clearly has been a winner in this AI boom.

This victory extends to stock price performance, with the shares climbing a jaw-dropping 1,500% over the past five years to reach a new high this week. Now the logical question is: Should you buy Nvidia at this high or wait for a lower entry point?

Nvidia's role in the AI story

Nvidia has played and surely will continue to play a pivotal role in the AI story. Nvidia sells the most powerful graphics processing units (GPUs) on the market and has designed a variety of other products to accompany them. Customers, for example, might use Nvidia GPUs along with its high-speed NVLink interconnect so processors can share data. Customers may opt for Nvidia application software to build AI agents and various AI workflows, or the company's infrastructure software to manage processes. And just recently, Nvidia launched DGX Cloud Lepton, a marketplace where developers can access GPUs from a variety of connected cloud providers. Thanks to its innovation throughout the AI universe, Nvidia has made itself an almost unavoidable option for most companies aiming to develop and apply AI to their businesses.
Importantly, Nvidia also has been first to market with many of its products and services, allowing it to take the lead, and its ongoing innovation and continual effort to offer customers more service options may keep it there.

The main risk

It's no surprise that all of this has resulted in soaring earnings -- rising in the double- and triple-digit percentages -- and high profitability on sales. Nvidia has maintained a gross margin exceeding 70% during most quarters, declining to 60% only in the most recent quarter due to a charge linked to lost sales in China. This leads me to the main risk to Nvidia right now: its presence in that particular market, one that made up 13% of sales last year. The U.S. has imposed controls on exports of chips to China, blocking Nvidia's access to that market. The move prompted Nvidia to remove China from its sales forecasts because it is unable to predict what might happen. Nvidia surely would see higher growth if it could sell chips to China, but even without that market, growth is solid. It's important to remember that U.S. customers actually make up nearly half of Nvidia's total sales. Even in the worst scenario -- zero sales in China -- Nvidia's AI growth story remains bright.

Is Nvidia stock a buy now?

Even with growth going strong and the future looking bright, investors might wonder if buying Nvidia now, at a new high, is a good idea. The stock trades for 35 times forward earnings estimates, higher than a few weeks ago, but lower than a peak of more than 50 just a few months ago. Considering Nvidia's earnings track record, market position, and future prospects, this looks like a reasonable price -- even if it's not at the dirt cheap levels of a few weeks ago. Of course, stocks rarely rise in one straight line, so there very well could be a dip in the weeks or months to come, offering an even more enticing entry point. But it's very difficult to time the market and get in at any stock's lowest point.
It's a better idea to buy at a reasonable price and hold on for the long term. And here's why: Nvidia's gains or losses over a period of weeks or one quarter, for example, won't make much of a difference in your returns if you hold onto the stock for several years. That's why you don't necessarily have to worry about buying at the high when you're a long-term investor, as long as the stock's valuation is fair. That's the case for top AI stock Nvidia right now, making it a buy -- even at the high.

Should you invest $1,000 in Nvidia right now? Before you buy stock in Nvidia, consider this: The Motley Fool Stock Advisor analyst team just revealed what they believe are the 10 best stocks for investors to buy now… and Nvidia wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $704,676!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $950,198!* Now, it's worth noting Stock Advisor's total average return is 1,048% — a market-crushing outperformance compared to 175% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor. See the 10 stocks »

*Stock Advisor returns as of June 23, 2025

OpenAI Taps Google's AI Chips in Strategic Shift Away from Nvidia Dependency

Hans India

7 hours ago



In a significant move within the AI landscape, OpenAI, the Microsoft-backed creator of ChatGPT, has reportedly begun utilizing Google's artificial intelligence chips. According to a recent report by Reuters, this development points to OpenAI's efforts to diversify its chip suppliers and reduce its dependency on Nvidia, which currently dominates the AI hardware market. OpenAI has historically been one of the largest buyers of Nvidia's graphics processing units (GPUs), using them extensively for both training its AI models and performing inference tasks — where the model applies learned data to generate outputs. However, as demand for computing power surges, OpenAI is now exploring alternatives.

The Reuters report, citing a source familiar with the matter, claims that OpenAI has started using Google's Tensor Processing Units (TPUs), marking a notable shift not only in its hardware strategy but also in its reliance on cloud services. Earlier this month, Reuters had already suggested that OpenAI was planning to leverage Google Cloud to help meet its growing computational needs.

What makes this collaboration remarkable is the competitive context. Google and OpenAI are direct rivals in the AI field, both vying for leadership in generative AI and large language model development. Yet, this partnership demonstrates how shared interests in infrastructure efficiency and cost management can bridge even the most competitive divides.

According to The Information, this is OpenAI's first major deployment of non-Nvidia chips, indicating a deliberate effort to explore alternative computing platforms. By leasing Google's TPUs through Google Cloud, OpenAI is reportedly looking to reduce inference costs — a crucial factor as AI services like ChatGPT continue to scale. The move is also part of a broader trend at Google. Historically, the tech giant has reserved its proprietary TPUs mainly for internal projects.
However, it appears Google is now actively expanding external access to these chips in a bid to grow its cloud business. This strategy has reportedly attracted several high-profile clients, including Apple and AI startups like Anthropic and Safe Superintelligence — both founded by former OpenAI employees and seen as emerging competitors.

A Google Cloud employee told The Information that OpenAI is not being offered Google's latest-generation TPUs, suggesting the company is balancing business expansion with competitive caution. Still, the fact that OpenAI is now a customer illustrates Google's ambition to grow its end-to-end AI ecosystem — from hardware and software to cloud services — even if that means partnering with direct rivals.

Neither Google nor OpenAI has issued official statements confirming the deal. Yet, the development signals an evolving AI infrastructure market where flexibility, cost-efficiency, and compute availability are becoming more strategic than ever. As the race to power the future of AI intensifies, such cross-competitive collaborations could become more commonplace — redefining how major players navigate both cooperation and competition in the era of intelligent computing.

AMD Keeps Building Momentum In AI, With Plenty Of Work Still To Do

Forbes

15 hours ago



At the AMD Advancing AI event, CEO Lisa Su touted the company's AI compute portfolio.

At the AMD Advancing AI event in San Jose earlier this month, CEO Lisa Su and her staff showcased the company's progress across many different facets of AI. They had plenty to announce in both hardware and software, including significant performance gains for GPUs, ongoing advances in the ROCm development platform and the forthcoming introduction of rack-scale infrastructure. There were also many references to trust and strong relationships with customers and partners, which I liked, and a lot of emphasis on open hardware and an open development ecosystem, which I think is less of a clear winner for AMD, as I'll explain later. Overall, I think the event was important for showing how AMD is moving the ball down the field for customers and developers. Under Su, AMD's M.O. is to have clear, ambitious plans and execute against them. Her 'say/do' ratio is high. The company does what it says it will do. This is exactly what it must continue doing to whittle away at Nvidia's dominance in the datacenter AI GPU market. What I saw at the Advancing AI event raised my confidence from last year — although there are a few gaps that need to be addressed. (Note: AMD is an advisory client of my firm, Moor Insights & Strategy.)

AMD's AI Market Opportunity And Full-Stack Strategy

When she took the stage, Su established the context for AMD's announcements by describing the staggering growth that is the backdrop for today's AI chip market. Just take a look at the chart below. So far, AMD's bullish projections for the growth of the AI chip market have turned out to be accurate. So this segment of the chip industry is looking at a TAM of half a trillion dollars by 2028, with the whole AI accelerator market increasing at a 60% CAGR. The AI inference sub-segment — where AMD competes on better footing with Nvidia — is enjoying an 80% CAGR.
People thought that the market numbers AMD cited last year were too high, but not so. This is the world we're living in. For the record, I never doubted the TAM numbers last year. AMD is carving out a bigger place in this world for itself. As Su pointed out, its Instinct GPUs are used by seven of the 10 largest AI companies, and they drive AI for Microsoft Office, Facebook, Zoom, Netflix, Uber, Salesforce and SAP. Its EPYC server CPUs continue to put up record market share (40% last quarter), and it has built out a full stack — partly through smart acquisitions — to support its AI ambitions. I would point in particular to the ZT Systems acquisition and the introduction of the Pensando DPU and the Pollara NIC. GPUs are at the heart of datacenter AI, and AMD's new MI350 series was in the spotlight at this event. Although these chips were slated to ship in Q3, Su said that production shipments had in fact started earlier in June, with partners on track to launch platforms and public cloud instances in Q3. There were cheers from the crowd when they heard that the MI350 delivers a 4x performance improvement over the prior generation. AMD says that its high-end MI355X GPU outperforms the Nvidia B200 to the tune of 1.6x memory, 2.2x compute throughput and 40% more tokens per dollar. (Testing by my company Signal65 showed that the MI355X running DeepSeek-R1 produced up to 1.5x higher throughput than the B200.) To put it in a different perspective, a single MI355X can run a 520-billion-parameter model. And I wasn't surprised when Su and others onstage looked ahead to even better performance — maybe 10x better — projected for the MI400 series and beyond. That puts us into the dreamland of an individual GPU running a trillion-parameter model. By the way, AMD has not forgotten for one second that it is a CPU company. 
The EPYC Venice processor scheduled to hit the market in 2026 should be better at absolutely everything — 256 high-performance cores, 70% more compute performance than the current generation and so on. EPYC's rapid gains in datacenter market share over the past few years are no accident, and at this point all the company needs to do for CPUs is hold steady on its current up-and-to-the-right trajectory. I am hopeful that Signal65 will get a crack at testing the claims the company made at the event. This level of performance is needed in the era of agentic AI and a landscape of many competing and complementary AI models. Su predicts — and I agree — that there will be hundreds of thousands of specialized AI models in the coming years. This is specifically true for enterprises that will have smaller models focused on areas like CRM, ERP, SCM, HCM, legal, finance and so on. To support this, AMD talked at the event about its plan to sustain an annual cadence of Instinct accelerators, adding a new generation every year. Easy to say, hard to do — though, again, AMD has a high say/do ratio these days.

AMD's 2026 Rack-Scale Platform And Current Software Advances

On the hardware side, the biggest announcement was the forthcoming Helios rack-scale GPU product that AMD plans to deliver in 2026. This is a big deal, and I want to emphasize how difficult it is to bring together high-performing CPUs (EPYC Venice), GPUs (MI400) and networking chips (next-gen Pensando Vulcano NICs) in a liquid-cooled rack. It's also an excellent way to take on Nvidia, which makes a mint off of its own rack-scale offerings for AI. At the event, Su said she believes that Helios will be the new industry standard when it launches next year (and cited a string of specs and performance numbers to back that up). It's good to see AMD provide a roadmap this far out, but it also had to after Nvidia did at the GTC event earlier this year.
On the software side, Vamsi Boppana, senior vice president of the Artificial Intelligence Group at AMD, started off by announcing the arrival of ROCm 7, the latest version of the company's open source software platform for GPUs. Again, big improvements come with each generation — in this case, a 3.5x gain in inference performance compared to ROCm 6. Boppana stressed the very high cadence of updates for AMD software, with new features being released every two weeks. He also talked about the benefits of distributed inference, which allows the two steps of inference to be tasked to separate GPU pools, further speeding up the process. Finally, he announced — to a chorus of cheers — the AMD Developer Cloud, which makes AMD GPUs accessible from anywhere so developers can use them to test-drive their ideas. Last year, Meta had kind things to say about ROCm, and I was impressed because Meta is the hardest 'grader' next to Microsoft. This year, I heard companies talking about both training and inference, and again I'm impressed. (More on that below.) It was also great getting some time with Anush Elangovan, vice president for AI software at AMD, for a video I shot with him. Elangovan is very hardcore, which is exactly what AMD needs. Real grinders. Nightly code drops.

What's Working Well For AMD in AI

So that's (most of) what was new at AMD Advancing AI. In the next three sections, I want to talk about the good, the needs-improvement and the yet-to-be-determined aspects of what I heard during the event. Let's start with the good things that jumped out at me.

What Didn't Work For Me At Advancing AI

While overall I thought Advancing AI was a win for AMD, there were two areas where I thought the company missed the mark — one by omission, one by commission.

The Jury Is Out On Some Elements Of AMD's AI Strategy

In some areas, I suspect that AMD is doing okay or will be doing okay soon — but I'm just not sure.
I can't imagine that any of the following items has completely escaped AMD's attention, but I would recommend that the company address them candidly so that customers know what to expect and can maintain high confidence in what AMD is delivering.

What Comes Next In AMD's AI Development

It is very difficult to engineer cutting-edge semiconductors — let alone rack-scale systems and all the attendant software — on the steady cadence that AMD is maintaining. So kudos to Su and everyone else at the company who's making that happen. But my confidence (and Wall Street's) would rise if AMD provided more granularity about what it's doing, starting with datacenter GPU forecasts. Clearly, AMD doesn't need to compete with Nvidia on every single thing to be successful. But it would be well served to fill in some of the gaps in its story to better speak to the comprehensive ecosystem it's creating.

Having spent plenty of time working inside companies on both the OEM and semiconductor sides, I do understand the difficulties AMD faces in providing that kind of clarity. The process of landing design wins can be lumpy, and a few of the non-AMD speakers at Advancing AI mentioned that the company is engaged in the 'bake-offs' that are inevitable in that process. Meanwhile, we're left to wonder what might be holding things back, other than AMD's institutional conservatism — the healthy reticence of engineers not to make any claims until they're sure of the win. That said, with Nvidia's B200s sold out for the next year, you'd think that AMD should be able to sell every wafer it makes, right? So are AMD's yields not good enough yet? Or are hyperscalers having their own problems scaling and deploying? Is there some other gating item? I'd love to know. Please don't take any of my questions the wrong way, because AMD is doing some amazing things, and I walked away from the Advancing AI event impressed with the company's progress.
At the show, Su was forthright about describing the pace of this AI revolution we're living in — 'unlike anything we've seen in modern computing, anything we've seen in our careers, and frankly, anything we've seen in our lifetime.' I'll keep looking for answers to my nagging questions, and I'm eager to see how the competition between AMD and Nvidia plays out over the next two years and beyond. Meanwhile, AMD moved down the field at its event, and I look forward to seeing where it is headed.

Artificial Intelligence (AI) Titan Nvidia Has Scored a $4 Billion "Profit" in an Unexpected Way

Yahoo

18 hours ago



  • Artificial intelligence (AI) is Wall Street's hottest trend, with graphics processing unit (GPU) colossus Nvidia at the heart of this revolution.
  • However, Nvidia is also an investor, with a portfolio of six stocks worth more than $1.1 billion at the end of March.
  • Nvidia's largest investment holding has rapidly climbed in value, but may already be in a bubble.

For more than two years, no trend has been held in higher regard on Wall Street than the evolution of artificial intelligence (AI). With AI, software and systems are capable of making split-second decisions, overseeing generative AI solutions, and training large language models (LLMs), all without the need for human oversight. The long-term potential for this game-changing technology is truly jaw-dropping. If the analysts at PwC are correct, a combination of consumption-side effects and productivity improvements from AI will add $15.7 trillion to the global economy by the turn of the decade.

Although a long list of hardware and software/system application companies have benefited immensely from the AI revolution, none stands out more than tech titan Nvidia (NASDAQ: NVDA). But what you might be surprised to learn is that this highly influential AI company has scored a $4 billion "profit" in an uncharacteristic manner.

It took less than two years for Nvidia to catapult from a $360 billion market cap to (briefly) the world's largest public company, with a valuation that handily surpassed $3.5 trillion. A $3 trillion-plus increase in valuation in such a short time frame had never been witnessed before. Nvidia's claim to fame is its Hopper (H100) and next-generation Blackwell graphics processing units (GPUs), which are the undisputed top options deployed in AI-accelerated data centers.
Orders for both chips have been extensively backlogged, despite the efforts of world-leading chip fabrication company Taiwan Semiconductor Manufacturing to boost its chip-on-wafer-on-substrate monthly wafer capacity. When demand for a good or service outstrips its supply, the law of supply and demand states that prices will climb until demand tapers. Whereas direct rival Advanced Micro Devices was netting anywhere from $10,000 to $15,000 for its Instinct MI300X AI-accelerating chip early last year, Nvidia's Hopper chips were commanding a price point that topped $40,000. The ability to charge a premium for its AI hardware, due to a combination of strong demand and persistent AI-GPU scarcity, helped push Nvidia's gross margin into the 70% range.

Nvidia CEO Jensen Huang is also intent on keeping his company at the forefront of the innovation curve. He's aiming to bring a new advanced chip to market each year, with Blackwell Ultra (2025), Vera Rubin (2026), and Vera Rubin Ultra (2027) set to follow in the path of Hopper and Blackwell. In other words, it doesn't appear as if Nvidia will cede its compute advantages anytime soon.

The final piece of the puzzle for Nvidia has been its CUDA software platform. This is what assists developers in maximizing the compute abilities of their Nvidia GPUs, as well as aids with building/training LLMs. CUDA has played a pivotal role in keeping clients loyal to Nvidia's ecosystem of products and services. Collectively, Nvidia's data center segment has helped catapult sales by 383% between fiscal 2023 (ended in late January 2023) and fiscal 2025, and sent adjusted net income skyrocketing from $8.4 billion to $74.3 billion over the same timeline. As you can imagine, most of Nvidia's more than $74 billion in adjusted net income last year was derived from its operating activities -- and this is how it should be for a market-leading growth stock. But it's not the only way Wall Street's AI darling can put dollars in the profit column.
What's often overlooked about Nvidia is that it's also an investor. Just as institutional money managers with more than $100 million in assets under management (AUM) are required to file Form 13F no later than 45 days following the end of a quarter -- a 13F lays out which stocks, exchange-traded funds (ETFs), and select options were purchased and sold -- businesses with north of $100 million in AUM must do the same. This includes Nvidia. At the end of March, Nvidia had more than $1.1 billion invested across a half-dozen publicly traded companies. Accounting rules require Nvidia to recognize unrealized gains and losses each quarter, based on the change in value of the securities in its investment portfolio.

Nvidia's largest investment holding is AI-data center infrastructure goliath CoreWeave (NASDAQ: CRWV), which went public in late March. Nvidia made an initial investment in CoreWeave of $100 million in April 2023, and upped its stake by another $250 million in March 2025, prior to its initial public offering (IPO). On a combined basis, Nvidia has put $350 million of its capital to work in Wall Street's hottest IPO. As of the closing bell on Friday, June 20, the 24,182,460 shares of CoreWeave that Nvidia held as of March 31 were worth (drumroll) close to $4.44 billion. On an unrealized basis, Wall Street's AI titan is sitting on a $4 billion-plus "profit" from its investment.

If you're wondering why "profit" is in quotation marks, it's because Nvidia may have reduced its stake in CoreWeave since the second quarter began. We won't know for sure until 13Fs detailing second-quarter trading activity are filed in mid-August. Further, this $4 billion unrealized gain can fluctuate, depending on where CoreWeave stock closes out the June quarter. Nevertheless, it's been one heck of a windfall for Nvidia.
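The mark-to-market arithmetic behind that "profit" is simple enough to sketch in a few lines. This is only an illustration using the figures quoted above; the share price is derived from the article's $4.44 billion stake value rather than taken from a live quote:

```python
# Sketch of the unrealized-gain ("profit") math described above.
# All dollar figures come from the article; nothing here is a forecast.

COST_BASIS = 100e6 + 250e6    # $100M (April 2023) + $250M (March 2025)
SHARES_HELD = 24_182_460      # CoreWeave shares Nvidia held as of March 31

def unrealized_gain(share_price: float) -> float:
    """Market value of the stake minus what was paid for it."""
    return SHARES_HELD * share_price - COST_BASIS

# Implied share price from the article's ~$4.44B stake value.
price = 4.44e9 / SHARES_HELD
gain = unrealized_gain(price)
print(f"stake value: ${SHARES_HELD * price / 1e9:.2f}B, "
      f"unrealized gain: ${gain / 1e9:.2f}B")
```

Because the gain is unrealized, it moves one-for-one with CoreWeave's share price each quarter, which is exactly why the article keeps "profit" in quotation marks.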
While Nvidia has a solid track record of making smart investments in up-and-coming tech companies -- many of which it's partnered with -- there's also the real possibility it's playing with fire when it comes to CoreWeave. Don't get me wrong, CoreWeave has been a fantastic client for Nvidia. It purchased 250,000 Hopper GPUs for its AI data centers, which it leases out to businesses looking for compute capacity. It's in Nvidia's best interest that CoreWeave succeed and upgrade its AI chips roughly twice per decade. But there are a number of red flags with CoreWeave that suggest its $88 billion valuation isn't sustainable.

One of the biggest concerns with Wall Street's hottest IPO is that Nvidia's aggressive innovation cycles could hinder, not help, its business. Bringing an advanced AI chip to market annually has the potential to quickly depreciate CoreWeave's Hopper GPUs, and might send customers to rival data centers that have newer chips. When CoreWeave looks to upgrade its infrastructure in the coming years, there's a very good chance it'll recoup far less from its assets than it expects.

CoreWeave has also leaned on leverage to build out its AI data centers. Relying on debt to acquire GPUs can lead to burdensome debt-servicing costs. For the moment, these servicing costs are adding to the company's steep operating losses. Valuation is another clear concern with CoreWeave. Investors are paying roughly 8 times forecast sales in 2026 for a company that's not time-tested and hasn't generated a profit. While Nvidia, undoubtedly, wants to see CoreWeave succeed, locking in its gains at these levels would make a lot of sense.

Before you buy stock in Nvidia, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Nvidia wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004...
if you invested $1,000 at the time of our recommendation, you'd have $689,813!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $906,556!* Now, it's worth noting Stock Advisor's total average return is 809% — a market-crushing outperformance compared to 175% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor. See the 10 stocks »

*Stock Advisor returns as of June 23, 2025

Sean Williams has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy. Artificial Intelligence (AI) Titan Nvidia Has Scored a $4 Billion "Profit" in an Unexpected Way was originally published by The Motley Fool

Will Nvidia Hit $200 Per Share by 2026?

Yahoo

18 hours ago



  • Nvidia's growth may be slowing, but it's still rapid.
  • Demand for AI computing hardware continues to increase.
  • Nvidia would be nearly a $5 trillion company if it hits $200 per share.

Nvidia (NASDAQ: NVDA) has remained one of the most popular stocks in the market even after its unprecedented run-up since 2023. The demand for Nvidia's best-in-class graphics processing units (GPUs) hasn't let up because AI computing capacity hasn't come close to being fulfilled. There's plenty of upside left in the stock, but the next milestone is a $200 share price. Currently, Nvidia hovers around $145, but it has broken $150 before. This means the stock needs to rise about 40% to reach $200, but can it do that by the end of 2026?

A $200 share price would mean a market cap close to $5 trillion. There's never been a $4 trillion company, let alone a $5 trillion one. So if it hits $200 and continues to rise just a bit, Nvidia would set a record along the way. But that's what the stock has been about over the past few years. We've never seen a company of Nvidia's size grow as rapidly as it has, let alone sustain that growth over a three-year time span. In the 2026 fiscal first quarter (ended April 28), revenue rose 69% year over year to $44.1 billion, and second-quarter growth is expected to be about 50% year over year.

This is all because of the vast demand it is experiencing. GPUs have become the computing hardware of choice for AI models, mainly due to their ability to handle intense workloads. They can process multiple calculations in parallel, and units can be combined in clusters to amplify that effect. This allows data centers to train AI models on vast data sets that would take years for a traditional PC to process. Even though demand for Nvidia's GPUs is already high, it's expected to increase even more over the next few years. The AI hyperscalers have announced record spending for this year.
But building a data center is a multiyear task. As a result, investors shouldn't be surprised if the AI hyperscalers announce further increases over this year's already elevated levels. This backs up a third-party projection that Nvidia's management cited during its 2025 GTC event: data center capital expenditures were $400 billion in 2024 and are expected to increase to $1 trillion by 2028. If this comes true, the chipmaker's rapid growth will continue.

Nvidia's business has the fuel to continue growing, but can it hit $200 per share by 2026? There are a few ways to calculate a future stock price. You could start at $200 and see what growth assumptions are built into that, or you could assume a valuation and apply expected increases, then check to see if that works with the stock price target. I prefer the latter method since it allows you to get an estimated stock price, even if it isn't the answer you expected.

Wall Street analysts expect $200 billion in revenue for fiscal 2026 and nearly $250 billion for fiscal 2027, indicating 53% and 25% revenue growth, respectively. Should the $250 billion revenue figure materialize and the company post a profit margin of 50% (it's currently 52% but has been as high as 55%), it would produce $125 billion in profit. Nvidia's share count has decreased over the past few years, but let's assume today's share count is unchanged at the end of 2026. If that were the case, earnings per share (EPS) would be $5.12.

Lastly, let's examine Nvidia's historical valuation to determine whether a $200 share price is reasonable. Nvidia's stock currently trades at 46 times trailing earnings. If we bumped that multiple down to 40 (to compensate for slowing growth), the stock price at the end of next year would be about $205. So, even with a few conservative estimates baked into the stock price (falling margins and a decreasing earnings multiple), Nvidia's stock would still have the growth necessary to hit $200 by the end of 2026.
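The back-of-the-envelope model above (revenue estimate, assumed margin, EPS at today's share count, then an earnings multiple) can be reproduced in a few lines. One caveat: the article never states the share count, so the ~24.4 billion figure below is my assumption, chosen to match the article's $5.12 EPS:

```python
# Sketch of the article's valuation model. The revenue, margin, and
# multiple are the article's figures; the share count is an assumed
# input (not stated in the article), picked to reproduce its $5.12 EPS.

revenue = 250e9      # fiscal 2027 revenue estimate (Wall Street consensus)
margin = 0.50        # assumed profit margin (currently ~52%, peak ~55%)
shares = 24.4e9      # ASSUMPTION: approximate diluted share count

profit = revenue * margin          # -> $125B in profit
eps = profit / shares              # earnings per share
price = eps * 40                   # multiple compressed from 46x to 40x
print(f"EPS ≈ ${eps:.2f}, implied share price ≈ ${price:.0f}")
```

Running the model lands near the article's $205 target, which shows how sensitive the conclusion is to the margin and multiple inputs: bump the multiple back to 46 or the margin to 55% and the implied price moves well past $200.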
That makes it a smart buy, since the returns it would provide investors over the next year will likely crush the market.

Before you buy stock in Nvidia, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Nvidia wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $689,813!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $906,556!* Now, it's worth noting Stock Advisor's total average return is 809% — a market-crushing outperformance compared to 175% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor. See the 10 stocks »

*Stock Advisor returns as of June 23, 2025

Keithen Drury has positions in Nvidia. The Motley Fool has positions in and recommends Nvidia. The Motley Fool has a disclosure policy. Will Nvidia Hit $200 Per Share by 2026? was originally published by The Motley Fool
