Latest news with #InstinctMI300X
Yahoo
5 days ago
- Business
Artificial Intelligence (AI) Titan Nvidia Has Scored a $4 Billion "Profit" in an Unexpected Way
Artificial intelligence (AI) is Wall Street's hottest trend, with graphics processing unit (GPU) colossus Nvidia at the heart of this revolution. However, Nvidia is also an investor, with a portfolio of six stocks worth more than $1.1 billion at the end of March. Nvidia's largest investment holding has rapidly climbed in value, but may already be in a bubble.

For more than two years, no trend has been held in higher regard on Wall Street than the evolution of artificial intelligence. With AI, software and systems are capable of making split-second decisions, overseeing generative AI solutions, and training large language models (LLMs), all without the need for human oversight. The long-term potential of this game-changing technology is truly jaw-dropping. If the analysts at PwC are correct, a combination of consumption-side effects and productivity improvements from AI will add $15.7 trillion to the global economy by the turn of the decade.

Although a long list of hardware and software/system application companies have benefited immensely from the AI revolution, none stands out more than tech titan Nvidia (NASDAQ: NVDA). But you might be surprised to learn that this highly influential AI company has scored a $4 billion "profit" in an uncharacteristic manner.

It took less than two years for Nvidia to catapult from a $360 billion market cap to (briefly) the world's largest public company, with a valuation that handily surpassed $3.5 trillion. A $3 trillion-plus increase in valuation in such a short time frame had never been witnessed before. Nvidia's claim to fame is its Hopper (H100) and next-generation Blackwell graphics processing units (GPUs), which are the undisputed top options deployed in AI-accelerated data centers.
Orders for both chips have been extensively backlogged, despite the efforts of world-leading chip fabrication company Taiwan Semiconductor Manufacturing to boost its chip-on-wafer-on-substrate monthly wafer capacity. When demand for a good or service outstrips its supply, the law of supply and demand states that prices will climb until demand tapers. Whereas direct rival Advanced Micro Devices was netting anywhere from $10,000 to $15,000 for its Instinct MI300X AI-accelerating chip early last year, Nvidia's Hopper chips were commanding a price point that topped $40,000. The ability to charge a premium for its AI hardware, due to a combination of strong demand and persistent AI-GPU scarcity, helped push Nvidia's gross margin into the 70% range.

Nvidia CEO Jensen Huang is also intent on keeping his company at the forefront of the innovation curve. He's aiming to bring a new advanced chip to market each year, with Blackwell Ultra (2025), Vera Rubin (2026), and Vera Rubin Ultra (2027) set to follow in the path of Hopper and Blackwell. In other words, it doesn't appear as if Nvidia will cede its compute advantages anytime soon.

The final piece of the puzzle for Nvidia has been its CUDA software platform. This is what assists developers in maximizing the compute abilities of their Nvidia GPUs, as well as aids in building and training LLMs. CUDA has played a pivotal role in keeping clients loyal to Nvidia's ecosystem of products and services.

Collectively, Nvidia's data center segment helped catapult sales by 383% between fiscal 2023 (ended in late January 2023) and fiscal 2025, and sent adjusted net income skyrocketing from $8.4 billion to $74.3 billion over the same timeline. As you can imagine, most of Nvidia's more than $74 billion in adjusted net income last year was derived from its operating activities, and this is how it should be for a market-leading growth stock. But it's not the only way Wall Street's AI darling can put dollars in the profit column.
What's often overlooked about Nvidia is that it's also an investor. Just as institutional money managers with more than $100 million in assets under management (AUM) are required to file Form 13F no later than 45 days following the end of a quarter (a 13F lays out which stocks, exchange-traded funds (ETFs), and select options were purchased and sold), businesses with north of $100 million in AUM must do the same. This includes Nvidia.

At the end of March, Nvidia had more than $1.1 billion invested across a half-dozen publicly traded companies. Accounting rules require Nvidia to recognize unrealized gains and losses each quarter, based on the change in value of the securities in its investment portfolio.

Nvidia's largest investment holding is AI-data center infrastructure goliath CoreWeave (NASDAQ: CRWV), which went public in late March. Nvidia made an initial investment in CoreWeave of $100 million in April 2023, and upped its stake by another $250 million in March 2025, prior to its initial public offering (IPO). On a combined basis, Nvidia has put $350 million of its capital to work in Wall Street's hottest IPO.

As of the closing bell on Friday, June 20, the 24,182,460 shares of CoreWeave that Nvidia held as of March 31 were worth (drumroll) close to $4.44 billion. On an unrealized basis, Wall Street's AI titan is sitting on a $4 billion-plus "profit" from its investment.

If you're wondering why "profit" is in quotation marks, it's because Nvidia may have reduced its stake in CoreWeave since the second quarter began. We won't know for sure until 13Fs detailing second-quarter trading activity are filed in mid-August. Further, this $4 billion unrealized gain can fluctuate, depending on where CoreWeave stock closes out the June quarter. Nevertheless, it's been one heck of a windfall for Nvidia.
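The arithmetic behind that unrealized gain is straightforward. A minimal sketch, using only the share count, cost basis, and market value quoted in the article (the per-share price is derived, not quoted):

```python
# Figures are from the article; the implied share price is derived, not quoted.
shares_held = 24_182_460          # CoreWeave shares Nvidia held as of March 31
cost_basis = 100e6 + 250e6        # $100M (April 2023) + $250M (March 2025)
market_value = 4.44e9             # approximate value at the June 20 close

implied_price = market_value / shares_held
unrealized_gain = market_value - cost_basis

print(f"Implied CRWV price: ${implied_price:,.2f}")          # ~$183.60 per share
print(f"Unrealized gain:    ${unrealized_gain / 1e9:.2f}B")  # $4.09B
```

The $4.09 billion figure is what the article rounds to a "$4 billion-plus" profit; it moves with CoreWeave's share price until the position is sold or the quarter is marked.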
While Nvidia has a solid track record of making smart investments in up-and-coming tech companies, many of which it's partnered with, there's also the real possibility it's playing with fire when it comes to CoreWeave.

Don't get me wrong, CoreWeave has been a fantastic client for Nvidia. It purchased 250,000 Hopper GPUs for its AI-data centers, which it leases out to businesses looking for compute capacity. It's in Nvidia's best interest that CoreWeave succeed and upgrade its AI chips roughly twice per decade. But there are a number of red flags with CoreWeave that suggest its $88 billion valuation isn't sustainable.

One of the biggest concerns with Wall Street's hottest IPO is that Nvidia's aggressive innovation cycles could hinder, not help, its business. Bringing an advanced AI chip to market annually has the potential to quickly depreciate CoreWeave's Hopper GPUs, and might send customers to rival data centers that have newer chips. When CoreWeave looks to upgrade its infrastructure in the coming years, there's a very good chance it'll recoup far less from its assets than it expects.

CoreWeave has also leaned on leverage to build out its AI-data centers. Relying on debt to acquire GPUs can lead to burdensome debt-servicing costs. For the moment, these servicing costs are adding to the company's steep operating losses.

Valuation is another clear concern with CoreWeave. Investors are paying roughly 8 times forecast sales in 2026 for a company that's not time-tested and hasn't generated a profit. While Nvidia undoubtedly wants to see CoreWeave succeed, locking in its gains at these levels would make a lot of sense.

Sean Williams has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy.

Artificial Intelligence (AI) Titan Nvidia Has Scored a $4 Billion "Profit" in an Unexpected Way was originally published by The Motley Fool
Yahoo
20-05-2025
- Business
GPU as a Service Market Analysis by Service Model, GPU Type, Deployment, Enterprise Type - Global Forecast to 2030
The report will help the market leaders/new entrants in this market with information on the closest approximations of the revenue numbers for the overall GPU as a Service market and its subsegments. This report will help stakeholders understand the competitive landscape and gain more insights to position their businesses better and plan suitable go-to-market strategies. The report also helps stakeholders understand the pulse of the market and provides them with information on key market drivers, restraints, challenges, and opportunities.

Dublin, May 13, 2025 (GLOBE NEWSWIRE) -- The "GPU as a Service Market by Service Model (IaaS, PaaS), GPU Type (High-End GPUs, Mid-Range GPUs, Low-End GPUs), Deployment (Public Cloud, Private Cloud, Hybrid Cloud), Enterprise Type (Large Enterprises, SMEs) - Global Forecast to 2030" report has been added to the publisher's offering.

The GPU as a Service market is expected to be worth USD 8.21 billion in 2025 and is estimated to reach USD 26.62 billion by 2030, growing at a CAGR of 26.5% between 2025 and 2030.

The growth of the GPU as a Service market is driven by increasing demand for high-performance GPUs in video rendering, 3D content creation, and real-time applications. Industries like gaming, film production, and architecture require scalable and cost-effective GPU solutions for complex visual effects (VFX) and simulations. GPUaaS eliminates the need for expensive on-premises GPU clusters, providing on-demand access to cloud resources. Additionally, the rise of real-time rendering engines like Unreal Engine 5 and AI-driven content generation further accelerates market growth, enabling immersive virtual experiences and reducing production timelines for studios, developers, and content creators.

High-end GPU segment to have highest CAGR in the forecast period

The high-end GPU segment will witness rapid growth in the GPU as a Service market, driven by increasing requirements for accelerated computation in AI, ML, and complex simulations.
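The headline forecast figures are internally consistent, which can be verified with straight compound growth (this is a sanity check on the report's own numbers, not an independent estimate):

```python
# Sanity-check the report's forecast: USD 8.21B in 2025 compounding at
# 26.5% per year for five years should land near USD 26.62B in 2030.
start_2025 = 8.21    # market size in 2025, USD billions (per the report)
cagr = 0.265         # compound annual growth rate (per the report)
years = 5            # 2025 -> 2030

end_2030 = start_2025 * (1 + cagr) ** years
print(f"Implied 2030 market size: USD {end_2030:.2f}B")  # ~26.60, vs. the report's 26.62
```

The small residual against the quoted USD 26.62 billion comes from the CAGR being rounded to one decimal place.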
High-end GPUs like NVIDIA's H100 Tensor Core GPUs and AMD's Instinct MI300X provide immense computing capabilities, making them suitable for training large language models (LLMs) and generative AI applications. For example, Amazon Web Services (AWS) provides EC2 UltraClusters with NVIDIA H100 GPUs to support trillion-parameter AI models, while Microsoft Azure and Google Cloud integrate high-end GPUs to provide scalable AI infrastructure for enterprises. The film and gaming industries are also contributing to this growth, using high-end GPUs for real-time rendering, visual effects (VFX), and immersive virtual experiences. Platforms such as Epic Games' Unreal Engine 5 utilize GPUaaS for photorealistic virtual productions. Sectors such as healthcare and scientific research also utilize GPUaaS for drug discovery and medical imaging analysis. With increased adoption of AI across industries, businesses opt for high-end GPUs to address growing computational needs. The flexible pay-as-you-go cloud model provides greater access to such powerful assets, further increasing the growth of the high-end GPU segment.

By Enterprise Type, the Large Enterprises segment will hold the largest market share of the GPU as a Service market in 2030

The large enterprise segment will hold the highest market share within the GPU as a Service market, given these companies' high computing requirements and widespread AI deployment. Multinational conglomerates, Fortune 500 firms, and industry titans from sectors such as healthcare, finance, automotive, and media utilize GPUaaS extensively for AI applications such as medical imaging, drug discovery, fraud detection, and real-time analytics. The need for scalable GPU resources to manage complex workloads, including large language model (LLM) training and algorithmic trading, drives this demand. Cloud service providers offer tailored solutions with dedicated GPU clusters, high-bandwidth networking, and enterprise-grade security to meet the customization needs of large enterprises.
In addition, the scalability of multi-cloud and hybrid cloud deployments allows companies to optimize costs while ensuring low latency and high availability. Enterprises benefit from long-term contracts, locking in predictable GPU usage costs and gaining access to the latest GPU technology. Industries with mission-critical applications often allocate dedicated GPU resources for activities such as autonomous car development and financial modeling. With increasing AI adoption and rising dependence on data-driven decisions, large corporations will continue to dominate the GPUaaS market, leveraging its flexibility and cost-effectiveness for scalable AI deployment.

Asia Pacific is expected to have a high CAGR during the forecast period

Asia Pacific is expected to grow significantly in the GPU as a Service market as a result of accelerating growth in cloud computing, rising adoption of AI, and heavy investments in data center infrastructure. The growth is being led by China, Japan, South Korea, and India through government initiatives, private investment, and technological innovations. For instance, in May 2023, the Chinese government made plans to construct AI industrial bases, driving AI research. Moreover, policies such as the Shenzhen AI Regulation support AI adoption by pushing public data sharing and corporate innovation. Japan is seeing huge investments in AI infrastructure: Microsoft invested USD 2.9 billion in Japan's cloud and AI infrastructure in April 2024, and Oracle pledged USD 8 billion to build cloud data centers. These projects give businesses access to scalable GPU capacity for AI applications. India is also moving ahead with GPUaaS adoption through its IndiaAI initiative. In March 2024, the Indian government sanctioned roughly USD 1.24 billion in investments to deploy more than 10,000 GPUs, enabling AI research and startups.
These strategic investments and efforts make Asia Pacific a high-growth region in the GPUaaS landscape.

The report profiles key players in the GPU as a Service market with their respective market ranking analysis. Prominent players profiled in this report are Amazon Web Services, Inc. (US), Microsoft (US), Google (US), Oracle (US), IBM (US), CoreWeave (US), Alibaba Cloud (China), Lambda (US), Tencent Cloud (China), (India), among others. Apart from this, Fluidstack (UK), OVH SAS (France), E2E Networks Limited (India), RunPod (US), ScaleMatrix Holdings, Inc. (US), (US), AceCloud (India), Snowcell (Norway), Linode LLC. (US), Yotta Infrastructure (India), VULTR (US), DigitalOcean, LLC. (US), Rackspace Technology (US), Gcore (Luxembourg), and Nebius B.V. (Amsterdam) are among the emerging companies in the GPU as a Service market.

Key Attributes:
- No. of Pages: 282
- Forecast Period: 2025-2030
- Estimated Market Value (USD) in 2025: $8.21 Billion
- Forecasted Market Value (USD) by 2030: $26.62 Billion
- Compound Annual Growth Rate: 26.5%
- Regions Covered: Global

Market Dynamics

Drivers
- Surging Use of Cloud-Powered AI, ML, and DL Frameworks
- Increasing Need for Budget-Friendly Yet High-Performance GPU Solutions from Enterprises
- Growing Deployment of GPU as a Service Model in Gaming and Virtualization Applications

Restraints
- Supply Chain Bottlenecks and AI Demand Dynamics

Opportunities
- Revolutionizing Media Production Workflows
- Increasing Investments in AI Infrastructure by Cloud Service Providers
- Rise of Pure-Play GPU Companies

Challenges
- Managing High Power Consumption and Cooling Needs in Cloud GPUs
- Confronting Security, Performance, and Scalability Challenges in Multi-Tenant Environments

Case Study Analysis
- Nearmap Reduces Computing Cost and Increases Data Processing Capacity Using Amazon EC2 G4 Instances
- Soluna Deploys GPU Cloud Management Software to Boost Its Marketplace Reach
- Computer Vision Technology Company Increases GPU Utilization to Improve Productivity and Reduce DL Training Time
- EPFL Optimizes AI Infrastructure to Prioritize Workload Demands Using Run:AI's GPU Orchestration Platform

For more information about this report, visit the publisher's website.

About the publisher: the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Attachment: GPU as a Service Market

CONTACT: Laura Wood, Senior Press Manager, press@
For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900
Yahoo
25-04-2025
- Business
TSMC mulls massive 1000W-class multi-chiplet processors with 40X the performance of standard models
You might often think of processors as being relatively small, but TSMC is developing a version of its CoWoS technology that will enable its partners to build multi-chiplet assemblies that are 9.5-reticle-sized (7,885 mm^2) and rely on 120×150 mm substrates (18,000 mm^2), slightly larger than a CD case. TSMC claims these behemoths could offer up to 40 times the performance of a standard processor.

Virtually all modern high-performance data center-grade processors use multi-chiplet designs, and as demands for performance increase, developers want to integrate even more silicon into their products. In an effort to meet demand, TSMC is enhancing its packaging capabilities to support significantly larger chip assemblies for high-performance computing and AI applications. At its North American Technology Symposium, TSMC unveiled its new 3DFabric roadmap, which aims to scale interposer sizes well beyond current limits.

Currently, TSMC's CoWoS offers chip packaging solutions that enable interposer sizes of up to 2,831 mm^2, approximately 3.3 times the company's reticle (photomask) size limit (858 mm^2 per the EUV standard, with TSMC using 830 mm^2). This capacity is already utilized by products like AMD's Instinct MI300X accelerators and Nvidia's B200 GPUs, which combine two large logic chiplets for compute with eight stacks of HBM3 or HBM3E memory. But that's not enough for future applications.

Sometime next year, or a bit later, TSMC plans to introduce the next generation of its CoWoS-L packaging technology, which will support interposers measuring up to 4,719 mm^2, roughly 5.5 times larger than the standard reticle area. The package will accommodate up to 12 stacks of high-bandwidth memory and will require a larger substrate measuring 100×100 mm (10,000 mm^2).
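The interposer areas quoted above are clean multiples of TSMC's reticle area, which a quick calculation confirms (areas are the article's figures; note the 9.5x tier is quoted against the 830 mm^2 reticle TSMC actually uses rather than the 858 mm^2 EUV standard):

```python
# Verify the reticle multiples quoted for each CoWoS generation.
# (interposer area, reticle area) in mm^2, from the article.
generations = {
    "CoWoS today":          (2831, 858),  # ~3.3x reticle
    "CoWoS-L next-gen":     (4719, 858),  # ~5.5x reticle
    "Planned 9.5x package": (7885, 830),  # 9.5x reticle, 120x150 mm substrate
}
for name, (interposer, reticle) in generations.items():
    print(f"{name}: {interposer / reticle:.1f}x reticle")
```

Stitching fields across the reticle boundary is what lets the interposer exceed the single-exposure photomask limit in the first place.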
The company expects that solutions built on this generation of packaging will deliver more than three and a half times the compute performance of current designs. While this solution may be enough for Nvidia's Rubin GPUs with 12 HBM4 stacks, processors that will offer more compute horsepower will require even more silicon.

Looking further ahead, TSMC intends to scale this packaging approach even more aggressively. The company plans to offer interposers with an area of up to 7,885 mm^2, approximately 9.5 times the photomask limit, mounted on a 120×150 mm substrate (for context, a standard CD jewel case measures approximately 142×125 mm). This represents an increase from the 8x-reticle-sized multi-chiplet assembly on a 120×120 mm substrate that TSMC presented last year, and the increase likely reflects requests from the foundry's customers. Such a package is expected to support four 3D-stacked systems-on-integrated chips (SoICs, e.g., an N2/A16 die stacked on top of an N3 logic die), twelve HBM4 memory stacks, and additional input/output dies (I/O dies).

However, TSMC has customers who demand extreme performance and are willing to pay for it. For them, TSMC offers its System-on-Wafer (SoW-X) technology, which enables wafer-level integration. For now, only Cerebras and Tesla use wafer-level integration, for their WSE and Dojo AI processors, but TSMC believes there will be customers beyond these two companies with similar requirements.

Without a doubt, 9.5-reticle-sized or wafer-sized processors are hard to build and assemble. Beyond assembly, these multi-chiplet solutions require high-current, kilowatt-level power delivery, which is getting harder for server makers and chip developers to provide, so it needs to be addressed at the system level. At its 2025 Technology Symposium, TSMC outlined a power delivery strategy designed to enable efficient and scalable power delivery at kilowatt-class levels.
To address processors with kilowatt-class power requirements, TSMC wants to integrate monolithic power management ICs (PMICs) with TSVs, made on TSMC's N16 FinFET technology, and on-wafer inductors directly into CoWoS-L packages with RDL interposers, enabling power routing through the substrate itself. This reduces the distance between power sources and active dies, lowering parasitic resistance and improving system-wide power integrity. TSMC claims that its N16-based PMIC can easily handle fine-grained voltage control for dynamic voltage scaling (DVS) at the required current levels, achieving up to five times higher power delivery density compared to conventional approaches.

In addition, embedded deep trench capacitors (eDTC/DTC), built directly into the interposer or silicon substrate, provide high-density decoupling (up to 2,500 nF/mm^2) to improve power stability by filtering voltage fluctuations close to the die, ensuring reliable operation even under rapid workload changes. This embedded approach enables effective DVS and improved transient response, both of which are critical for managing power efficiency in complex multi-core or multi-die designs. In general, TSMC's power delivery approach reflects a shift toward system-level co-optimization, where power delivery is treated as an integral part of the silicon, packaging, and system design, not a separate feature of each component.

The move to much larger interposer sizes will have consequences for system design, particularly in terms of packaging form factors. The planned 100×100 mm substrate is close to the physical limits of the OAM 2.0 form factor, which measures 102×165 mm. The subsequent 120×150 mm substrate will exceed these dimensions, likely requiring new standards for module packaging and board layout to accommodate the increased size. Beyond physical constraints and power consumption, these huge multi-chiplet SiPs generate an enormous amount of heat.
To address this, hardware manufacturers are already exploring advanced cooling methods, including direct liquid cooling (a technology already adopted by Nvidia for its GB200/GB300 NVL72 designs) and immersion cooling technologies, to handle the thermal loads associated with multi-kilowatt processors. However, TSMC can't address that problem on the chip or SiP level — at least for now.
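To put the embedded decoupling figure quoted earlier in perspective, a back-of-envelope calculation shows why 2,500 nF/mm^2 matters for transient response. Only that density comes from TSMC's figures; the load step, droop budget, and hold time below are illustrative assumptions, not TSMC specifications:

```python
# Back-of-envelope: eDTC area needed to ride out a fast load step while the
# PMIC's control loop catches up. Uses C = I * dt / dV (charge balance).
density_nf_per_mm2 = 2500   # eDTC decoupling density (per the article)
load_step_a = 500.0         # assumed sudden current demand, amps
hold_time_s = 2e-9          # assumed time until the PMIC responds, seconds
droop_budget_v = 0.05       # assumed tolerable voltage droop (50 mV)

# The decap must supply Q = I * t while voltage sags by no more than dV.
required_c_f = load_step_a * hold_time_s / droop_budget_v
required_c_nf = required_c_f * 1e9
area_mm2 = required_c_nf / density_nf_per_mm2
print(f"Required decap: {required_c_nf:,.0f} nF -> {area_mm2:.1f} mm^2 of eDTC")
```

Under these assumed numbers, a 500 A step held for 2 ns within a 50 mV droop needs about 20,000 nF, i.e. roughly 8 mm^2 of embedded trench capacitance; achieving the same with discrete capacitors on the board side of the socket would add far more parasitic inductance in the loop.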


Globe and Mail
09-04-2025
- Business
Is Advanced Micro Devices Stock a Buy as Shares Plunge Below $80?
Shares of Advanced Micro Devices (AMD) have taken a hit and dipped below $80 this week, shedding nearly 20% over the past five sessions as market sentiment soured on escalating trade tensions. With AMD's considerable exposure to Chinese markets and reliance on third-party manufacturing, mainly through Taiwan Semiconductor (TSM), the market remains concerned about the potential impact of tariffs on the company.

Tariffs Threaten Semiconductor Supply Chains

At the center of this decline is a policy shift from President Donald Trump's administration. Last week, Trump introduced reciprocal tariffs targeting a long list of U.S. trading partners. While semiconductors are exempted, the relief may only be temporary. Adding to the pressure, tariffs were levied on wafer fabrication equipment imported into the U.S. While AMD relies on Taiwan Semiconductor for manufacturing, the ripple effect of these tariffs could still impact its bottom line.

AMD is at a disadvantage compared to its larger rival, Nvidia (NVDA). Unlike Nvidia, which reported an adjusted gross margin of 75.5% in fiscal 2025, AMD's margin stood at 53%. This significant margin gap gives AMD a thinner cushion and limits its ability to absorb cost increases without eating into profits.

AMD's China Exposure Presents a Major Risk

Further, China's role in AMD's business is significant. AMD reported revenue of $6.23 billion from China (including Hong Kong) in 2024, about 24.2% of its total sales. Thus, a retaliatory tariff from China on American technology products would make AMD's chips more expensive for Chinese buyers, potentially hampering its competitiveness in one of its most important markets.

Wall Street has begun recalibrating expectations. KeyBanc Capital Markets analyst John Vinh recently downgraded AMD from 'Buy' to 'Hold,' citing uncertainty around its artificial intelligence (AI) business in China and concerns over margin sustainability.
This has deepened investor skepticism, especially with AMD lagging behind Nvidia in several key performance metrics.

AMD's Growth Drivers Still Intact

Yet, it's not all doom and gloom. AMD's core business remains strong, and the company continues to show impressive growth in areas critical to its long-term strategy, particularly in data centers and AI. In 2024, AMD reported $12.6 billion in revenue from its data center segment, marking a 94% year-over-year increase. This was primarily driven by surging demand for its Instinct MI300X GPUs.

The company is also moving aggressively to cement its place in the AI ecosystem. Its acquisition of Silo AI marks a strategic move to enhance its in-house AI development capabilities. With this acquisition, AMD aims to deliver high-performance AI models optimized for its hardware architecture. On the product side, AMD is rolling out an ambitious lineup. The MI325X GPUs are ramping production, and the company is already laying the groundwork for its next-generation MI350 series. These GPUs are expected to deliver significant gains in performance and energy efficiency. Looking further out, AMD has its eyes set on the MI400 series. It's also investing heavily in ROCm, its open software platform.

AMD Stock's Valuation

While analysts have a 'Moderate Buy' consensus rating on AMD stock, the recent price drop may represent a value opportunity for long-term investors. With a forward price-to-earnings ratio of 21.6x, AMD appears attractively priced, especially given its double-digit earnings growth potential. For instance, analysts expect AMD to post earnings per share (EPS) of $3.87 in 2025, up 47.7% year-over-year. AMD's bottom line is projected to increase 36.2% in FY26. However, AMD's valuation becomes less compelling when compared to Nvidia, which trades at a slightly higher 23.5x P/E but commands superior margins, has a leading market share in the AI space, and is well-placed to navigate macroeconomic shocks.

Conclusion: Bargain or Value Trap?
Ultimately, while AMD's stock has become cheaper and its long-term potential remains intact, its short-term outlook remains clouded by geopolitical risks and competitive pressures. Thus, investors looking for a stock in the semiconductor space may find Nvidia to be a more appealing option.
Yahoo
14-03-2025
- Business
The Case Against Buying AMD's Stock Dip
The market continues to punish Advanced Micro Devices (AMD) stock in the first quarter of 2025, even though the company's business is improving, particularly in revenue growth and operating margins. The problem is that these improvements aren't as strong as the market had hoped, mainly due to the lack of room for runners-up in the data center space. Analysts have consistently lowered their bottom-line estimates for AMD for the next three years, while Nvidia, its main rival, continues to see its estimates rise.

I'm neutral on AMD stock right now as the semiconductor supplier struggles to convince investors that it's a better investment than Nvidia, which, despite also being down in 2025, trades at similar valuation metrics when adjusted for growth. NVDA's data center business is growing faster, has higher margins, and is still far ahead in market share. So, while the bulls argue this is a good time to buy AMD stock on the dip, citing long-term value ahead, bearish sentiment and downward revisions are continuing unabated. As things stand, I don't think there will be a reversal for AMD stock in the short or mid-term.

First, it's important to note that AMD's data centers are the key area investors are closely watching. In AMD's most recent quarter, Q4, the company reported 24% year-over-year revenue growth, with data centers accounting for 52% of that growth. The data center segment itself saw a 69% year-over-year increase. More importantly, the management team has expressed even greater optimism about its AI prospects. Significant highlights were the ramp-up of the Instinct MI300X data center GPU and its EPYC server CPUs. Obviously, 69% yearly growth is far from bad, but it's clear that it does not meet investors' high expectations.
As Susquehanna's Christopher Rolland points out, much of the selloff in AMD stock over the last year can be attributed to more tempered expectations for the MI300 series. A year ago, incremental MI300 sales were projected to hit $11 billion to $12 billion by 2025, but analysts are now revising those expectations to about half that, essentially a 50% cut in expectations for what was supposed to be AMD's most significant growth driver.

Additionally, it doesn't help that other parts of AMD's business, like the PC and gaming segments, are still struggling. Potential headwinds, such as tariffs on China, Canada, and Mexico, could weigh on global PC sales. Plus, the Sony PlayStation 5, which uses a custom AMD-designed CPU and GPU, is now a few years old, and discussions about the next-generation console are likely just around the corner.

One sign that AMD's selloff doesn't reflect worsening fundamentals is the company's margins. AMD's margins have been trending upward over the last decade, despite a significant drop in the past three years due to slowing sales of personal computers and gaming products. But now, after several consecutive quarters of recovery, margins are rising again. In Q4, the company's operating margin stood at 11%, an increase of 5 percentage points compared to a year earlier. I wouldn't be surprised if operating margins surpass 20% on a trailing 12-month basis by the end of this year, or possibly by mid-next year.

The company continues expanding its data center business and launching new products in 2025, like the upgraded AMD Instinct MI350 GPUs for data centers. The launch, initially planned for the second half of 2025, has been accelerated due to 'strong customer demand,' management pointed out. On top of that, AMD recently acquired server maker ZT Systems. These moves should help drive more revenue growth with higher profit margins in the near term.
One of the main concerns hanging over AMD is that the AI GPU market simply doesn't leave much room for runners-up. With both companies trading at similar valuations, why buy AMD at a forward P/E of 15x and a PEG of 0.5x when investors can buy Nvidia stock at a forward P/E of 19x and a PEG of 0.7x? Nvidia remains several steps ahead of AMD, and savvy investors know it. For example, in Q4, Nvidia's revenue jumped by nearly 80%, almost entirely driven by its data center business, which grew by 93% in the quarter. These numbers are much stronger than AMD's, especially considering Nvidia is already the clear market leader by a considerable margin.

Of course, AMD is actively closing the performance gap and seeking greater market share with its next-generation GPU, which the company's management team has called its biggest generational leap in AI. However, in the past three months, analysts have revised AMD's EPS consensus downward by 7.8%, 10.1%, and 15% over the next three years. In contrast, Nvidia's EPS consensus for the same period has been revised upward by 1.5%, 2.7%, and 5%. This highlights how bearish the market sentiment remains for AMD, especially regarding its growth prospects in data centers. With the valuation gap not all that large when adjusted for growth, the market seems to favor Nvidia for the time being.

On Wall Street, the consensus on AMD stock is generally bullish, though with signs of moderation. Of the 37 analysts covering the stock, 25 are bullish, 11 are neutral, and just one is bearish. AMD's average price target is $147.88 per share, suggesting a potential upside of 53% from the current price.

AMD's fundamentals remain strong, particularly in its data center segment, which has been growing rapidly and contributing more significantly to revenue with the rollout of its MI300X GPUs.
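The PEG comparison can be made concrete: since PEG is the forward P/E divided by the expected EPS growth rate (in percent), each quoted PEG implies the growth rate the valuation is pricing in. A quick sketch using the article's multiples:

```python
# Implied EPS growth from the quoted multiples: growth (%) = forward P/E / PEG.
# Both sets of figures are from the article.
amd_pe, amd_peg = 15.0, 0.5
nvda_pe, nvda_peg = 19.0, 0.7

amd_growth = amd_pe / amd_peg      # 30.0% implied EPS growth
nvda_growth = nvda_pe / nvda_peg   # ~27.1% implied EPS growth
print(f"AMD implied growth:    {amd_growth:.1f}%")
print(f"Nvidia implied growth: {nvda_growth:.1f}%")
```

The implied growth rates land within a few points of each other, which is the sense in which the two stocks trade at "similar valuation metrics when adjusted for growth."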
However, sales expectations for the MI300 series have fallen short of market hopes, especially given AMD's valuation, which isn't far from its main competitor, Nvidia, the dominant force in the GPU space. While I have no doubt that AMD will continue to thrive in the long run, I believe the valuation gap with Nvidia needs to widen further. In other words, AMD's stock would need to become significantly cheaper to be truly attractive.