
Latest news with #ComputeExpressLink

Panmnesia Introduces Today's and Tomorrow's AI Infrastructure, Including a Supercluster Architecture That Integrates NVLink, UALink, and HBM via CXL

Business Wire

18-07-2025



DAEJEON, South Korea--(BUSINESS WIRE)--Panmnesia has released a technical report titled 'Compute Can't Handle the Truth: Why Communication Tax Prioritizes Memory and Interconnects in Modern AI Infrastructure.' In this report, Panmnesia outlines the trends in modern AI models, the limitations of current AI infrastructure in handling them, and how emerging memory and interconnect technologies—including Compute Express Link (CXL), NVLink, Ultra Accelerator Link (UALink), and High Bandwidth Memory (HBM)—can be leveraged to improve AI infrastructure. Panmnesia aims to address the current challenges in AI infrastructure by building flexible, scalable, and communication-efficient architectures using diverse interconnect technologies instead of fixed GPU-based configurations.

Panmnesia's CEO, Dr. Myoungsoo Jung, explained, 'This technical report was written to share, more clearly and accessibly, the ideas on AI infrastructure that we presented during a keynote last August. We aimed to explain AI and large language models (LLMs) in a way that even readers without deep technical backgrounds could understand. We also explored how AI infrastructure may evolve in the future, considering the unique characteristics of AI services.' He added, 'We hope this report proves helpful to those interested in the field.'

Overview of the Technical Report

Panmnesia's technical report is divided into three main parts:

  • Trends in AI and Modern Data Center Architectures for AI Workloads
  • CXL Composable Architectures: Improving Data Center Architecture Using CXL and Acceleration Case Studies
  • Beyond CXL: Optimizing AI Resource Connectivity in Data Centers via Hybrid Link Architectures (CXL-over-XLink Supercluster)

1. Trends in AI and Modern Data Center Architectures for AI Workloads [1]

AI applications based on sequence models—such as chatbots, image generation, and video processing—are now widely integrated into everyday life. The technical report begins with an overview of sequence models, their underlying mechanisms, and the evolution from recurrent neural networks (RNNs) to large language models (LLMs). It then explains how current AI infrastructures handle these models and discusses their limitations. In particular, Panmnesia identifies two major challenges in modern AI infrastructures: (1) communication overhead during synchronization and (2) low resource utilization resulting from rigid, GPU-centric architectures.

2. CXL Composable Architectures: Improving Data Center Architecture Using CXL and Acceleration Case Studies [2]

To address these challenges, Panmnesia proposes a solution built on CXL, an emerging interconnect technology. The report offers a thorough explanation of CXL's core concepts and features, emphasizing how it can minimize unnecessary communication through automatic cache-coherence management and enable flexible resource expansion—ultimately addressing key challenges of conventional AI infrastructure. Panmnesia also introduces its CXL 3.0-compliant real-system prototype developed using its core technologies, including CXL IPs and CXL switches. The report then shows how this prototype has been applied to accelerate real-world AI applications—such as retrieval-augmented generation (RAG) and deep learning recommendation models (DLRM)—demonstrating the practicality and effectiveness of CXL-based infrastructure.

3. Beyond CXL: Optimizing AI Resource Connectivity in Data Centers via Hybrid Link Architectures (CXL-over-XLink Supercluster) [3]

The report is not limited to CXL alone. Panmnesia goes further by proposing methods to build more advanced AI infrastructure through the integration of diverse interconnect technologies alongside CXL. At the core of this approach is the CXL-over-XLink supercluster architecture, which uses CXL to enhance scalability, compatibility, and communication efficiency across clusters connected via accelerator-centric interconnects—collectively referred to as XLink—including UALink, NVLink, and NVLink Fusion. The report explains how the integration of these interconnect technologies enables an architecture that combines the advantages of each. It concludes with a discussion of the practical application of emerging technologies such as HBM and silicon photonics.

Conclusion

With the release of this technical report, Panmnesia reinforces its leadership in next-generation interconnect technologies such as CXL and UALink. In parallel, the company continues to participate actively in various consortia related to AI infrastructure, including the CXL Consortium, UALink Consortium, PCI-SIG, and the Open Compute Project. Recently, Panmnesia also unveiled its 'link solution' product lineup, designed to realize its vision for next-generation AI infrastructure and further strengthen its brand identity. Dr. Myoungsoo Jung, CEO of Panmnesia, stated, 'We will continue to lead efforts to build better AI infrastructure by developing diverse link solutions and sharing our insights openly.'

The full technical report on AI infrastructure is available on Panmnesia's website:

[1] This corresponds to Sections 2 and 3 of the technical report.
[2] This corresponds to Sections 4 and 5 of the technical report.
[3] This corresponds to Section 6 of the technical report.
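To give a feel for the synchronization overhead the report calls a "communication tax," the standard ring all-reduce volume formula is a useful back-of-the-envelope tool. This is a generic illustration, not taken from Panmnesia's report; the model size and GPU counts below are arbitrary example values.

```python
# Ring all-reduce: each of N workers transfers 2 * (N - 1) / N * S bytes
# to synchronize S bytes of gradients across all workers.
def allreduce_traffic_per_gpu(num_gpus: int, grad_bytes: float) -> float:
    """Bytes each GPU must send+receive for one ring all-reduce."""
    return 2 * (num_gpus - 1) / num_gpus * grad_bytes

# Hypothetical example: a 7B-parameter model with fp16 gradients (~14 GB).
grad_bytes = 7e9 * 2
for n in (8, 64, 512):
    gb = allreduce_traffic_per_gpu(n, grad_bytes) / 1e9
    print(f"{n:4d} GPUs: {gb:.1f} GB transferred per GPU per step")
```

The per-GPU traffic approaches 2S as the cluster grows, which is why interconnect bandwidth, rather than raw compute, often bounds scaling in large training clusters.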


Primemas Announces Customer Samples Milestone of World's First CXL 3.0 SoC

Yahoo

24-06-2025



Working with Micron and their CXL AVL program to accelerate commercialization of next-generation memory solutions for data centers and AI infrastructure

SANTA CLARA, Calif., and SEOUL, South Korea, June 24, 2025--(BUSINESS WIRE)--Primemas Inc., a fabless semiconductor company specializing in chiplet-based SoC solutions through its Hublet® architecture, today announced the availability of customer samples of the world's first Compute Express Link (CXL) 3.0 memory controller. Primemas has been delivering engineering samples and development boards to select strategic customers and partners, who have played a key role in validating the performance and capabilities of Hublet® compared to alternative CXL controllers. Building on this successful early engagement, Primemas is now pleased to announce that Hublet® product samples are ready for shipment to memory vendors, customers, and ecosystem partners.

While conventional CXL memory expansion controllers are limited by fixed form factors and capped DRAM capacities, Primemas leverages cutting-edge chiplet technology to deliver unmatched scalability and modularity. At the core of this innovation is the Hublet®—a versatile building block that enables a wide variety of configurations. Primemas customers are finding innovative ways to leverage this modularity:

  • A 1x1 single Hublet® delivers compact E3.S products supporting up to 512GB of DRAM;
  • A 2x2 Hublet® can support PCIe add-in-card or CEM products with up to 2TB of DRAM; and
  • For hyperscale environments, a 4x4 Hublet® powers a 1U rack memory appliance capable of an impressive 8TB of DRAM.

"We are very encouraged by the excellent feedback from our initial partners, who leveraged Hublet® to address the challenges posed by rapidly growing workloads," said Jay Kim, EVP and Head of Business Development at Primemas. "We're excited to take the next major step toward commercialization through our collaboration with Micron and their CXL AVL program."
The CXL ASIC Validation Lab (AVL) program was established by Micron to help bring next-generation CXL controllers to market and achieve maximum reliability and compatibility with its advanced DRAM modules. There are numerous challenges to delivering stable, reliable memory read and write operations while optimizing performance and power efficiency in CXL controllers. Through this joint effort, the two companies aim to deliver a high-quality, reliable CXL 3.0 controller—the world's first—along with the latest high-capacity 128GB RDIMM modules.

"With the rapid adoption of AI, and the corresponding increase in memory-intensive workloads, CXL-based solutions are driving innovations to transform traditional compute platforms," said Luis Ancajas, director of CXL Business Development at Micron. "As an industry leader in data center memory solutions, we are excited to collaborate with innovators like Primemas to validate and accelerate next-generation solutions like the Hublet® SoC through our AVL program and help bring these transformative solutions to market to unlock new levels of performance, scalability and efficiency for the data center."

This joint effort demonstrates the shared commitment of Primemas and Micron to innovation and quality in the semiconductor industry and further strengthens Primemas' position as a leader in scalable, high-performance chiplet-based SoC solutions for CXL, AI, and data analytics applications.

About Primemas

Primemas is a fabless semiconductor company delivering pre-built SoC hub chiplets (Hublet®) to streamline development and manufacturing—reducing the cost and time associated with custom design and production. The Hublet® platform provides scalable I/O, control, and compute functionality, supporting markets such as CXL, AI, and data analytics. Primemas is headquartered in Santa Clara, California, with an R&D center in Seoul, South Korea.
To learn more about Primemas, visit
Press Contact: press@


MRVL to Post Q1 Earnings: Time to Buy, Sell or Hold the Stock?

Yahoo

26-05-2025



Marvell Technology, Inc. MRVL is scheduled to report first-quarter fiscal 2026 results after market close on May 29, 2025. Marvell Technology anticipates revenues of $1.875 billion (+/- 5%) for the quarter. The Zacks Consensus Estimate for MRVL's fiscal first-quarter revenues is pegged at $1.88 billion, indicating year-over-year growth of 61.6%. For the quarter, the company expects non-GAAP earnings of 61 cents per share (+/- 5 cents per share). The Zacks Consensus Estimate for fiscal first-quarter earnings is pegged at 61 cents per share, reflecting a 154.2% increase year over year. The consensus mark for earnings has remained unchanged over the past 60 days. (Find the latest EPS estimates and surprises on the Zacks Earnings Calendar.)

Marvell Technology's earnings surpassed the Zacks Consensus Estimate in each of the trailing four quarters, with an average surprise of 4.25%. Our proven model does not conclusively predict an earnings beat for Marvell Technology this time. The combination of a positive Earnings ESP and a Zacks Rank #1 (Strong Buy), 2 (Buy) or 3 (Hold) increases the odds of an earnings beat, which is not the case here. Though Marvell Technology currently carries a Zacks Rank #3, it has an Earnings ESP of 0.00%. You can uncover the best stocks to buy or sell before they are reported with our Earnings ESP Filter. You can see the complete list of today's Zacks #1 Rank stocks here.

Marvell Technology's overall first-quarter revenues are likely to have benefited from improved performance across the majority of its end markets. The company's data center division continues to be the primary engine of growth, benefiting from rising demand for electro-optics products, custom artificial intelligence (AI) silicon and next-generation switches.
Our model estimates suggest that first-quarter data center revenues will reach $1.395 billion, implying 2.1% sequential growth. The growing adoption of 800-gig PAM products and 400ZR data center interconnect solutions is fueling top-line expansion. Additionally, advancements in Compute Express Link technology and increased AI-related investments position Marvell as a key player in the high-performance computing ecosystem.

Improved inventory corrections and recovering demand are helping Marvell's networking and carrier segments rebound. Our projections indicate that Enterprise Networking and Carrier revenues will rise 8.1% and 9% sequentially, reaching $114.4 million and $186.9 million, respectively. Marvell Technology's carrier segment is benefiting from new design wins in cloud-driven networking solutions. As telecom providers upgrade their infrastructure for AI-driven applications, MRVL's networking division should continue to see steady improvements.

The Automotive and Industrial divisions have been consistent revenue contributors for Marvell Technology, thanks to the increasing semiconductor content in vehicles and growth in industrial automation. For the first quarter, our model estimate for Automotive/Industrial revenues is pegged at $88.9 million, indicating a 3.7% sequential improvement. With automakers ramping up production of connected and electric vehicles, Marvell Technology's automotive Ethernet solutions and advanced driver-assistance system technologies should continue to see steady adoption.

Despite the strength of Marvell Technology's data center, networking and AI segments, its consumer end market remains a weak spot. Seasonality in gaming and broader macroeconomic uncertainty might have resulted in weak revenues in this segment. In the past year, MRVL shares have plunged 20.9%, underperforming the Zacks Electronics – Semiconductors industry's growth of 14.1%.
Now, let's look at the value Marvell Technology offers investors at current levels. MRVL stock trades at a discounted price, with a forward 12-month price-to-sales (P/S) multiple of 5.99X compared with the industry's 7.54X.

Marvell Technology's custom silicon business is a game-changer, particularly in the booming data center market. Cloud service providers rely on its highly specialized chips to optimize AI computing efficiency, networking speed and energy consumption. Marvell Technology has also formed strong collaborations with industry leaders, including NVIDIA NVDA, Juniper Networks JNPR and Coherent Corp. COHR, to design high-speed networking technology for AI workloads. Marvell Technology and NVIDIA have collaborated to integrate MRVL's optical interconnect solutions with NVIDIA's AI and computing technology. Using the NVIDIA HGX H100 eight-GPU platform, BlueField-3 DPUs, Spectrum-X networking, and Marvell's interconnects, they have developed NVIDIA Israel-1 to power AI applications with high efficiency. Marvell Technology has also collaborated with Juniper Networks and Coherent Corp. to develop 800ZR networking solutions: together, these companies combined Juniper's PTX10002-36QDD Packet Transport Router, Coherent's 800ZR transceiver, and MRVL's Orion 800G coherent DSP into a networking solution to support AI, cloud, and 5G.

However, the U.S. government's recent steps toward China have been a matter of concern for Marvell Technology, as the company generates significant revenues (about 43% of its fiscal 2025 total) from the Chinese market. Because Marvell Technology owns research and development facilities in China and outsources to China, the growing geopolitical tension, fear of fresh sanctions and persistent tariff threats have added to investors' skepticism. Still, given MRVL's strong fundamentals, those concerns seem overblown.
The recent U.S.-China agreement to temporarily reduce tariffs on each other's goods could provide near-term relief to Marvell Technology's business. Marvell Technology's upcoming quarterly results are likely to mark the beginning of a multi-year growth story fueled by AI innovation. However, the company is also exposed to the U.S.-China trade war, as it depends heavily on both nations. Considering all these factors, we suggest that investors retain MRVL stock at present.

Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report: Juniper Networks, Inc. (JNPR) : Free Stock Analysis Report; NVIDIA Corporation (NVDA) : Free Stock Analysis Report; Marvell Technology, Inc. (MRVL) : Free Stock Analysis Report; Coherent Corp. (COHR) : Free Stock Analysis Report. This article was originally published on Zacks Investment Research.
