AMD Unveils Vision For Open AI Ecosystem

AMD delivered its comprehensive, end-to-end integrated AI platform vision and introduced its open, scalable rack-scale AI infrastructure built on industry standards at its 2025 Advancing AI event.
Dr. Lisa Su, chairman and CEO of AMD, emphasized the company's role in accelerating AI innovation. "We are entering the next phase of AI, driven by open standards, shared innovation and AMD's expanding leadership across a broad ecosystem of hardware and software partners who are collaborating to define the future of AI," Su said.
AMD announced a broad portfolio of hardware, software and solutions to power the full spectrum of AI:

AMD unveiled the Instinct MI350 Series GPUs, setting a new benchmark for performance, efficiency and scalability in generative AI and high-performance computing. The MI350 Series, consisting of both Instinct MI350X and MI355X GPUs and platforms, delivers a 4x generation-on-generation AI compute increase and a 35x generational leap in inferencing, paving the way for transformative AI solutions across industries. The MI355X also delivers significant price-performance gains, generating up to 40% more tokens-per-dollar compared with competing solutions.

AMD demonstrated end-to-end, open-standards rack-scale AI infrastructure, already rolling out with AMD Instinct MI350 Series accelerators, 5th Gen AMD EPYC processors and AMD Pensando Pollara NICs in hyperscaler deployments such as Oracle Cloud Infrastructure (OCI), and set for broad availability in the second half of 2025.

AMD also previewed its next-generation AI rack, "Helios." It will be built on the next-generation AMD Instinct MI400 Series GPUs, which are expected to deliver up to 10x more performance than the previous generation when running inference on Mixture of Experts models, along with "Zen 6"-based AMD EPYC "Venice" CPUs and AMD Pensando "Vulcano" NICs.

The latest version of the AMD open-source AI software stack, ROCm 7, is engineered to meet the growing demands of generative AI and high-performance computing workloads while dramatically improving the developer experience. ROCm 7 features improved support for industry-standard frameworks, expanded hardware compatibility, and new development tools, drivers, APIs and libraries to accelerate AI development and deployment.

The Instinct MI350 Series exceeded AMD's five-year goal to improve the energy efficiency of AI training and high-performance computing nodes by 30x, ultimately delivering a 38x improvement. AMD also unveiled a new 2030 goal: a 20x increase in rack-scale energy efficiency from a 2024 base year, enabling a typical AI model that today requires more than 275 racks to be trained in a single, fully utilized rack by 2030, using 95% less electricity.

AMD also announced the broad availability of the AMD Developer Cloud for the global developer and open-source communities. Purpose-built for rapid, high-performance AI development, it gives users access to a fully managed cloud environment with the tools and flexibility to get started with AI projects and to grow without limits. With ROCm 7 and the AMD Developer Cloud, AMD is lowering barriers and expanding access to next-generation compute, and strategic collaborations with leaders like Hugging Face, OpenAI and Grok are proving the power of co-developed, open solutions.
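The efficiency claims above can be sanity-checked with simple arithmetic. The following sketch uses only the figures quoted in the announcement; the rack counts and percentages are AMD's projections, not measurements:

```python
# Back-of-the-envelope check of AMD's stated efficiency figures,
# using only the numbers quoted in the announcement above.

node_goal = 30        # original five-year node-efficiency goal (x)
node_achieved = 38    # improvement AMD says the MI350 Series delivered (x)

rack_goal = 20        # 2030 rack-scale energy-efficiency target (x, vs. 2024)
racks_today = 275     # racks a "typical" AI model needs today, per AMD

# A 20x energy-efficiency gain means the same training run uses 1/20
# of the energy -- i.e. a 95% reduction, matching the quoted figure.
energy_fraction = 1 / rack_goal
print(f"energy saved: {1 - energy_fraction:.0%}")

# The 275-racks-to-one-rack claim implies a ~275x overall gain, so it
# must combine the 20x hardware target with further software- and
# model-level improvements beyond the rack-scale goal itself.
implied_total_gain = racks_today / 1
print(f"implied overall gain: {implied_total_gain:.0f}x")
```

Note that the 95% electricity figure follows directly from the 20x target, while the rack-count reduction is a broader projection.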
Broad Partner Ecosystem Showcases AI Progress Powered by AMD
Today, seven of the 10 largest model builders and AI companies are running production workloads on Instinct accelerators. Among them are Meta, OpenAI, Microsoft and xAI, who joined AMD and other partners at Advancing AI to discuss how they are working with AMD to train today's leading AI models, power inference at scale and accelerate AI exploration and development:

Meta detailed how Instinct MI300X is broadly deployed for Llama 3 and Llama 4 inference. Meta expressed enthusiasm for the MI350 Series and its compute power, performance-per-TCO and next-generation memory, and continues to collaborate closely with AMD on AI roadmaps, including plans for the Instinct MI400 Series platform.

OpenAI CEO Sam Altman discussed the importance of holistically optimized hardware, software and algorithms, and OpenAI's close partnership with AMD on AI infrastructure, with research and GPT models on Azure in production on MI300X, as well as deep design engagements on the MI400 Series platforms.

Oracle Cloud Infrastructure (OCI) is among the first industry leaders to adopt the AMD open rack-scale AI infrastructure with AMD Instinct MI355X GPUs. OCI leverages AMD CPUs and GPUs to deliver balanced, scalable performance for AI clusters, and announced it will offer zettascale AI clusters accelerated by the latest AMD Instinct processors, with up to 131,072 MI355X GPUs, to enable customers to build, train and run inference on AI at scale.

HUMAIN discussed its landmark agreement with AMD to build open, scalable, resilient and cost-efficient AI infrastructure leveraging the full spectrum of computing platforms only AMD can provide.

Microsoft announced that Instinct MI300X is now powering both proprietary and open-source models in production on Azure.

Cohere shared that its high-performance, scalable Command models are deployed on Instinct MI300X, powering enterprise-grade LLM inference with high throughput, efficiency and data privacy.

Red Hat described how its expanded collaboration with AMD enables production-ready AI environments, with AMD Instinct GPUs on Red Hat OpenShift AI delivering powerful, efficient AI processing across hybrid cloud environments.

Astera Labs highlighted how the open UALink ecosystem accelerates innovation and delivers greater value to customers, and shared plans to offer a comprehensive portfolio of UALink products to support next-generation AI infrastructure.

Marvell joined AMD to highlight its collaboration as part of the UALink Consortium developing an open interconnect, bringing the ultimate flexibility to AI infrastructure.

Related Articles

META's Tech Growth Surges as AI PCs Lead Market Shift

TECHx • 4 days ago

The Middle East, Türkiye, and Africa (META) region is seeing rapid growth in its IT markets. A major driver of this progress is the rising adoption of AI PCs. CONTEXT's latest META Monthly Webinar confirms that the UAE and Saudi Arabia are leading in IT distribution sales. However, this leadership is not uniform across the region: South Africa shows lower shares compared with the UAE and Saudi Arabia, while Türkiye posted exceptional growth in April, likely due to specific deals. A key highlight across all markets is the growing demand for AI-capable notebooks, a trend that points to a new computing era in META.

UAE: A Digital Powerhouse in AI PCs and Cloud

The United Arab Emirates continues to strengthen its status as a leader in digital transformation. It has shown strong growth in IT distribution sales, both in volume and revenue, through April 2025. This performance spans major product categories, from desktops and smartphones to servers and security software, and reflects the country's broad and deep commitment to digital progress. The successful hosting of GISEC 2025 further underscores the UAE's position in cybersecurity and technology. Adding to this momentum is the launch of GPU-as-a-Service (GPUaaS) by a UAE-based telecom and digital provider, a major leap in cloud services that also strengthens the UAE's lead in AI PCs and advanced computing. The UAE's proactive approach is backed by the highest ICT spending in the META region, driving steady growth and innovation.

Saudi Arabia: A Growing Tech Ecosystem with AI at Its Core

Saudi Arabia is also making strong strides in technology. The country has recorded remarkable growth in notebooks and servers, two crucial categories for AI PCs. Its tech and communications sector hit $48 billion last year and is expected to exceed that in 2025. A standout development is the $10 billion partnership between AMD and HUMAIN, a new Saudi AI venture. This collaboration aims to build the world's most open and scalable AI infrastructure and marks a major step toward embedding AI into the country's digital ecosystem. The move puts Saudi Arabia on the global map for AI leadership and is likely to accelerate demand for AI PCs.

The Rise of AI-Capable PCs: A Regional Trend

A defining trend across META is the fast shift to AI PCs equipped with advanced chipsets. By April 2025, 22% of all notebooks sold in the region had an NPU (Neural Processing Unit), up from just 8% at the beginning of 2024. These AI PCs include Intel Core Ultra, Apple M-series, and AMD Ryzen AI chips. Their growing availability is boosting market share even though end-user AI demand is still developing. The result is higher Average Selling Prices (ASPs) and improved revenue for IT distributors. Another factor is the upcoming end of support for Windows 10 in October 2025, which is expected to trigger new PC refresh cycles across META.

Market Outlook and Future Prospects for AI PCs

The META region offers strong opportunities for tech growth, with AI PCs at the center of this evolution. The UAE and Saudi Arabia continue to lead through focused digital investments and rapid AI integration. As AI technologies mature and become more user-friendly, demand for AI PCs is expected to grow further. These devices are changing procurement strategies and pushing the region toward the next wave of innovation. Insights from CONTEXT's META Monthly Webinar clearly show the region is evolving fast, and strategic thinking and adaptability will be key to success. For META, the future of computing is here, and it is being shaped by AI PCs.

HPE Expands ProLiant Gen12 Server Portfolio

Channel Post MEA • 26-06-2025

HPE has announced an expansion to the HPE ProLiant Compute Gen12 server portfolio, which delivers next-level security, performance and efficiency. The expanded portfolio includes two new servers powered by 5th Gen AMD EPYC processors to optimize memory-intensive workloads, and new automation features for greater visibility and control delivered through HPE Compute Ops Management. In addition, HPE ProLiant Compute servers are now available with HPE Morpheus VM Essentials Software support. HPE Morpheus VM Essentials is an open virtualization solution that helps reduce costs, minimize vendor lock-in, and simplify IT management. HPE also announced new HPE for Azure Local solutions with the HPE ProLiant DL145 Gen11 server to expand purpose-built edge capabilities across distributed environments.

"Enterprise workloads are growing and evolving, requiring next-generation technologies that maximize IT environments, securely and efficiently," said Krista Satterthwaite, senior vice president and general manager, Compute at HPE. "The enhanced HPE ProLiant Compute Gen12 portfolio, which now includes new servers powered by 5th Gen AMD EPYC processors, drives even greater optimized workload performance, security protection from the chip to the cloud, and a boost to productivity with AI-driven management capabilities."

HPE delivers double the memory with new HPE ProLiant Gen12 servers

HPE added two new servers to the HPE ProLiant Gen12 portfolio: the HPE ProLiant Compute DL325 and DL345 Gen12 servers with the latest AMD EPYC processors. The new servers are optimized to handle memory-intensive workloads, such as virtualization and edge deployments, with twice as much memory, up to 6TB, compared with the previous generation.[1] The new servers feature the next-generation HPE Integrated Lights-Out (HPE iLO 7), HPE's industry-leading security IP that builds in protection at the silicon level. The latest version now safeguards the server from the factory floor during assembly all the way through a server's end of life, and includes protection against future quantum computing attacks.

Simplifying management and boosting productivity with enhanced automation features through HPE Compute Ops Management

HPE also announced new automated and AI-driven features for HPE Compute Ops Management, a secure, cloud-based software application to monitor and manage servers. Available on all HPE ProLiant Compute Gen12 servers, these new features offer customers a number of benefits across their environment:

Enhanced insights: a new tool allows customers to view HPE Active Health System files within HPE Compute Ops Management to expedite root-cause analysis, avoid opening support tickets, and greatly reduce mean time to resolution.

Reduced complexity: seamless integration of multi-vendor server monitoring into existing environments helps simplify operations, decrease the number of tools required for daily activities and achieve a consolidated view of the entire compute infrastructure.

Decreased downtime: AI-driven insights with workflow policy approvals safeguard against major disruptive operations often caused by human error, ensuring that policies are verified and the necessary layers of action approval are in place to prevent downtime.

HPE Compute Ops Management is already helping customers simplify server management, dramatically reducing manual effort in deployment. Improvements include:[2]

Up to 75% less time spent managing servers

Up to 4.8 hours less downtime per server per year

Up to $152,000 in travel and software costs saved over three years

Modernizing and optimizing costs with HPE Morpheus VM Essentials Software

HPE is further simplifying IT management for customers by enabling HPE Morpheus VM Essentials Software on the latest HPE ProLiant servers. With the virtualized solution, customers can achieve higher performance for virtualized workloads while saving space and energy, and reducing licensing costs by up to 90 percent.[3] HPE ProLiant Compute Gen12 servers featuring AMD EPYC™ processors offer performance and power efficiency with HPE Morpheus VM Essentials. Additional details can be found in "Driving Virtualization Forward: The Power of AMD and HPE Morpheus VM Essentials Software."

Empowering virtualization at the edge with seamless integration through the HPE ProLiant DL145 for Azure Local Integrated System

Additionally, HPE announced that the HPE ProLiant DL145 Gen11 server, engineered for the edge and featuring AMD EPYC processors, is now available as an HPE Integrated System for Azure Local, providing seamless integration between on-premises infrastructure and cloud services. The HPE ProLiant DL145 Gen11 server is compact, resilient, quiet, and adaptable to fit on any tabletop, wall, or cabinet, making it ideal for any edge environment across retail, financial services, manufacturing, and healthcare. The server is easy to deploy with plug-and-play installation and blends seamlessly into a customer's environment while delivering high performance for data processing and real-time insights. The HPE ProLiant DL145 Gen11 servers also offer HPE Compute Ops Management to securely access, monitor, and manage servers wherever they live, and are pre-configured with the latest software and security updates so systems can be deployed quickly.

Availability

HPE ProLiant DL145 for Azure Local Integrated System is available today. HPE ProLiant Compute DL325 and DL345 Gen12 servers are available for order today and will ship in July. HPE Morpheus VM Essentials software is available today.

Biwin DW100 DDR5 192 GB Memory Kit Announced

TECHx • 18-06-2025

Biwin has announced the launch of its latest high-capacity memory solution, the Biwin Black Opal OC Lab Gold Edition DW100 RGB DDR5 192 GB Memory Kit. Revealed on June 18, 2025, in Shenzhen, the kit delivers a 192 GB (48 GB x 4) configuration built on the DDR5-6000 CL28-36-36-102 1.4V specification. Biwin, a global innovator in memory and storage technologies, reported that the new kit breaks the traditional capacity limits of consumer memory and is designed to meet the performance needs of AI computing, large-scale data processing, and next-generation workloads.

The DW100 enables users to harness DDR5's enhanced data throughput, supporting fast, out-of-the-box speeds for AI computing, large language models (LLMs), generative AI, and edge computing tasks. It is engineered for ultra-low latency and maximum system responsiveness. Featuring DDR5-6000 CL28 speeds, it enhances performance through optimized memory timings and improved signal integrity. Biwin highlighted that DDR5 6000 MT/s is considered the "sweet spot" for AMD platforms, ensuring efficient memory scaling, stable operation, and improved system efficiency across demanding workloads.

Key performance benefits include:

CL28 latency for faster access and greater stability

Optimized for parallel computing and real-time AI inference

The memory kit is compatible with MSI, ASUS, and Gigabyte X870 and B850 motherboards; Biwin advised users to consult official motherboard websites for detailed compatibility. The kit also supports AMD EXPO, allowing effortless memory tuning and overclocking via BIOS for optimal performance on next-generation AMD platforms.

The Biwin DW100 memory kit will be available in select regions starting late June 2025 and is expected to be priced at approximately $849. This high-performance DDR5 memory kit is targeted at professionals, AI developers, and tech enthusiasts, combining ultra-high capacity, low-latency performance, and strong overclocking potential. For detailed product information, Biwin recommends visiting its official website. The product was developed by Biwin's OC Lab, which focuses on elite overclocking performance; the lab selects top-grade semiconductor materials and pushes the limits of memory design to exceed traditional benchmarks.
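The headline timing, DDR5-6000 CL28, can be translated into absolute first-word latency with standard DRAM arithmetic: because DDR transfers data on both clock edges, the I/O clock runs at half the transfer rate, so latency in nanoseconds is CAS cycles divided by half the MT/s rate. A short sketch (the comparison profile is illustrative, not a specific competing product):

```python
# First-word CAS latency in nanoseconds for a DDR memory kit.
# Double data rate means the I/O clock is half the transfer rate,
# so latency_ns = cas_cycles / (transfer_rate_mts / 2) * 1000.

def cas_latency_ns(cl: int, mt_s: int) -> float:
    """Absolute CAS latency in ns for CL cycles at mt_s megatransfers/s."""
    return cl / (mt_s / 2) * 1000

# The Biwin kit's rated profile:
print(f"DDR5-6000 CL28: {cas_latency_ns(28, 6000):.2f} ns")  # ~9.33 ns

# A higher-clocked but looser hypothetical profile for comparison:
print(f"DDR5-8000 CL38: {cas_latency_ns(38, 8000):.2f} ns")  # ~9.50 ns
```

At roughly 9.3 ns, the CL28 profile is slightly quicker to first word than the faster-clocked example, which is the kind of trade-off behind calling 6000 MT/s a "sweet spot" for current AMD platforms.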
