AMD unveils new Radeon, Ryzen & AI PC innovations at Computex

Techday NZ | 21-05-2025
AMD has announced a series of updates to its product portfolio, introducing new entries to the Ryzen and Radeon lines as well as developments centred on AI-powered PCs.
At Computex 2025 in Taipei, the company presented the Radeon RX 9060 XT graphics cards, the Radeon AI PRO R9700 workstation graphics cards, and the Ryzen Threadripper 9000 Series and 9000 WX-Series processors. AMD executives outlined how these releases are positioned for gaming, professional workstations, and AI development.
The Radeon RX 9060 XT graphics cards, based on the AMD RDNA 4 architecture, will be available with either 8GB or 16GB of GDDR6 memory. According to AMD, these cards deliver double the raytracing throughput of the previous generation and target smooth 1440p gaming. The 8GB model will start at USD $299 and the 16GB version at USD $349, with availability from board partners expected later in the year.
Jack Huynh, Senior Vice President and General Manager, Computing and Graphics Group at AMD, commented on the scale of the product introductions, stating, "These announcements underscore our commitment to continue delivering industry-leading innovation across our product portfolio. The Radeon RX 9060 XT and Radeon AI PRO R9700 bring the performance and AI capabilities of RDNA 4 to workstations and gamers all around the world, while our new Ryzen Threadripper 9000 Series sets the new standard for high-end desktops and professional workstations. Together, these solutions represent our vision for empowering creators, gamers, and professionals with the performance and efficiency to push boundaries and drive creativity."
The Radeon RX 9060 XT, designed for demanding gaming environments, features 32 RDNA 4 compute units. AMD reports that the model supports accelerated raytracing, aided by its increased throughput and by FidelityFX Super Resolution 4 (FSR 4), the company's machine learning upscaling technology. FSR 4 is designed to raise both frame rates and visual fidelity across a wide range of rendering conditions.
AMD's newly launched Radeon AI PRO R9700 GPU is designed for professional AI development and workstation tasks. With 32GB of memory, 64 compute units, and PCIe Gen 5 support, the graphics card is aimed at data-heavy workflows such as local AI inference, model finetuning, and scalable compute in multi-GPU configurations. The company claims the second-generation AI accelerators in this card offer up to twice the throughput of the previous generation.
Availability for the Radeon AI PRO R9700 is set for July 2025, with AMD indicating ongoing efforts to bring high-performance GPU acceleration to more AI and compute workloads through broader AMD ROCm on Radeon support.
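The ROCm on Radeon support referenced above is what lets AI frameworks run locally on Radeon hardware. As an illustrative aside (not part of AMD's announcement), the sketch below shows how a ROCm build of PyTorch typically exposes a supported Radeon GPU through the familiar torch.cuda API; which cards and driver versions are supported depends on the specific ROCm release.

```python
# Minimal sketch: confirming a ROCm build of PyTorch can see a Radeon GPU
# before attempting local inference. Assumes a ROCm-enabled PyTorch install;
# on ROCm builds the HIP device is surfaced through the torch.cuda API.
import torch

def describe_gpu() -> str:
    if not torch.cuda.is_available():
        return "No ROCm-visible GPU found"
    name = torch.cuda.get_device_name(0)
    hip = getattr(torch.version, "hip", None)  # set on ROCm builds, None on CUDA builds
    return f"Found {name} (HIP runtime: {hip})"

if __name__ == "__main__":
    print(describe_gpu())
    if torch.cuda.is_available():
        # Tiny smoke test: run a matrix multiply on the GPU.
        x = torch.randn(1024, 1024, device="cuda")
        print("Matmul checksum:", (x @ x).sum().item())
```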
The Ryzen Threadripper 9000 Series and 9000 WX-Series processors form the latest chapter in AMD's workstation strategy. These chips make use of the Zen 5 architecture and support record-setting core counts, including the Ryzen Threadripper PRO 9995WX which houses 96 cores and 192 threads. The processors offer up to 384MB of L3 cache and 128 PCIe 5.0 lanes, features that are oriented towards resource-intensive scenarios like VFX rendering, physics simulation, and large-scale AI model development. Enterprise-grade AMD PRO Technologies are integrated to enhance security, manageability, and platform stability.
System integrators and major manufacturers including Dell, HP, Lenovo and Supermicro are expected to offer products equipped with the new Ryzen Threadripper PRO 9000 WX-Series processors later this year. DIY and retail platforms for the 9000 Series are scheduled to follow in July 2025.
AMD is also continuing its partnership approach in the AI PC segment. One element of this is the new ASUS Expert P Series Copilot+ PCs, which are powered by up to AMD Ryzen AI PRO 300 Series processors boasting over 50 TOPS of NPU performance. These units are aimed at providing fast AI-enhanced productivity, enterprise security, and manageability for corporate environments.
S.Y. Hsu, Co-CEO of ASUS, stated, "We're proud to deepen our collaboration with AMD as we usher in a new era of AI-powered computing. With the addition of the new Expert series — built from the ground up to revolutionise performance and efficiency for the modern workplace — to our broad AI PC portfolio, and commitment to innovation, we aim to deliver next-gen AI experiences that empower users everywhere."
Luca Rossi, President, Intelligent Devices Group, Lenovo, added, "At Lenovo, we're committed to delivering AI PCs that are not only powerful, but truly personal and productive. Our long-standing collaboration with AMD continues to drive this vision forward — from high-performance laptops to innovative workstations. Together, we're enabling faster, smarter computing experiences for every kind of user. We're especially excited about what's coming next in our ThinkStation P8 workstation, where AMD's latest high-performance Ryzen Threadripper PRO processors will unlock new possibilities for creators and professionals alike."

Related Articles

Vultr launches early access to AMD Instinct MI355X GPU for AI

Techday NZ | 18-06-2025

Vultr has announced the availability of the AMD Instinct MI355X GPU as part of its cloud infrastructure services. As one of the first cloud providers to integrate the new AMD Instinct MI355X GPU, Vultr is now taking pre-orders for early access, with global availability scheduled for the third quarter of the year. The GPU forms part of AMD's latest focus on high-capacity computational demands, catering to artificial intelligence (AI) workloads as well as enterprise-scale applications.

Product features

The AMD Instinct MI355X GPU is based on AMD's 4th Generation CDNA architecture. According to Vultr, this GPU features 288 GB of HBM3E memory, delivers up to 8 TB/s of memory bandwidth, and supports expanded datatypes such as FP6 and FP4. These improvements are designed to address complex tasks ranging from AI training and inference to scientific simulations within high-performance computing (HPC) environments.

For customers operating within higher-density data environments, the Instinct MI355X supports direct liquid cooling (DLC). This enhancement offers increased thermal efficiency, which is intended to unlock greater computing performance per rack and facilitate advanced, scalable cooling strategies. The GPU is also supported by the latest version of AMD's ROCm software, which further optimises tasks related to AI inference, training, and compatibility with various frameworks. This results in improved throughput and reduced latency for critical operations.

AMD and Vultr partnership

Vultr's portfolio already includes other AMD offerings, such as the AMD EPYC 9004 Series and EPYC 7003 Series central processing units (CPUs), as well as previous GPU models like the Instinct MI325X and MI300X. Customers using the MI355X in combination with AMD EPYC 4005 Series CPUs will benefit from a fully supported computing stack across both processing and acceleration functions, streamlining high-powered workloads from end to end.

Negin Oliver, Corporate Vice President of Business Development, Data Centre GPU Business at AMD, stated: "AMD is the trusted AI solutions provider of choice, enabling customers to tackle the most ambitious AI initiatives, from building large-scale AI cloud deployments to accelerating AI-powered scientific discovery. AMD Instinct MI350 series GPUs paired with AMD ROCm software provide the performance, flexibility, and security needed to deliver tailored AI solutions that meet the diverse demands of the modern AI landscape."

The collaboration builds on Vultr's efforts to support a range of AMD solutions tailored for enterprise, HPC, and AI sectors, reinforcing the company's capacity to cater to evolving customer workloads.

Cloud market implications

J.J. Kardwell, Chief Executive Officer of Vultr, highlighted the alignment of the new GPU with market requirements. Kardwell commented: "AMD MI355X GPUs are designed to meet the diverse and complex demands of today's AI workloads, delivering exceptional value and flexibility. As AI development continues to accelerate, the scalability, security, and efficiency these GPUs deliver are more essential than ever. We are proud to be among the first cloud providers worldwide to offer AMD MI355X GPUs, empowering our customers with next-generation AI infrastructure."

AMD is recognised as a member of the Vultr Cloud Alliance, which supports a collaborative ecosystem of technology providers focused on offering integrated cloud computing solutions.
The introduction of the MI355X GPU follows a period of upgrades across AMD's GPU lineup, including a greater emphasis on catering to both inferencing and enterprise-scale workloads. Vultr's offering is aimed at organisations seeking advanced compute resources for AI-driven applications and scientific tasks requiring significant computational capacity. Vultr's global network reportedly serves hundreds of thousands of customers across 185 countries, supplying services in cloud compute, GPU, bare metal infrastructure and cloud storage. The addition of AMD's latest GPU to its infrastructure underlines Vultr's commitment to providing a variety of options for businesses and developers pursuing AI and HPC advancements.
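For context on why expanded FP6 and FP4 datatype support matters, the back-of-envelope sketch below estimates how many model parameters could fit in the MI355X's 288 GB of HBM3E at different precisions. The 20% reservation for activations and KV cache is an assumption for illustration; real deployments vary widely.

```python
# Back-of-envelope sizing (illustrative, not vendor guidance): how many model
# weights fit in 288 GB of HBM3E at various datatypes.
HBM_GB = 288  # MI355X memory capacity quoted in the article

BYTES_PER_PARAM = {
    "FP16/BF16": 2.0,
    "FP8":       1.0,
    "FP6":       0.75,
    "FP4":       0.5,
}

def max_params_billions(bytes_per_param: float, usable_fraction: float = 0.8) -> float:
    """Rough upper bound on weight count, reserving ~20% of HBM for activations
    and KV cache (assumed overhead, varies by workload)."""
    usable_bytes = HBM_GB * 1e9 * usable_fraction
    return usable_bytes / bytes_per_param / 1e9

for dtype, bpp in BYTES_PER_PARAM.items():
    print(f"{dtype:10s}: ~{max_params_billions(bpp):.0f}B parameters per GPU")
```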

Oracle unveils AMD-powered zettascale AI cluster for OCI cloud

Techday NZ | 13-06-2025

Oracle has announced it will be one of the first hyperscale cloud providers to offer artificial intelligence (AI) supercomputing powered by AMD's Instinct MI355X GPUs on Oracle Cloud Infrastructure (OCI). The forthcoming zettascale AI cluster is designed to scale up to 131,072 MI355X GPUs, specifically architected to support high-performance, production-grade AI training, inference, and new agentic workloads. The cluster is expected to offer over double the price-performance compared to the previous generation of hardware.

Expanded AI capabilities

The announcement highlights several key hardware and performance enhancements. The MI355X-powered cluster provides 2.8 times higher throughput for AI workloads. Each GPU features 288 GB of high-bandwidth memory (HBM3E) and eight terabytes per second (TB/s) of memory bandwidth, allowing larger models to be executed entirely in memory and boosting both inference and training speeds. The GPUs also support the FP4 compute standard, a four-bit floating point format that enables more efficient and high-speed inference for large language and generative AI models.

The cluster's infrastructure includes dense, liquid-cooled racks, each housing 64 GPUs and consuming up to 125 kilowatts per rack to maximise performance density for demanding AI workloads. This marks the first deployment of AMD's Pollara AI NICs to enhance RDMA networking, offering next-generation high-performance and low-latency connectivity.

Mahesh Thiagarajan, Executive Vice President, Oracle Cloud Infrastructure, said: "To support customers that are running the most demanding AI workloads in the cloud, we are dedicated to providing the broadest AI infrastructure offerings. AMD Instinct GPUs, paired with OCI's performance, advanced networking, flexibility, security, and scale, will help our customers meet their inference and training needs for AI workloads and new agentic applications."

The zettascale OCI Supercluster with AMD Instinct MI355X GPUs delivers a high-throughput, ultra-low latency RDMA cluster network architecture for up to 131,072 MI355X GPUs. AMD claims the MI355X provides almost three times the compute power and a 50 percent increase in high-bandwidth memory over its predecessor.

Performance and flexibility

Forrest Norrod, Executive Vice President and General Manager, Data Center Solutions Business Group, AMD, commented on the partnership, stating: "AMD and Oracle have a shared history of providing customers with open solutions to accommodate high performance, efficiency, and greater system design flexibility. The latest generation of AMD Instinct GPUs and Pollara NICs on OCI will help support new use cases in inference, fine-tuning, and training, offering more choice to customers as AI adoption grows."

The Oracle platform aims to support customers running the largest language models and diverse AI workloads. OCI users leveraging the MI355X-powered shapes can expect performance increases of up to 2.8 times greater throughput, resulting in faster results, lower latency, and the capability to run larger models. AMD's Instinct MI355X provides customers with substantial memory and bandwidth enhancements, which are designed to enable both fast training and efficient inference for demanding AI applications. The new support for the FP4 format allows for cost-effective deployment of modern AI models, enhancing speed and reducing hardware requirements.
The dense, liquid-cooled infrastructure supports 64 GPUs per rack, each operating at up to 1,400 watts, and is engineered to optimise training times and throughput while reducing latency. A powerful head node, equipped with an AMD Turin high-frequency CPU and up to 3 TB of system memory, is included to help users maximise GPU performance via efficient job orchestration and data processing.

Open-source and network advances

AMD emphasises broad compatibility and customer flexibility through the inclusion of its open-source ROCm stack. This allows customers to use flexible architectures and reuse existing code without vendor lock-in, with ROCm encompassing popular programming models, tools, compilers, libraries, and runtimes for AI and high-performance computing development on AMD hardware.

Network infrastructure for the new supercluster will feature AMD's Pollara AI NICs, which provide advanced RDMA over Converged Ethernet (RoCE) features, programmable congestion control, and support for open standards from the Ultra Ethernet Consortium to facilitate low-latency, high-performance connectivity among large numbers of GPUs.

The new Oracle-AMD collaboration is expected to provide organisations with enhanced capacity to run complex AI models, speed up inference times, and scale up production-grade AI workloads economically and efficiently.
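As a quick sanity check on the rack figures quoted above, the snippet below works through the arithmetic: 64 GPUs at 1,400 watts account for roughly 89.6 kW of the quoted 125 kW rack envelope, with the remainder assumed to cover the head node, CPUs, NICs, and other overhead.

```python
# Illustrative arithmetic only, based on figures quoted in the article.
GPUS_PER_RACK = 64
GPU_WATTS = 1_400        # per-GPU power quoted above
RACK_BUDGET_KW = 125     # total per-rack envelope quoted above

gpu_kw = GPUS_PER_RACK * GPU_WATTS / 1_000
print(f"GPU power per rack:        {gpu_kw:.1f} kW")                   # 89.6 kW
print(f"Remaining for other parts: {RACK_BUDGET_KW - gpu_kw:.1f} kW")  # ~35.4 kW
```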

AMD supercomputers lead Top500 rankings with record exaflops

Techday NZ | 11-06-2025

El Capitan and Frontier, both powered by AMD processors and accelerators, have retained the top two positions on the latest Top500 list of the world's most powerful supercomputers.

Supercomputing leadership

The recently released Top500 rankings show that El Capitan, based at Lawrence Livermore National Laboratory, remains the fastest system globally, registering a High Performance Linpack (HPL) score of 1.742 exaflops. Frontier, situated at Oak Ridge National Laboratory, holds the second position with an HPL result of 1.353 exaflops. Both supercomputers were constructed by HPE and utilise AMD hardware at their core.

El Capitan uses AMD Instinct MI300A accelerated processing units (APUs), integrating CPU and GPU functionality within a single package, aimed at supporting large-scale artificial intelligence and scientific workloads. Frontier leverages AMD EPYC CPUs alongside AMD Instinct MI250X GPUs for a variety of advanced computational research needs, including modelling in energy, climate, and next-generation artificial intelligence.

Broader AMD presence

AMD technologies now underpin 172 of the 500 systems on the latest Top500 list, more than a third of all the high-performance systems measured. Notably, 17 new systems joined the list this year running on AMD processors, five of which use the latest 5th Gen AMD EPYC architecture. The expanded presence spans institutions such as the University of Stuttgart's High-Performance Computing Center, where the Hunter system is powered by AMD Instinct MI300A APUs; the University of Hull's Viper supercomputer; and Italy's new EUROfusion Pitagora system at CINECA, powered by 5th Gen AMD EPYC CPUs.

Performance and efficiency

In addition to sheer computational power, AMD's showing on the Top500 list extends to energy efficiency. According to the most recent Green500 list, 12 of the 20 most energy-efficient supercomputers globally use AMD EPYC processors and AMD Instinct accelerators. El Capitan and Frontier ranked 26th and 32nd respectively on the Green500 index, reflecting strong performance-per-watt given their computing output.

This was echoed in other benchmarks. On the HPL-MxP test, which measures mixed-precision computing suited to artificial intelligence workloads, El Capitan debuted at the top with 16.7 exaflops, with Frontier in third place and LUMI, another AMD system, in fourth. On the HPCG (High-Performance Conjugate Gradient) test, a complementary performance metric for scientific applications, El Capitan posted the highest score of 17.4 petaflops, a result attributed to the memory bandwidth enabled by the Instinct MI300A architecture.

Institutional perspectives

"From El Capitan to Frontier, AMD continues to power the world's most advanced supercomputers, delivering record-breaking performance and leadership energy efficiency," said Forrest Norrod, Executive Vice President and General Manager, Data Center Solutions Group, AMD. "With the latest Top500 list, AMD not only holds the top two spots but now powers 172 of the world's fastest systems—more than ever before—underscoring our accelerating momentum and the trust HPC leaders place in our CPUs and GPUs to drive scientific discovery and AI innovation."
Rob Neely, Associate Director for Weapon Simulation and Computing at Lawrence Livermore National Laboratory, described the impact of El Capitan: "El Capitan is a transformative national resource that will dramatically expand the computational capabilities of the NNSA labs at Livermore, Los Alamos and Sandia in support of our national security and science missions. With AMD's advanced APU architecture, we can now perform simulations with the precision and confidence we set as a goal 15 years ago, when the path to exascale was difficult to foresee. As a bonus, this platform is a true 'two-fer' - an HPC and AI powerhouse that will fundamentally reshape how we fulfill our mission."

Future direction

The distinction on the Top500 and Green500 lists coincides with a broader shift within high performance computing, as artificial intelligence and traditional HPC workloads increasingly converge. AMD's presence in the sector demonstrates demand for scalable and efficient compute platforms amid growing power requirements for data-intensive scientific and industrial workloads. The results also reflect the use of a portfolio spanning CPUs, GPUs, and APUs to accelerate work in domains ranging from nuclear safety and climate modelling to training large language models and generative artificial intelligence inference.
