MiTAC Computing Launches the Latest Scale-out AI Server G4527G6 by NVIDIA MGX at COMPUTEX 2025


Korea Herald · 19-05-2025
Featuring the Next-gen NVIDIA ConnectX-8 SuperNIC
TAIPEI, May 19, 2025 /PRNewswire/ -- MiTAC Computing Technology Corporation, a leading server platform designer and manufacturer and a subsidiary of MiTAC Holdings Corporation (TSE:3706), will present its latest innovations in AI infrastructure at COMPUTEX 2025. At booth M1110, MiTAC Computing will display its next-level AI server platform, the MiTAC G4527G6, fully optimized for the NVIDIA MGX architecture, which supports NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and the NVIDIA H200 NVL platform to address the evolving demands of enterprise AI workloads.
Next-Gen AI with High-Performance Computing
With the increasing adoption of generative AI and accelerated computing, MiTAC Computing introduces the latest NVIDIA MGX-based server solution, the MiTAC G4527G6, designed to support complex AI and high-performance computing (HPC) workloads. Built on Intel® Xeon® 6 processors, the G4527G6 accommodates up to eight NVIDIA GPUs, 8TB of DDR5-6400 memory, sixteen hot-swappable E1.s drives, and an NVIDIA BlueField-3 DPU for efficient north-south connectivity. Crucially, it integrates four next-generation NVIDIA ConnectX-8 SuperNICs, delivering up to 800 gigabits per second (Gb/s) of NVIDIA InfiniBand and Ethernet networking—significantly enhancing system performance for AI factories and cloud data center environments.
As a key part of NVIDIA's AI networking portfolio, the NVIDIA ConnectX-8 SuperNIC delivers robust and scalable connectivity with advanced congestion control and In-Network Computing via NVIDIA SHARP, optimizing throughput for training, inference, and trillion-parameter AI workloads in sustainable, GPU-dense environments.
Powering the NVIDIA Enterprise AI Factory with Scalable Infrastructure
As data centers become the modern computers of the world, MiTAC Computing stands alongside NVIDIA in building enterprise AI factories with an on-premises, full-stack platform optimized for next-gen enterprise AI. MiTAC Computing's G4527G6 AI server is a standout example built on the modular NVIDIA MGX architecture, delivering over 100 customizable configurations to accelerate AI factories.
The MiTAC G4527G6 RTX PRO Blackwell server integrates NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs — part of the new NVIDIA Enterprise AI Factory validated design — or NVIDIA H200 NVL GPUs, which deliver up to 1.8X faster LLM inference and 1.3X improved HPC performance over the previous generation. This robust configuration is designed to support a wide range of AI-enabled enterprise applications, agentic and physical AI workflows, autonomous decision-making, and real-time data analysis – laying the foundation for the intelligent enterprises of tomorrow.
Join MiTAC Computing at COMPUTEX 2025 – Booth M1110
Preview our new COMPUTEX 2025 launches: https://www.mitaccomputing.com/en/campaign/computex2025
About MiTAC Computing Technology Corporation
MiTAC Computing Technology Corp., a subsidiary of MiTAC Holdings, delivers comprehensive, energy-efficient server solutions backed by industry expertise dating back to the 1990s. Specializing in AI, HPC, cloud, and edge computing, MiTAC Computing employs rigorous methods to ensure uncompromising quality not just at the barebone level but, more importantly, at the system and rack levels—where true performance and integration matter most. This commitment to quality at every level sets MiTAC Computing apart from others in the industry. The company provides tailored platforms for hyperscale data centers, HPC, and AI applications, guaranteeing optimal performance and scalability.
With a global presence and end-to-end capabilities—from R&D and manufacturing to global support—MiTAC Computing offers flexible, high-quality solutions designed to meet unique business needs. Leveraging the latest advancements in AI and liquid cooling, along with the recent integration of Intel DSG and TYAN server products, MiTAC Computing stands out for its innovation, efficiency, and reliability, empowering businesses to tackle future challenges.

Related Articles

Seoul shares reach near 4-yr high on strong chip gains

Korea Herald · 5 days ago

South Korean stocks rose for the fourth consecutive session Thursday to climb to a near four-year high, driven by overnight gains in US artificial intelligence chip giant Nvidia that lifted semiconductor shares. The local currency gained against the US dollar.

The benchmark Korea Composite Stock Price Index added 49.49 points, or 1.58 percent, to close at 3,183.23, marking the highest closing level since Sept. 7, 2021, when the index finished at 3,187.42. The KOSPI also extended its winning streak to a fourth straight session, which began Monday. Trade volume was moderate at 589.8 million shares worth 14 trillion won (US$10.2 billion), with gainers beating decliners 597 to 287. Foreign and institutional investors led the rally, scooping up a net 445.8 billion won and 41.6 billion won worth of stocks, respectively. Individuals dumped a net 560 billion won.

In the US market, Nvidia became the world's first company to hit $4 trillion in market value on Wednesday, pushing up the tech-heavy Nasdaq to an all-time high.

In Seoul, semiconductor and internet shares were among the biggest winners. SK hynix, a key supplier to Nvidia, jumped 5.69 percent to 297,000 won, and Samsung Electronics, the world's largest memory chip maker, gained 0.99 percent to 61,000 won. Naver, the No. 1 internet platform company, increased 2.17 percent to 259,500 won, and its rival Kakao climbed 0.50 percent to 60,800 won.

Pharmaceutical stocks also finished in positive territory, with industry leader Samsung Biologics surging 6.09 percent to 1,080,000 won, while SK biopharm advanced 5.54 percent to 99,000 won. Samyang Foods, best known for its global hit Buldak spicy ramyeon, added 1.28 percent to a record 1,498,000 won. However, Hybe, the management agency behind global superstars BTS, fell 0.9 percent to 274,500 won as its founder Bang Si-hyuk is set to face criminal charges for allegedly engaging in illegal transactions ahead of the company's initial public offering in 2020.
The local currency was quoted at 1,370.0 won against the greenback at 3:30 p.m., up 5.0 won from the previous session. (Yonhap)

Nvidia becomes first company to reach $4tr in value

Korea Herald · 6 days ago

NEW YORK (AFP) -- Nvidia became the first company to touch $4 trillion in market value on Wednesday, a new milestone in Wall Street's bet that artificial intelligence will transform the economy. Shortly after the stock market opened, Nvidia vaulted as high as $164.42, giving it a valuation above $4 trillion. The stock subsequently edged lower, ending just under the record threshold.

"The market has an incredible certainty that AI is the future," said Steve Sosnick of Interactive Brokers. "Nvidia is certainly the company most positioned to benefit from that gold rush."

Nvidia, led by electrical engineer Jensen Huang, now has a market value greater than the GDP of France, Britain or India, a testament to investor confidence that AI will spur a new era of robotics and automation. The California chip company's latest surge is helping drive a recovery in the broader stock market, as Nvidia itself outperforms major indices. Part of this is due to relief that President Donald Trump has walked back his most draconian tariffs, which pummeled global markets in early April. Even as Trump announced new tariff actions in recent days, US stocks have stayed at lofty levels, with the tech-centered Nasdaq ending at a fresh record on Wednesday.

"You've seen the markets walk us back from a worst-case scenario in terms of tariffs," said Angelo Zino, technology analyst at CFRA Research.

While Nvidia still faces US export controls to China as well as broader tariff uncertainty, the company's deal to build AI infrastructure in Saudi Arabia during a Trump state visit in May showed a potential upside in the US president's trade policy. "We've seen the administration using Nvidia chips as a bargaining chip," Zino said.

Nvidia's surge to $4 trillion marks a new benchmark in a fairly consistent rise over the last two years as AI enthusiasm has built. In 2025 so far, the company's shares have risen more than 21 percent, whereas the Nasdaq has gained 6.7 percent.
Taiwan-born Huang has wowed investors with a series of advances built around the company's core product: graphics processing units, which are key to many of the generative AI programs behind autonomous driving, robotics and other cutting-edge domains. The company has also unveiled its next-generation Blackwell technology, which allows for greater processing capacity. One of its advances is "real-time digital twins," significantly speeding production development time in manufacturing, aerospace and myriad other sectors.

However, Nvidia's winning streak was challenged early in 2025 when China-based DeepSeek shook up the world of generative AI with a low-cost, high-performance model that challenged the hegemony of OpenAI and other big-spending behemoths. Nvidia lost some $600 billion in market valuation in a single session during this period. Huang has welcomed DeepSeek's presence, while arguing against US export constraints.

In the most recent quarter, Nvidia reported earnings of nearly $19 billion despite a $4.5 billion hit from US export controls limiting sales of cutting-edge technology to China. The first-quarter earnings period also revealed that momentum for AI remained strong. Many of the biggest tech companies -- Microsoft, Google, Amazon and Meta -- are jostling to come out on top in the multi-billion-dollar AI race. A recent UBS survey of technology executives showed Nvidia widening its lead over rivals.

Zino said Nvidia's latest surge reflected a fuller understanding of DeepSeek, which has ultimately stimulated investment in complex reasoning models but not threatened Nvidia's business. Nvidia is at the forefront of "AI agents," the current focus in generative AI in which machines are able to reason and infer more than in the past, he said. "Overall the demand landscape has improved for 2026 for these more complex reasoning models," Zino said. But the speedy growth of AI will also be a source of disruption.
Executives at Ford, JPMorgan Chase and Amazon are among those who have begun to say the "quiet part out loud," according to a Wall Street Journal report recounting recent public acknowledgment of white-collar job loss due to AI. Shares of Nvidia closed the day at $162.88, up 1.8 percent, finishing at just under $4 trillion in market value.

WEKA Debuts NeuralMesh Axon For Exascale AI Deployments

Korea Herald · 08-07-2025

New Offering Delivers a Unique Fusion Architecture That's Being Leveraged by Industry-Leading AI Pioneers Like Cohere, CoreWeave, and NVIDIA to Deliver Breakthrough Performance Gains and Reduce Infrastructure Requirements For Massive AI Training and Inference Workloads

PARIS and CAMPBELL, Calif., July 8, 2025 /PRNewswire/ -- From RAISE SUMMIT 2025: WEKA unveiled NeuralMesh Axon, a breakthrough storage system that leverages an innovative fusion architecture designed to address the fundamental challenges of running exascale AI applications and workloads. NeuralMesh Axon seamlessly fuses with GPU servers and AI factories to streamline deployments, reduce costs, and significantly enhance AI workload responsiveness and performance, transforming underutilized GPU resources into a unified, high-performance infrastructure layer.

Building on the company's recently announced NeuralMesh storage system, the new offering enhances its containerized microservices architecture with powerful embedded functionality, enabling AI pioneers, AI cloud and neocloud service providers to accelerate AI model development at extreme scale, particularly when combined with NVIDIA AI Enterprise software stacks for advanced model training and inference optimization. NeuralMesh Axon also supports real-time reasoning, with significantly improved time-to-first-token and overall token throughput, enabling customers to bring innovations to market faster.

AI Infrastructure Obstacles Compound at Exascale

Performance is make-or-break for large language model (LLM) training and inference workloads, especially when running at extreme scale. Organizations that run massive AI workloads on traditional storage architectures, which rely on replication-heavy approaches, waste NVMe capacity, face significant inefficiencies, and struggle with unpredictable performance and resource allocation. The reason? Traditional architectures weren't designed to process and store massive volumes of data in real time.
They create latency and bottlenecks in data pipelines and AI workflows that can cripple exascale AI deployments. Underutilized GPU servers and outdated data architectures turn premium hardware into idle capital, resulting in costly downtime for training workloads. Inference workloads struggle with memory-bound barriers, including key-value (KV) caches and hot data, resulting in reduced throughput and increased infrastructure strain. Limited KV cache offload capacity creates data access bottlenecks and complicates resource allocation for incoming prompts, directly impacting operational expenses and time-to-insight.

Many organizations are transitioning to NVIDIA accelerated compute servers, paired with NVIDIA AI Enterprise software, to address these challenges. However, without modern storage integration, they still encounter significant limitations in pipeline efficiency and overall GPU utilization.

Built For The World's Largest and Most Demanding Accelerated Compute Environments

To address these challenges, NeuralMesh Axon's high-performance, resilient storage fabric fuses directly into accelerated compute servers by leveraging local NVMe, spare CPU cores, and its existing network infrastructure. This unified, software-defined compute and storage layer delivers consistent microsecond latency for both local and remote workloads—outpacing traditional local protocols like NFS. Additionally, when leveraging WEKA's Augmented Memory Grid capability, it can provide near-memory speeds for KV cache loads at massive scale.
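The capacity trade-off between the replication-heavy approaches criticized above and erasure coding can be sketched numerically. The shard counts below are illustrative assumptions, not WEKA's actual layout; only the four-node loss tolerance comes from the article.

```python
def replication_efficiency(copies: int) -> float:
    """Usable fraction of raw capacity under full replication.

    Storing `copies` identical replicas survives `copies - 1` losses
    but leaves only 1/copies of the raw capacity usable.
    """
    return 1.0 / copies


def erasure_efficiency(data_shards: int, parity_shards: int) -> float:
    """Usable fraction of raw capacity under erasure coding.

    A (data + parity) stripe survives the loss of up to
    `parity_shards` shards while keeping data/(data + parity) usable.
    """
    return data_shards / (data_shards + parity_shards)


# 3-way replication: survives 2 losses, but only a third of raw
# capacity holds unique data.
rep = replication_efficiency(3)

# Hypothetical 16+4 erasure code: survives 4 simultaneous node losses
# (the tolerance cited in the article) while keeping 80% usable.
ec = erasure_efficiency(16, 4)

print(f"replication: {rep:.0%} usable, erasure coding: {ec:.0%} usable")
```

Under these assumed parameters the erasure-coded layout stores roughly 2.4x more unique data per raw byte than 3-way replication, while tolerating more simultaneous failures.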
Unlike replication-heavy approaches that squander aggregate capacity and collapse under failures, NeuralMesh Axon's unique erasure coding design tolerates up to four simultaneous node losses, sustains full throughput during rebuilds, and enables predefined resource allocation across the existing NVMe, CPU cores, and networking resources—transforming isolated disks into a memory-like storage pool at exascale and beyond while providing consistent low-latency access to all addressable data.

Cloud service providers and AI innovators operating at exascale require infrastructure solutions that can match the exponential growth in model complexity and dataset sizes. NeuralMesh Axon is specifically designed for organizations operating at the forefront of AI innovation that require immediate, extreme-scale performance rather than gradual scaling over time. This includes AI cloud providers and neoclouds building AI services, regional AI factories, major cloud providers developing AI solutions for enterprise customers, and large enterprise organizations deploying the most demanding AI inference and training solutions that must agilely scale and optimize their AI infrastructure investments to support rapid innovation cycles.

Delivering Game-Changing Performance for Accelerated AI Innovation

Early adopters, including Cohere, the industry's leading security-first enterprise AI company, are already seeing transformational results. Cohere is among WEKA's first customers to deploy NeuralMesh Axon to power its AI model training and inference workloads. Faced with high innovation costs, data transfer bottlenecks, and underutilized GPUs, Cohere first deployed NeuralMesh Axon in the public cloud to unify its AI stack and streamline operations.

"For AI model builders, speed, GPU optimization, and cost-efficiency are mission-critical.
That means using less hardware, generating more tokens, and running more models—without waiting on capacity or migrating data," said Autumn Moulder, vice president of engineering at Cohere. "Embedding WEKA's NeuralMesh Axon into our GPU servers enabled us to maximize utilization and accelerate every step of our AI pipelines. The performance gains have been game-changing: Inference deployments that used to take five minutes can occur in 15 seconds, with 10 times faster checkpointing. Our team can now iterate on and bring revolutionary new AI models, like North, to market with unprecedented speed."

To improve training and help develop North, Cohere's secure AI agents platform, the company is deploying WEKA's NeuralMesh Axon on CoreWeave Cloud, creating a robust foundation to support real-time reasoning and deliver exceptional experiences for Cohere's end customers.

"We're entering an era where AI advancement transcends raw compute alone—it's unleashed by intelligent infrastructure design. CoreWeave is redefining what's possible for AI pioneers by eliminating the complexities that constrain AI at scale," said Peter Salanki, CTO and co-founder at CoreWeave. "With WEKA's NeuralMesh Axon seamlessly integrated into CoreWeave's AI cloud infrastructure, we're bringing processing power directly to data, achieving microsecond latencies that reduce I/O wait time and deliver more than 30 GB/s read, 12 GB/s write, and 1 million IOPS to an individual GPU server. This breakthrough approach increases GPU utilization and empowers Cohere with the performance foundation they need to shatter inference speed barriers and deliver advanced AI solutions to their customers."

"AI factories are defining the future of AI infrastructure built on NVIDIA accelerated compute and our ecosystem of NVIDIA Cloud Partners," said Marc Hamilton, vice president of solutions architecture and engineering at NVIDIA.
"By optimizing inference at scale and embedding ultra-low latency NVMe storage close to the GPUs, organizations can unlock more bandwidth and extend the available on-GPU memory for any capacity. Partner solutions like WEKA's NeuralMesh Axon deployed with CoreWeave provide a critical foundation for accelerated inferencing while enabling next-generation AI services with exceptional performance and cost efficiency."

The Benefits of Fusing Storage and Compute For AI Innovation

NeuralMesh Axon delivers immediate, measurable improvements for AI builders and cloud service providers operating at exascale.

"The infrastructure challenges of exascale AI are unlike anything the industry has faced before. At WEKA, we're seeing organizations struggle with low GPU utilization during training and GPU overload during inference, while AI costs spiral into millions per model and agent," said Ajay Singh, chief product officer at WEKA. "That's why we engineered NeuralMesh Axon, born from our deep focus on optimizing every layer of AI infrastructure from the GPU up. Now, AI-first organizations can achieve the performance and cost efficiency required for competitive AI innovation when running at exascale and beyond."

Availability

NeuralMesh Axon is currently available in limited release for large-scale enterprise AI and neocloud customers, with general availability scheduled for fall 2025. For more information, visit:

About WEKA

WEKA is transforming how organizations build, run, and scale AI workflows through NeuralMesh™, its intelligent, adaptive mesh storage system. Unlike traditional data infrastructure, which becomes more fragile as AI environments expand, NeuralMesh becomes faster, stronger, and more efficient as it scales, growing with your AI environment to provide a flexible foundation for enterprise and agentic AI innovation.
Trusted by 30% of the Fortune 50 and the world's leading neoclouds and AI innovators, NeuralMesh maximizes GPU utilization, accelerates time to first token, and lowers the cost of AI innovation. Learn more at or connect with us on LinkedIn and X.
