Latest news with #NVIDIA-Certified
Yahoo
25-06-2025
- Business
- Yahoo
Dataiku Joins HPE Unleash AI Ecosystem to Accelerate Enterprise AI
The Universal AI Platform™ integrates with HPE's AI-optimized infrastructure to deliver fully governed, production-ready agentic AI systems

NEW YORK & LAS VEGAS, June 25, 2025 (GLOBE NEWSWIRE) -- Dataiku, the Universal AI Platform™, today announced it has joined the HPE Unleash AI partner program, bringing together enterprise-ready AI orchestration and trusted infrastructure to accelerate the deployment and adoption of generative and agentic AI. With this collaboration, organizations gain a clear path to move beyond experimentation and deliver production-ready AI with the speed, confidence, and governance required to meet corporate goals and standards, without compromise.

Dataiku's participation in the Unleash AI partner program is a joint commitment to enable enterprises to deploy generative, agentic, and physical AI at scale. The Universal AI Platform from Dataiku gives organizations the tools to create, connect, and control AI agents across varied business types, tech stacks, and use cases. Paired with HPE's AI-optimized infrastructure and NVIDIA's industry-specific AI Blueprints, Dataiku provides a clear path to deploying AI that drives measurable results across sectors.

'The challenge for enterprises isn't just building AI agents and GenAI apps—it's controlling how they behave, evolve, interact, and create value in the real world. Dataiku is uniquely positioned to deliver this combination of AI creation, connection, and control at enterprise scale,' said David Tharp, SVP of Partnerships at Dataiku. 'Through our work with HPE, we're enabling organizations to confidently operationalize GenAI and agentic systems with the guardrails, governance, and flexibility needed to align with enterprise standards from day one.'
A Unified Foundation for Enterprise AI Innovation

The alliance brings together everything enterprises need to run AI, from development and orchestration to deployment and monitoring, all in one integrated stack accelerated by the NVIDIA Enterprise AI Factory validated design. The core foundation is HPE Private Cloud AI, strengthened by NVIDIA-Certified Systems from HPE such as the NVIDIA RTX PRO Server and NVIDIA HGX B200.

As a featured agentic AI platform partner in the Unleash AI ecosystem, Dataiku enables customers to:
- Rapidly build and deploy generative and agentic applications.
- Leverage the pre-built NVIDIA AI-Q Blueprint and NVIDIA NIM microservices for copilots, digital humans, and knowledge agents.
- Orchestrate end-to-end AI workflows in a governed, collaborative environment.
- Deploy with enterprise-grade performance, security, and scalability.

AI Designed for Real Business Outcomes

Today, the Universal AI Platform from Dataiku is trusted by 1 in 4 of the world's top companies, based on the top 500 of the 2024 Forbes Global 2000 list (excluding China). Customers across multiple industries, including financial services, life sciences and healthcare, retail, energy, marketing, and manufacturing, are achieving measurable AI outcomes with Dataiku, from optimizing processes for productivity gains to augmenting employee decision-making to transforming their business models with entirely new revenue streams.

Through the HPE Unleash AI program, Dataiku customers also gain access to enablement resources such as technical workshops, joint support, and co-innovation opportunities that accelerate adoption while reducing complexity and risk. Together, Dataiku and HPE enable faster deployment, better collaboration between technical and business teams, built-in governance and transparency, and more efficient use of compute resources, all while keeping enterprise AI costs predictable and agentic systems under tight control.
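The NIM microservices mentioned above generally expose an OpenAI-compatible HTTP API, so a knowledge agent can talk to them with a plain chat-completions request. The sketch below is illustrative only: the endpoint URL and model name are hypothetical placeholders, not details from this announcement.

```python
import json
from urllib import request

# Hypothetical local NIM endpoint; a real deployment's host, port, and
# model identifier will differ.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_payload(model: str, question: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "max_tokens": max_tokens,
    }

def ask_knowledge_agent(question: str) -> str:
    """POST the payload to the (assumed) NIM service and return the reply text."""
    payload = build_chat_payload("meta/llama-3.1-8b-instruct", question)
    req = request.Request(
        NIM_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

In an orchestration layer such as the one described here, a governed workflow would wrap calls like `ask_knowledge_agent` with logging, access control, and output checks rather than calling the endpoint directly.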
For more information on Dataiku's participation in the HPE Unleash AI partner program, visit:

For more information on the Dataiku partner ecosystem, visit:

About Dataiku

Dataiku is The Universal AI Platform™, giving organizations control over their AI talent, processes, and technologies to unleash the creation of analytics, models, and agents. Agnostic by design, it integrates with all clouds, data platforms, AI services, and legacy systems to ensure full technology optionality, empowering customers to future-proof their AI initiatives. With built-in governance and no-, low-, and full-code capabilities, Dataiku enables the world's largest companies to confidently build and manage differentiated AI that drives measurable business value. Dataiku has over 1,100 employees across 13 offices worldwide, serves over 700 enterprise customers, and is backed by investors including Wellington Management, Battery, CapitalG, ICONIQ, and FirstMark. For more, visit the Dataiku blog, LinkedIn, X, and YouTube.

CONTACT:
Kevin McLaughlin
Dataiku
press@


Straits Times
22-05-2025
- Business
- Straits Times
ASUS Announces Advanced AI POD Design Built with NVIDIA at Computex 2025
Enterprise-optimized reference architectures for accelerated AI infrastructure solutions

TAIPEI, May 20, 2025 /PRNewswire/ -- ASUS today announced at Computex 2025 that it is pioneering the next wave of intelligent infrastructure with the launch of the NVIDIA® Enterprise AI Factory validated design, featuring advanced ASUS AI POD designs with optimized reference architectures. These solutions are available as NVIDIA-Certified Systems across NVIDIA Grace Blackwell, HGX, and MGX platforms, supporting both air-cooled and liquid-cooled data centers. Engineered to accelerate agentic AI adoption at every scale, these innovations deliver strong scalability, performance, and thermal efficiency for enterprises seeking to deploy AI at speed and scale.

NVIDIA Enterprise AI Factory with ASUS AI POD

The validated NVIDIA Enterprise AI Factory with ASUS AI POD design provides guidance for developing, deploying, and managing agentic AI, physical AI, and HPC workloads on the NVIDIA Blackwell platform on-premises. Designed for enterprise IT, it provides accelerated computing, networking, storage, and software to help deliver faster time-to-value AI factory deployments while mitigating deployment risks. The reference architecture designs below help clients follow approved practices, acting as a knowledge repository and a standardized framework for diverse applications.

For massive-scale computing, the advanced ASUS AI POD, accelerated by NVIDIA GB200/GB300 NVL72 racks and incorporating NVIDIA Quantum InfiniBand or NVIDIA Spectrum-X Ethernet networking platforms, features liquid cooling to enable a non-blocking 576-GPU cluster across eight racks, or an air-cooled solution to support one rack with 72 GPUs. This ultra-dense, ultra-efficient architecture redefines AI reasoning performance and efficiency.
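The rack arithmetic above is easy to verify: an NVL72 rack houses 72 GPUs, so eight racks give the quoted 576-GPU cluster and a single air-cooled rack gives 72. A minimal sketch of that sizing:

```python
# One NVL72 rack = 72 GPUs, as stated in the announcement.
GPUS_PER_NVL72_RACK = 72

def cluster_gpus(racks: int) -> int:
    """Total GPU count for a given number of NVL72 racks."""
    return racks * GPUS_PER_NVL72_RACK

print(cluster_gpus(1))  # air-cooled single rack: 72
print(cluster_gpus(8))  # liquid-cooled eight-rack POD: 576
```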
AI-ready racks: Scalable power for LLMs and immersive workloads

ASUS presents NVIDIA MGX-compliant rack designs with the ESC8000 series, featuring dual Intel® Xeon® 6 processors and the RTX PRO™ 6000 Blackwell Server Edition with the latest NVIDIA ConnectX-8 SuperNIC supporting speeds of up to 800Gb/s, along with other scalable configurations, delivering exceptional expandability and performance for state-of-the-art AI workloads. Integration with the NVIDIA AI Enterprise software platform provides highly scalable, full-stack server solutions that meet the demanding requirements of modern computing.

In addition, the NVIDIA HGX reference architecture optimized by ASUS delivers strong efficiency, thermal management, and GPU density for accelerated AI fine-tuning, LLM inference, and training. Built on the ASUS XA NB3I-E12 with NVIDIA HGX B300 or the ESC NB8-E11 with NVIDIA HGX B200, this centralized rack solution offers substantial manufacturing capacity for liquid-cooled or air-cooled rack systems, ensuring timely delivery, reduced total cost of ownership (TCO), and consistent performance.

Engineered for the AI Factory, enabling next-gen agentic AI

Integrated with NVIDIA's agentic AI showcase, ASUS infrastructure supports autonomous decision-making AI, real-time learning, and scalable AI agents for business applications across industries. As a global leader in AI infrastructure solutions, ASUS provides complete data center solutions with both air- and liquid-cooled options, delivering strong performance, efficiency, and reliability. ASUS also delivers ultra-high-speed networking, cabling, and storage rack architecture designs with NVIDIA-certified storage, including the RS501A-E12-RS12U and the VS320D series, to ensure seamless scalability for AI/HPC applications.
Additionally, advanced SLURM-based workload scheduling and NVIDIA UFM fabric management for NVIDIA Quantum InfiniBand networks optimize resource utilization, while the WEKA parallel file system and ASUS ProGuard SAN storage provide high-speed, scalable data handling. ASUS also provides a comprehensive software platform and services, including ASUS Control Center (Data Center Edition) and the ASUS Infrastructure Deployment Center (AIDC), ensuring seamless development, orchestration, and deployment of AI models. ASUS L11/L12-validated solutions empower enterprises to deploy AI at scale with confidence through world-class deployment and support. From design to deployment, ASUS is the trusted partner for next-generation AI Factory innovation.

Availability & Pricing

ASUS servers are available worldwide. Please visit for more ASUS infrastructure solutions, or contact your local ASUS representative for further information.
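SLURM-based scheduling of the kind referenced above is typically driven by batch scripts submitted with `sbatch`. As an illustrative sketch only (the partition name and script contents are hypothetical placeholders, not taken from ASUS documentation), a multi-node GPU job script could be composed like this:

```python
def sbatch_script(job: str, nodes: int, gpus_per_node: int, command: str) -> str:
    """Compose a minimal SLURM batch script for a multi-node GPU job.

    The partition name 'ai-pod' is a hypothetical placeholder; --nodes and
    --gres=gpu:N are standard SLURM directives for node count and per-node GPUs.
    """
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job}",
        f"#SBATCH --nodes={nodes}",
        f"#SBATCH --gres=gpu:{gpus_per_node}",  # GPUs requested per node
        "#SBATCH --partition=ai-pod",
        f"srun {command}",
    ]) + "\n"

script = sbatch_script("llm-train", nodes=8, gpus_per_node=8, command="python train.py")
print(script)
```

In practice the generated script would be written to a file and submitted with `sbatch script.sh`, with the cluster's scheduler and UFM-managed fabric handling placement and interconnect health.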