
Latest news with #NVIDIAAI

HPE Unveils New AI Factory Solutions for Enterprises

TECHx

25-06-2025

  • Business
  • TECHx

HPE Unveils New AI Factory Solutions for Enterprises

Hewlett Packard Enterprise (HPE) has announced new solutions aimed at accelerating the creation, adoption, and management of AI factories across all organizational types and the entire AI lifecycle. HPE revealed the expansion of its NVIDIA AI Computing by HPE portfolio, now including NVIDIA Blackwell GPUs. The update introduces new composable solutions designed for service providers, model builders, and sovereign entities. It also includes the next-generation HPE Private Cloud AI, a turnkey AI factory for enterprises. The new end-to-end solutions eliminate the need for customers to compile their own AI tech stack when building AI-ready data centers.

According to HPE President and CEO Antonio Neri, achieving AI's potential requires strong infrastructure and the right IT foundation. He stated that HPE and NVIDIA offer a comprehensive approach to support organizations in realizing sustainable business value through AI. Jensen Huang, founder and CEO of NVIDIA, said that HPE and NVIDIA are jointly delivering full-stack AI factory infrastructure to help businesses innovate at scale with speed and precision.

HPE's Private Cloud AI offers a fully integrated solution featuring NVIDIA accelerated computing, networking, and software. It supports:

  • NVIDIA Blackwell GPUs and HPE ProLiant Compute Gen12 servers
  • Investment protection and seamless GPU scalability
  • Air-gapped management for data privacy and multi-tenancy for collaboration

It also includes NVIDIA AI Blueprints and a "try and buy" program at Equinix data centers.

HPE introduced new validated AI factory solutions leveraging five decades of liquid cooling expertise and HPE Morpheus Enterprise Software. These modular stacks offer a unified control plane and faster deployment. HPE OpsRamp now provides full-stack observability and is validated for the NVIDIA Enterprise AI Factory. Additional AI factory models include:

  • A large-scale design for service providers and model builders using HPE ProLiant XD, NVIDIA AI Enterprise, and advanced cooling
  • A solution for sovereign entities with enhanced privacy and sovereignty features

The HPE Compute XD690, which supports eight NVIDIA Blackwell Ultra GPUs, has also been added to the portfolio. It includes HPE Performance Cluster Manager for managing complex AI environments. To support data-hungry AI workloads, HPE Alletra Storage MP X10000 now supports Model Context Protocol (MCP) servers. The system accelerates AI data pipelines and supports the NVIDIA AI Data Platform reference design.

HPE's Unleash AI ecosystem has expanded to more than 75 use cases and added 26 new partners. These use cases span agentic AI, smart cities, data governance, and cybersecurity. Additionally, HPE and Accenture are co-developing agentic AI solutions for the financial sector. The collaboration uses Accenture's AI Refinery on HPE Private Cloud AI to explore applications in sourcing, spend management, and contract analysis.

To support customer adoption, HPE has introduced new services to design, finance, deploy, and manage AI factories. These offerings aim to simplify AI journeys from planning to long-term operation. HPE Financial Services is also offering flexible financing, including lower initial payments for Private Cloud AI and options to fund new AI projects using existing tech assets.
HPE continues to position itself as a leader in enterprise AI by delivering complete, integrated solutions that support innovation and scale.
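For readers unfamiliar with the Model Context Protocol (MCP) mentioned above: an MCP server exposes tools and data sources that AI agents can call over a standardized protocol. The sketch below is a generic, minimal MCP server built with the open-source Python SDK; it is illustrative only, is not HPE's implementation, and the server name, tool, and returned fields are invented placeholders.

```python
# Generic illustration of a Model Context Protocol (MCP) server, unrelated to
# HPE's actual implementation: it registers one tool that an AI agent could
# call to look up (placeholder) storage metadata.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("storage-metadata-demo")  # hypothetical server name

@mcp.tool()
def get_volume_stats(volume_id: str) -> dict:
    """Return capacity statistics for a storage volume (stubbed example data)."""
    # In a real server this would query the storage system's API.
    return {"volume_id": volume_id, "capacity_gb": 1024, "used_gb": 412}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

An agent framework that speaks MCP could then discover and invoke get_volume_stats without custom integration code, which is the kind of interoperability such support is intended to provide.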

Spectro Cloud Integrates Palette with NVIDIA DOCA and NVIDIA AI Enterprise, Empowering Seamless AI Deployment Across Telco, Enterprise, and Edge

Business Wire

10-06-2025

  • Business
  • Business Wire

Spectro Cloud Integrates Palette with NVIDIA DOCA and NVIDIA AI Enterprise, Empowering Seamless AI Deployment Across Telco, Enterprise, and Edge

SAN JOSE, Calif.--(BUSINESS WIRE)--Spectro Cloud, a leading provider of Kubernetes management solutions, today announced the integration of the NVIDIA DOCA Platform Framework (DPF), part of NVIDIA's latest DOCA 3.0 release, and of NVIDIA AI Enterprise software into its Palette platform. Building on its proven track record as a trusted partner for major organizations deploying Kubernetes in the cloud, at the data center, and at the edge, Spectro Cloud continues to expand its leadership in enabling production-ready infrastructure for AI and modern applications. This integration empowers organizations to efficiently deploy and manage NVIDIA BlueField-3 DPUs alongside AI workloads across diverse environments, including telco, enterprise, and edge. Spectro Cloud is excited to meet, discuss, and demonstrate this integration at GTC Paris, June 11-12.

With the integration of DPF, Palette users gain access to a suite of advanced features designed to optimize data center operations:

  • Comprehensive provisioning and lifecycle management: Palette streamlines the deployment and management of NVIDIA BlueField-accelerated infrastructure, ensuring seamless operations across various environments.
  • Enhanced security service deployment: With the integration of NVIDIA DOCA Argus, customers can elevate cybersecurity capabilities, providing real-time threat detection for AI workloads. DOCA Argus operates autonomously on NVIDIA BlueField, enabling runtime threat detection, agentless operation, and seamless integration into existing enterprise security platforms.
  • Support for advanced DOCA networking features: Palette now supports deployment of DOCA FLOW features, including ACL pipe, LPM pipe, CT pipe, ordered list pipe, external send queue (SQ), and pipe resize, enabling more granular control over data traffic and improved network efficiency.

NVIDIA AI Enterprise-ready deployments with Palette

Palette now supports NVIDIA AI Enterprise-ready deployments, streamlining how organizations operationalize AI across their infrastructure stack. With deep integration of NVIDIA AI Enterprise software components, Palette provides a turnkey experience to provision, manage, and scale AI workloads, including:

  • NVIDIA GPU Operator: Automates the provisioning, health monitoring, and lifecycle management of GPU resources in Kubernetes environments, reducing the operational burden of running GPU-intensive AI/ML workloads.
  • NVIDIA Network Operator: Delivers accelerated network performance using DOCA infrastructure, enabling the low-latency, high-throughput communication critical for distributed AI inference and training workloads.
  • NVIDIA NIM microservices: Palette simplifies the deployment of NVIDIA NIM microservices, a new class of optimized, containerized inference APIs that allow organizations to instantly serve popular foundation models, including LLMs, vision models, and ASR pipelines. With Palette, users can launch NIM endpoints on GPU-accelerated infrastructure with policy-based governance, lifecycle management, and integration into CI/CD pipelines, enabling rapid experimentation and production scaling of AI applications (see the client sketch after this list).
  • NVIDIA NeMo: With Palette's industry-leading declarative management, platform teams can define reusable cluster configurations that include everything from NVIDIA NeMo microservices to build, customize, evaluate, and guardrail LLMs; to GPU drivers and NVIDIA CUDA libraries; to the NVIDIA Dynamo inference framework; plus PyTorch/TensorFlow and Helm chart deployments.
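As an illustration of what a deployed NIM endpoint looks like to an application team, the sketch below queries a NIM microservice through the OpenAI-compatible API that NIM containers expose. The service URL, API key, and model name are hypothetical placeholders and are not part of this announcement; a Palette-managed deployment would supply its own endpoint details.

```python
# Minimal sketch: querying a deployed NIM endpoint via its OpenAI-compatible API.
# The URL, API key, and model name below are placeholders, not values from the
# announcement.
from openai import OpenAI

client = OpenAI(
    base_url="http://nim-llm.ai-workloads.svc.cluster.local:8000/v1",  # hypothetical in-cluster service address
    api_key="not-used-for-local-nim",  # placeholder; a local NIM deployment may not require a key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example model name; substitute whatever NIM image is deployed
    messages=[{"role": "user", "content": "Summarize why GPU operators matter in Kubernetes."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```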
This declarative approach enables a scalable, repeatable, and operationally efficient foundation for AI workloads. By integrating these components, Palette empowers teams to rapidly build, test, and deploy AI services while maintaining enterprise-grade control and visibility. This eliminates the traditional friction of managing disparate software stacks, GPU configurations, and AI model serving infrastructure.

"Integrating NVIDIA DPF into our Palette platform marks a significant step forward in delivering scalable and efficient AI infrastructure solutions," said Saad Malik, CTO and co-founder, Spectro Cloud. "Our customers can now harness the full potential of NVIDIA BlueField's latest advancements to drive accelerated networking, infrastructure optimization, AI security, and innovation across telco, enterprise, and edge environments."

"Organizations are rapidly building AI factories and need intelligent, easy-to-use infrastructure solutions to power their transformation," said Dror Goldenberg, senior vice president of Networking Software at NVIDIA. "Building on the DOCA Platform Framework, the Palette platform enables enterprises and telcos to deploy and operate BlueField-accelerated AI infrastructure with greater speed and efficiency."

This strategic integration positions Palette as a comprehensive platform for organizations aiming to operationalize AI at scale, including:

  • Telco solutions: High-performance, low-latency infrastructure tailored for telecommunications applications.
  • Enterprise deployments: Scalable and secure AI infrastructure to support diverse enterprise workloads.
  • Edge computing: Lightweight, GPU-accelerated solutions designed for resource-constrained edge environments.

Palette is available today for deployment and proof of concept (POC) projects. For more information about Spectro Cloud's Palette platform and its work with NVIDIA, including technical blogs, visit the Spectro Cloud website.

About Spectro Cloud

Spectro Cloud delivers simplicity and control to organizations running Kubernetes at any scale. With its Palette platform, Spectro Cloud empowers businesses to deploy, manage, and scale Kubernetes clusters effortlessly, from edge to data center to cloud, while maintaining the freedom to build their perfect stack. Trusted by leading organizations worldwide, Spectro Cloud transforms Kubernetes complexity into elegant, scalable solutions, enabling customers to master their cloud-native journey with confidence. Spectro Cloud is a Gartner Cool Vendor, a CRN Tech Innovator, and a "leader" and "outperformer" in GigaOm's 2025 Radars for Kubernetes for Edge Computing and Managed Kubernetes. Co-founded in 2019 by CEO Tenry Fu, Vice President of Engineering Gautam Joshi, and Chief Technology Officer Saad Malik, Spectro Cloud is backed by Alter Venture Partners, Boldstart Ventures, Firebolt Ventures, Growth Equity at Goldman Sachs Alternatives, NEC and Translink Orchestrating Future Fund, Qualcomm Ventures, Sierra Ventures, Stripes, T-Mobile Ventures, TSG, and WestWave Capital. For more information, visit the Spectro Cloud website or follow @spectrocloudinc and @spectrocloudgov on X.

IBM pledges $150 billion to boost U.S. tech growth, computer manufacturing

NBC News

28-04-2025

  • Business
  • NBC News

IBM pledges $150 billion to boost U.S. tech growth, computer manufacturing

International Business Machines Corporation (IBM) on Monday announced it will invest $150 billion in the U.S. over the next five years, including more than $30 billion to advance American manufacturing of its mainframe and quantum computers.

"We have been focused on American jobs and manufacturing since our founding 114 years ago, and with this investment and manufacturing commitment we are ensuring that IBM remains the epicenter of the world's most advanced computing and AI capabilities," IBM CEO Arvind Krishna said in a release.

The company's announcement comes weeks after President Donald Trump unveiled a far-reaching and aggressive "reciprocal" tariff policy to boost manufacturing in the U.S. As of late April, Trump has exempted chips, as well as smartphones, computers, and other tech devices and components, from the tariffs.

IBM said its investment will help accelerate America's role as a global leader in computing and fuel the economy. The company said it operates the "world's largest fleet of quantum computer systems" and will continue to build and assemble them in the U.S., according to the release.

IBM competitor Nvidia, the chipmaker that has been the primary beneficiary of the artificial intelligence boom, announced a similar push earlier this month to produce its NVIDIA AI supercomputers entirely in the U.S. Nvidia plans to produce up to $500 billion of AI infrastructure in the U.S. via its manufacturing partnerships over the next four years.

Last week, IBM reported better-than-expected first-quarter results. The company generated $14.54 billion in revenue for the period, above the $14.4 billion expected by analysts. IBM's net income narrowed to $1.06 billion, or $1.12 per share, from $1.61 billion, or $1.72 per share, in the same quarter a year ago. IBM's infrastructure division, which includes mainframe computers, posted $2.89 billion in revenue for the quarter, beating expectations of $2.76 billion. The company announced a new z17 AI mainframe earlier this month.

Cognizant to Deploy Neuro AI Platform to Accelerate Enterprise AI Adoption in Collaboration with NVIDIA

Globe and Mail

25-03-2025

  • Business
  • Globe and Mail

Cognizant to Deploy Neuro AI Platform to Accelerate Enterprise AI Adoption in Collaboration with NVIDIA

Cognizant will offer solutions across key growth areas, including enterprise AI agents, tailored industry large language models, and infrastructure with NVIDIA AI.

TEANECK, N.J., March 25, 2025 /CNW/ -- Cognizant (NASDAQ: CTSH) announced advancements built on NVIDIA AI aimed at accelerating the cross-industry adoption of AI technology in five key areas: enterprise AI agents, industry-specific large language models (LLMs), digital twins for smart manufacturing, foundational infrastructure for AI, and the capabilities of Cognizant's Neuro® AI platform to integrate NVIDIA AI technology and orchestrate across the enterprise technology stack.

Cognizant is working with global clients to help them scale AI value efficiently, leveraging extensive industry experience and a comprehensive AI ecosystem comprising infrastructure, data, models, and agent development powered by proprietary platforms and accelerators. NVIDIA AI plays a key role in Cognizant's AI offerings, with active client engagements underway across industries to enable growth and business transformation.

"We continue to see businesses navigating the transition from proofs of concept to larger-scale implementations of enterprise AI," said Annadurai Elango, president, Core Technologies and Insights, Cognizant. "Through our collaboration with NVIDIA, Cognizant will be building and deploying solutions that accelerate this process and scale AI value faster for clients through integration of foundational AI elements, platforms and solutions."

"From models to applications, enterprise AI transformation requires full-stack software and infrastructure with access to domain-specific data," said Jay Puri, executive vice president of Worldwide Field Operations, NVIDIA. "The Cognizant Neuro AI platform is built with NVIDIA AI to deliver specialized LLMs and applications to ready businesses for the era of AI with reasoning agents and digital twins."

At NVIDIA GTC 2025, Cognizant presented its intent to deliver offering updates across the following five areas:

Enterprise AI agentification powered by the Cognizant® Neuro AI Multi-Agent Accelerator: Running on NVIDIA NIM™ microservices, this framework will enable clients to rapidly build and scale multi-agent AI systems for adaptive operations, real-time decision-making, and personalized customer experiences. With these frameworks, clients can create and orchestrate agents using a low-code framework or use pre-built agent networks for various enterprise functions and industry-specific processes such as sales, marketing, and supply chain management. The frameworks also allow clients to easily integrate third-party agent networks and most LLMs.

Building multi-agent systems for scale: Cognizant works to enhance business operations through the use of multi-agent systems and integration with NVIDIA NIM, NVIDIA Blueprints, and NVIDIA Riva speech AI. The company will be developing a future-proof agent architecture that supports modular and adaptable agent design to meet evolving needs and ensure the long-term viability and adaptability of AI solutions. This includes pre-built integrations with security guardrails and human oversight. This approach aims to enable enterprises to develop and deploy market-ready applications tailored to their specific needs using the pre-built agent catalog. Examples include industry agents such as insurance claims underwriting, appeals and grievances, automated supply chain, and contract management multi-agent systems.
Industry LLMs: Cognizant is developing industry-oriented LLMs powered by NVIDIA NeMo and NVIDIA NIM. These solutions are tailored to meet the unique needs of different industries and build on Cognizant's deep industry expertise to drive innovation and improve business outcomes. For example, Cognizant has developed a fine-tuned language model to transform healthcare administrative processes. This system will leverage Cognizant's domain expertise and NVIDIA technology to enhance medical code extraction and support higher accuracy, reduced errors, and better compliance with HIPAA and GDPR standards. It is designed to help clients cut costs, decrease latency, improve revenue cycle management, and help ensure accurate risk adjustment. In internal Cognizant benchmarking, the model reduced effort by 30-75 percent, boosted coding accuracy by 30-40 percent, and accelerated time to market by 40-45 percent.

Industrial digital twins: Cognizant's smart manufacturing and digital twin offerings, accelerated by NVIDIA Omniverse™, aim to drive digital transformation by combining NVIDIA Omniverse's synthetic data generation, accelerated computing, and physical AI simulation technologies to address challenges in manufacturing operations and supply chain management. These capabilities are designed to help clients enhance plant layout and process simulations with real-time insights and predictive analytics, while also supporting improved operational efficiency and optimized plant capital expenditure. This offering enables integration of diverse data from applications, systems, and sensors with synthetic data, allowing clients to simulate various scenarios and find solutions to issues in the plant. Additionally, by building the necessary digital infrastructure, including IT systems and skilled personnel, Cognizant's offerings can be used to create and manage digital twins for large-scale systems, such as factories, smart grids, warehouses, or entire cities, with precision and efficiency.

Infrastructure for AI: Implementing AI effectively requires robust AI infrastructure and data prepared for AI. Cognizant's infrastructure for AI, accelerated by NVIDIA, will provide clients access to NVIDIA AI technology via "GPU as a Service," along with secure and managed infrastructure. This helps ensure that AI models can run in various environments, including the cloud, data centers, or at the edge. Additionally, Cognizant intends to use the NVIDIA RAPIDS™ Accelerator for Apache Spark to help clients accelerate data pipelines for AI implementations, facilitating efficient and scalable operations (see the configuration sketch at the end of this article). In one example implementation for a large healthcare client in the U.S., Cognizant's infrastructure for AI delivered a 2.7x cost efficiency improvement and a 1.8x performance improvement for the client's Spark workloads.

"As we enter the era of AI industrialization, enterprises are seeking to accelerate the value velocity of their AI investments, focusing on outsized economic impact, agentic-led workflow transformation, and industry-specific deployments," said Nitish Mittal, Partner, Everest Group. "Cognizant's deepening partnership with NVIDIA signals the right trajectory for forward-thinking enterprises aiming to unlock breakthrough value in the AI era."

About Cognizant

Cognizant (Nasdaq: CTSH) engineers modern businesses. We help our clients modernize technology, reimagine processes, and transform experiences so they can stay ahead in our fast-changing world. Together, we're improving everyday life. See how at or @cognizant.
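The infrastructure-for-AI item above refers to the NVIDIA RAPIDS Accelerator for Apache Spark. As a rough sketch of how that plugin is typically enabled in a PySpark session: the jar path, plugin version, GPU resource amounts, and data paths below are illustrative assumptions, not details from Cognizant's offering, which would configure this as part of its managed service.

```python
# Minimal sketch: enabling the NVIDIA RAPIDS Accelerator plugin in a PySpark job.
# The jar location, version, and GPU resource settings are illustrative
# placeholders; an actual deployment depends on the cluster and plugin release.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rapids-accelerated-etl")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")        # loads the RAPIDS Accelerator
    .config("spark.rapids.sql.enabled", "true")                   # route supported SQL ops to the GPU
    .config("spark.jars", "/opt/rapids/rapids-4-spark_2.12-25.02.0.jar")  # hypothetical jar path/version
    .config("spark.executor.resource.gpu.amount", "1")            # one GPU per executor (example)
    .config("spark.task.resource.gpu.amount", "0.25")             # allow 4 concurrent tasks per GPU (example)
    .getOrCreate()
)

# A typical columnar transformation that the plugin can offload to the GPU.
df = spark.read.parquet("/data/claims/")   # hypothetical input path
df.groupBy("provider_id").count().write.parquet("/data/claims_by_provider/")
```

The appeal of this approach is that existing Spark SQL and DataFrame code runs unchanged; acceleration is opted into through configuration rather than rewrites.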
