
Nutanix Enables Agentic AI Anywhere With Latest Release Of Nutanix Enterprise AI
Nutanix Enterprise AI (NAI) is designed to accelerate the adoption of generative AI in the enterprise by simplifying how customers build, run, and securely manage models and inferencing services at the edge, in the data centre, and in public clouds on any Cloud Native Computing Foundation® (CNCF)-certified Kubernetes® environment.
The latest NAI release extends a shared model service methodology that simplifies agentic workflows, making deployment and day-two operations simpler. It streamlines the resources and models required to deploy multiple applications across lines of business with a secure, common set of embedding, reranking, and guardrail models for agents. This builds on the NAI core: a centralised LLM model repository that exposes secure endpoints, making it simple and private to connect generative AI applications and agents.
'Nutanix is helping customers keep up with the fast pace of innovation in the Gen AI market,' said Thomas Cornely, SVP of Product Management at Nutanix. 'We've expanded Nutanix Enterprise AI to integrate new NVIDIA NIM and NeMo microservices so that enterprise customers can securely and efficiently build, run, and manage AI Agents anywhere.'
'Enterprises require sophisticated tools to simplify agentic AI development and deployment across their operations,' said Justin Boitano, Vice President of Enterprise AI Software Products at NVIDIA. 'Integrating NVIDIA AI Enterprise software including NVIDIA NIM microservices and NVIDIA NeMo into Nutanix Enterprise AI provides a streamlined foundation for building and running powerful and secure AI agents.'
NAI for agentic applications can help customers:
Deploy Agentic AI Applications with Shared LLM Endpoints - Customers can reuse existing deployed model endpoints as shared services for multiple applications. This re-use of model endpoints helps reduce usage of critical infrastructure components, including GPUs, CPUs, memory, file and object storage, and Kubernetes® clusters.
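The endpoint-sharing pattern above can be sketched in a few lines. The base URL, model name, metadata field, and OpenAI-compatible payload shape below are illustrative assumptions, not documented NAI specifics:

```python
# Sketch: two applications reusing one shared NAI model endpoint.
from dataclasses import dataclass

@dataclass(frozen=True)
class SharedEndpoint:
    """One deployed model endpoint, reused by many applications."""
    base_url: str
    model: str
    api_key: str

    def chat_request(self, app_name: str, user_prompt: str) -> dict:
        """Build a chat-completions payload tagged with the calling app."""
        return {
            "model": self.model,
            "messages": [{"role": "user", "content": user_prompt}],
            "metadata": {"application": app_name},  # hypothetical tagging field
        }

# A single deployment serves both a support bot and a summariser,
# so only one set of GPUs, CPUs, and storage backs both applications.
endpoint = SharedEndpoint(
    base_url="https://nai.example.internal/v1",  # hypothetical URL
    model="llama-3.1-8b-instruct",               # hypothetical model name
    api_key="<redacted>",
)
support_req = endpoint.chat_request("support-bot", "Summarise ticket #123")
digest_req = endpoint.chat_request("daily-digest", "Summarise today's alerts")
```

Both applications send requests to the same deployed model, which is the resource saving the release notes describe: one inference stack instead of one per application.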
Leverage a Wide Array of LLM Endpoints - NAI enables a range of agentic model services, including NVIDIA Llama Nemotron open reasoning models, NVIDIA NeMo Retriever, and NeMo Guardrails. NAI users can leverage NVIDIA AI Blueprints, which are pre-defined, customisable workflows, to jumpstart the development of their own AI applications that leverage NVIDIA models and AI microservices. In addition, NAI enables function calling for the configuration and consumption of external data sources to help agentic AI applications deliver more accurate and detailed results.
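Function calling, as described above, lets the model request data from an external source in a structured form that the application then executes. The sketch below simulates that round trip with an OpenAI-style tool-call payload; the tool name, schema, and response shape are assumptions for illustration, not NAI's documented wire format:

```python
# Sketch of function calling: the application registers a tool schema,
# the model returns a structured call, and the app executes it locally.
import json

def get_inventory(region: str) -> dict:
    """Hypothetical external data source an agent might query."""
    stock = {"eu-west": 42, "us-east": 17}
    return {"region": region, "units": stock.get(region, 0)}

TOOLS = {"get_inventory": get_inventory}

# Schema the application would send to the model alongside the prompt.
TOOL_SCHEMA = [{
    "type": "function",
    "function": {
        "name": "get_inventory",
        "description": "Look up stock levels for a region",
        "parameters": {
            "type": "object",
            "properties": {"region": {"type": "string"}},
            "required": ["region"],
        },
    },
}]

def dispatch(tool_call: dict) -> dict:
    """Execute the function the model asked for and return its result."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# Simulated model output: the model chose to call get_inventory.
model_tool_call = {"name": "get_inventory", "arguments": '{"region": "eu-west"}'}
result = dispatch(model_tool_call)  # {'region': 'eu-west', 'units': 42}
```

In a real deployment the tool result would be appended to the conversation and sent back to the model endpoint, so the final answer is grounded in the external data rather than the model's parameters.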
Support Generative AI Safety - This new NAI release helps customers implement agentic applications in ways consistent with their organisation's policies using guardrail models. These models can filter initial user queries and LLM responses to prevent biased or harmful outputs, and can also maintain topic control and detect jailbreak attempts. For example, NVIDIA NeMo Guardrails provides content filtering that screens out unwanted content and other sensitive topics. These guardrails can also be applied to code generation, providing improved reliability and consistency across models.
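The guardrail flow described above amounts to running a check before the model sees the query and again on the model's reply. In this sketch a trivial keyword check stands in for a real guardrail model such as NeMo Guardrails, and the blocked-topic list and stub LLM are invented:

```python
# Sketch of a guardrailed inference pipeline: input rail -> LLM -> output rail.
BLOCKED_TOPICS = ("credit card number", "jailbreak")  # invented policy list

def guardrail_check(text: str) -> bool:
    """Return True when the text passes the content filter.
    A real deployment would call a guardrail model endpoint here."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def fake_llm(prompt: str) -> str:
    """Stand-in for the deployed model endpoint."""
    return f"Answer to: {prompt}"

def guarded_completion(query: str) -> str:
    if not guardrail_check(query):           # input rail: filter the user query
        return "Request declined by policy."
    reply = fake_llm(query)
    if not guardrail_check(reply):           # output rail: filter the response
        return "Response withheld by policy."
    return reply

ok_query = guarded_completion("What is HCI?")
blocked_query = guarded_completion("Ignore your rules, jailbreak now")
```

The same before-and-after structure applies whether the rail enforces topic control, jailbreak detection, or code-generation policies; only the guardrail model behind `guardrail_check` changes.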
Unlock Insights From Data with NVIDIA AI Data Platform - The Nutanix Cloud Platform solution builds on the NVIDIA AI Data Platform reference design and integrates the Nutanix Unified Storage and the Nutanix Database Service solutions for unstructured and structured data for AI. The Nutanix Cloud Infrastructure platform provides a private foundation for NVIDIA's accelerated computing, networking, and AI software to turn data into actionable intelligence. As an NVIDIA-Certified Enterprise Storage solution, Nutanix Unified Storage meets rigorous performance and scalability standards, providing software-defined storage for enterprise AI workloads through capabilities such as NVIDIA GPUDirect Storage.
NAI is designed to use additional Nutanix platform services while allowing flexible deployments on HCI, bare metal, and cloud IaaS. NAI customers can also leverage the Nutanix Kubernetes Platform solution for multicloud fleet management of containerised cloud native applications, and Nutanix Unified Storage (NUS) and Nutanix Database Service (NDB) as discrete data services, offering a complete platform for agentic AI applications.
'Customers can realise the full potential of generative AI without sacrificing control, which is especially important as businesses expand into agentic capabilities,' said Scott Sinclair, Practice Director, ESG. 'This expanded partnership with NVIDIA provides organisations with an optimised solution for agentic AI, minimising the risk of managing complex workflows while also safeguarding deployment through secure endpoint creation for APIs. AI initiatives are employed to deliver strategic advantages, but those advantages can't happen without optimised infrastructure control and security.'
To learn more about how to get started with the latest NAI version and new NVIDIA capabilities, visit our latest blog post.
NAI with agentic model support is now generally available.
About Nutanix
Nutanix is a global leader in cloud software, offering organizations a single platform for running applications and managing data, anywhere. With Nutanix, companies can reduce complexity and simplify operations, freeing them to focus on their business outcomes. Building on its legacy as the pioneer of hyperconverged infrastructure, Nutanix is trusted by companies worldwide to power hybrid multicloud environments consistently, simply, and cost-effectively. Learn more at www.nutanix.com or follow us on social media @nutanix.
Related Articles

NZ Herald
2 days ago
Anxious parents face tough choices on AI, from concern at what it might do to fear of their kids missing out
For Marc Watkins, a professor at the University of Mississippi who focuses on AI in teaching, 'we've already gone too far' to shield children from AI past a certain age. Yet some parents are still trying to remain gatekeepers to the technology. 'In my circle of friends and family, I'm the only one exploring AI with my child,' remarked Melissa Franklin, mother of a 7-year-old boy and a law student in Kentucky. 'I don't understand the technology behind AI,' she said, 'but I know it's inevitable, and I'd rather give my son a head start than leave him overwhelmed.'

'Benefits and risks'

The path is all the more difficult for parents given the lack of scientific research on AI's effects on users. Several parents cite a study published in June by MIT, showing that brain activity and memory were more stimulated in individuals not using generative AI than in those who had access to it. 'I'm afraid it will become a shortcut,' explained a father-of-three who preferred to remain anonymous. 'After this MIT study, I want them to use it only to deepen their knowledge.'

This caution shapes many parents' approaches. Tal prefers to wait before letting his sons use AI tools. Melissa Franklin only allows her son to use AI with her supervision to find information 'we can't find in a book, through Google, or on YouTube'. For her, children must be encouraged to 'think for themselves', with or without AI.

But one father – a computer engineer with a 15-year-old – doesn't believe kids will learn AI skills from their parents anyway. 'That would be like claiming that kids learn how to use TikTok from their parents,' he said. It's usually 'the other way around'.

Watkins, himself a father, says he is 'very concerned' about the new forms that generative AI is taking, but considers it necessary to read about the subject and 'have in-depth conversations about it with our children'. 'They're going to use artificial intelligence,' he said, 'so I want them to know the potential benefits and risks.'
The chief executive of AI chip giant Nvidia, Jensen Huang, often speaks of AI as 'the greatest equalisation force that we have ever known', democratising learning and knowledge. But Watkins fears a different reality: 'Parents will view this as a technology that will be used if you can afford it, to get your kid ahead of everyone else'. The computer scientist father readily acknowledged this disparity, saying: 'My son has an advantage because he has two parents with PhDs in computer science'. 'But that's 90% due to the fact that we are more affluent than average' – not their AI knowledge. 'That does have some pretty big implications,' Watkins said. -Agence France-Presse


Techday NZ
6 days ago
DuploCloud AI Suite launches on AWS Marketplace to boost DevOps
DuploCloud has announced the availability of its AI Suite through the new AI Agents and Tools category in the AWS Marketplace. The launch enables AWS customers to discover, purchase, and deploy DuploCloud's Agentic Help Desk for DevOps, providing tools designed to accelerate the development and deployment of AI agents and workflow automation within AWS environments.

Marketplace expansion

The AI Agents and Tools category in AWS Marketplace functions as a unified catalogue for a variety of artificial intelligence solutions from AWS Partners. DuploCloud's inclusion in this catalogue means customers can streamline procurement, accessing AI solutions via their existing AWS accounts and infrastructure. According to DuploCloud, the AI Suite is designed to help organisations minimise manual DevOps tasks and automate cloud operations. The suite includes built-in security protocols to simplify DevOps management and scale cloud-based initiatives, offering customers the ability to focus resources on core software development rather than operational overheads.

Agentic Help Desk and AI Studio

AI Suite introduces several key components for customers. At the centre is the AI Help Desk, which leverages large language models (LLMs) alongside live infrastructure data. This tool converts user requests into executable actions, facilitating human approval and teamwork as part of agentic DevOps workflows. The suite also features an AI Studio that supports the creation and deployment of agents within containerised Kubernetes environments. Together these tools allow users to automate even highly complex workflows based on written prompts. The platform executes these tasks securely, not merely suggesting actions but taking steps towards their completion whilst retaining oversight and approval workflows for teams managing critical infrastructure.
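The Help Desk behaviour described above, converting requests into executable actions while 'facilitating human approval', is essentially a human-in-the-loop gate: the agent proposes a step, and nothing runs until someone approves it. A minimal sketch of that pattern, with an invented action and a stub runner standing in for real infrastructure calls:

```python
# Sketch of a human-approval gate for agent-proposed actions.
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str   # human-readable summary shown to the approver
    command: str       # the concrete step the agent wants to run
    approved: bool = False

@dataclass
class ApprovalQueue:
    pending: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def propose(self, description: str, command: str) -> None:
        """Agent side: queue an action without executing anything."""
        self.pending.append(ProposedAction(description, command))

    def approve_and_run(self, index: int, runner) -> str:
        """Human side: approve one pending action and only then run it."""
        action = self.pending.pop(index)
        action.approved = True
        self.executed.append(action)
        return runner(action.command)

queue = ApprovalQueue()
queue.propose("Scale web tier to 3 replicas",
              "kubectl scale deploy/web --replicas=3")  # invented example action
# Nothing has executed yet; the stub runner below just echoes the command.
output = queue.approve_and_run(0, runner=lambda cmd: f"ran: {cmd}")
```

The design choice worth noting is that execution authority lives in `approve_and_run`, not in the agent: audit trails fall out naturally from the `executed` list.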
Streamlining procurement

Participation in the AWS Marketplace's new AI category enables organisations to accelerate their procurement and deployment of AI technologies. By leveraging their AWS accounts, customers are able to maintain central oversight of software licences, billing, and user permissions, reducing delays previously caused by extended vendor negotiations and multi-party evaluations for technology adoption.

Venkat Thiruvengadam, Chief Executive Officer at DuploCloud, highlighted the company's aims for the partnership, stating: "We're excited to offer AI Suite in the new AWS Marketplace AI Agents and Tools Category. AWS Marketplace allows us to provide customers with a streamlined way to access our Agentic Help Desk for DevOps, helping them elevate DevOps from writing scripts to building agentic, cross-system workflows."

Focus on automation and security

The company describes its AI Suite as capable of supporting both human oversight and end-to-end automation, integrating compliance and security controls throughout its operations. It aims to empower teams to operate cloud infrastructure efficiently without the necessity of traditional Infrastructure-as-Code expertise. As the AI marketplace grows, automation and collaboration features are positioned as essential for enterprises seeking rapid development and compliance within cloud environments. Customers in the AWS ecosystem now have access to DuploCloud's tools without navigating new procurement processes or vendor relationships. By centralising AI DevOps workflow capabilities in the AWS Marketplace, organisations can adopt self-service models underpinned by automation, while maintaining existing security and governance standards. DuploCloud positions its offering for both startup and enterprise customers looking to launch products faster and scale DevOps practices without significantly increasing personnel or operational complexity.


Techday NZ
6 days ago
Zoho unveils Zia LLM & agent marketplace with focus on privacy
Zoho has introduced its proprietary large language model, Zia LLM, alongside new AI infrastructure investments designed to strengthen its business-focused artificial intelligence offerings and data privacy protections. The suite of new tools includes the Zia LLM, over 25 prebuilt AI-powered agents available in an Agent Marketplace, the no-code Zia Agent Studio for custom agent building, and a Model Context Protocol (MCP) server providing third-party agents access to Zoho's extensive library of actions. These advancements are intended for both developers and end users, aiming to bring operational and financial efficiencies across a wide range of business needs and use cases.

Focus on technology and privacy

"Today's announcement emphasizes Zoho's longstanding aim to build foundational technology focused on protection of customer data, breadth and depth of capabilities, and value," said Mani Vembu, Chief Executive Officer at Zoho. "Because Zoho's AI initiatives are developed internally, we are able to provide customers with cutting-edge tool sets without compromising data privacy and organizational flexibility, democratizing the latest technology on a global scale."

The Zia LLM was developed entirely in-house using NVIDIA's AI accelerated computing platform and has been trained for Zoho product-specific use cases such as structured data extraction, summarisation, retrieval-augmented generation (RAG), and code generation. The model comes in three parameter sizes - 1.3 billion, 2.6 billion, and 7 billion - each optimised for different business contexts and benchmarked against similar open-source models. Zoho says these models allow the platform to optimise performance according to user needs, balancing computational power with efficiency, and plans to continue evolving its right-sizing approach to AI model deployment.
The Zia LLM will be rolled out across data centres in the United States, India, and Europe, initially supporting internal Zoho applications and expected to become available for customer deployment in the coming months.

Expansion in language and speech technology

Alongside its language model, Zoho is launching proprietary Automatic Speech Recognition (ASR) models capable of performing speech-to-text conversion in both English and Hindi. These models operate with low computational requirements without a reduction in accuracy and, according to Zoho, can deliver up to 75% better performance than comparable models in standard benchmark tests. Additional language support is expected to follow, particularly for languages predominantly spoken in Europe and India.

While many large language model integrations are supported on the Zoho platform, including ChatGPT, Llama, and DeepSeek, the company emphasises that Zia LLM enables customers to maintain their data on Zoho's servers, thus retaining control over privacy and security.

Agentic AI technology

To promote adoption of agentic AI, Zoho has made available a range of AI agents embedded within its core products. These agents are tailored to support common organisational functions such as sales, customer service, and account management. The newly updated Ask Zia conversational assistant now features business intelligence skills suitable for data engineers, analysts, and data scientists, allowing them to build data pipelines, create analytical reports, and initiate machine learning processes within an interactive environment. A new Customer Service Agent has also been launched, capable of processing and contextualising customer requests, providing direct responses, or escalating queries to human staff as needed.
Zia Agent Studio and Marketplace

The Zia Agent Studio offers a fully prompt-based, optional low-code environment for building and deploying AI agents, giving users access to more than 700 predefined actions spanning the Zoho app ecosystem. Agents can be set for autonomous operation, triggered by user action, or integrated into customer communications. When deployed, these agents function as digital employees, adhering to existing organisational access permissions and allowing administrators to audit behaviour, performance, and impact.

Zoho's Agent Marketplace, now part of its existing marketplace offering over 2,500 extensions, allows for rapid deployment of prebuilt and third-party AI agents. A selection of prebuilt agents is available, including:

- Revenue Growth Specialist, identifying opportunities for customer upsell and cross-sell
- Deal Analyser, which provides insights such as win probability and proposed follow-up actions for sales teams
- Candidate Screener, ranking job applicants based on skills and suitability

Zoho has committed to regularly adding more prebuilt agents to address broader business needs and enabling ecosystem partners, independent software vendors, and developers to build and host additional agents.

MCP interoperability and future roadmap

The deployment of the Model Context Protocol server will permit any MCP client to access data and actions from a growing collection of Zoho applications within the customer's existing permission framework. Using Zoho Flow, certain third-party tools can also be accessed, and Zoho Analytics now includes support for a local MCP server. Expanded application support is planned throughout the year. Looking forward, Zoho intends to scale the Zia LLM's parameter sizes and extend speech-to-text capabilities to more languages.
Plans also include the future release of a reasoning language model, further enhancements to Ask Zia's skills for finance and support teams, and the addition of protocol support for agent intercommunication within and beyond Zoho's platform. Zoho remains focused on balancing practical AI support for business needs with privacy requirements, stating that its models are not trained on consumer data and that no customer information is retained. The company reiterates its privacy pledge to customers, who retain complete oversight of data held in Zoho's own data centres.
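The MCP server Zoho describes speaks the Model Context Protocol, which runs over JSON-RPC 2.0, so a client invoking a server-side action sends a `tools/call` request. The sketch below builds such a message; the method name follows the MCP specification, while the Zoho tool name and arguments are invented for illustration:

```python
# Sketch: the JSON-RPC 2.0 message an MCP client sends to invoke a tool.
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialise a tools/call request as it would appear on the wire."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # standard MCP method for invoking a tool
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(msg)

# Hypothetical Zoho action exposed through the MCP server.
wire = mcp_tool_call(1, "crm.create_lead", {"name": "Ada", "company": "Acme"})
```

Because the protocol is the interoperability layer, any MCP client can send this same message shape, which is what lets third-party agents reach Zoho's action library within the customer's permission framework.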