
Latest news with Shaun O'Meara

Mirantis unveils architecture to speed & secure AI deployment

Techday NZ

19-06-2025


Mirantis has released a comprehensive reference architecture to support IT infrastructure for AI workloads, aiming to help enterprises deploy AI systems quickly and securely. The Mirantis AI Factory Reference Architecture is based on the company's k0rdent AI platform and is designed to offer a composable, scalable, and secure environment for artificial intelligence and machine learning (ML) workloads. According to Mirantis, the solution provides criteria for building, operating, and optimising AI and ML infrastructure at scale, and can be operational within days of hardware installation.

The architecture leverages the templated and declarative approaches provided by k0rdent AI, which Mirantis claims enable rapid provisioning of required resources. This, the company states, leads to accelerated prototyping, model iteration, and deployment, thereby shortening the overall AI development cycle. The platform features curated integrations, accessible via the k0rdent Catalog, for various AI and ML tools, observability frameworks, continuous integration and delivery, and security, all while adhering to open standards.

Mirantis positions the reference architecture as a response to rising demand for the specialised compute resources, such as GPUs and CPUs, needed to execute complex AI models. "We've built and shared the reference architecture to help enterprises and service providers efficiently deploy and manage large-scale multi-tenant sovereign infrastructure solutions for AI and ML workloads," said Shaun O'Meara, chief technology officer, Mirantis. "This is in response to the significant increase in the need for specialized resources (GPU and CPU) to run AI models while providing a good user experience for developers and data scientists who don't want to learn infrastructure."
The architecture addresses several high-performance computing challenges, including Remote Direct Memory Access (RDMA) networking, GPU allocation and slicing, advanced scheduling, performance tuning, and Kubernetes scaling. It also supports integration with multiple AI platform services, such as Gcore Everywhere Inference and the NVIDIA AI Enterprise software ecosystem.

In contrast to typical cloud-native workloads, which are optimised for scale-out and multi-core environments, AI tasks often require the aggregation of multiple GPU servers into a single high-performance computing instance. This shift demands RDMA and ultra-high-performance networking, areas the Mirantis reference architecture is designed to accommodate.

The reference architecture uses Kubernetes and is adaptable to various AI workload types, including training, fine-tuning, and inference, across a range of environments: dedicated or shared servers, virtualised settings using KubeVirt or OpenStack, public cloud, hybrid or multi-cloud configurations, and edge locations. The solution addresses the specific needs of AI workloads, such as high-performance storage and high-speed networking technologies, including Ethernet, InfiniBand, NVLink, NVSwitch, and CXL, to manage the movement of the large data sets inherent to AI applications.
Mirantis has identified, and aims to resolve, several challenges in AI infrastructure, including:

  • Time-intensive fine-tuning and configuration compared to traditional compute systems
  • Support for hard multi-tenancy to ensure security, isolation, resource allocation, and contention management
  • Maintaining data sovereignty for data-driven AI and ML workloads, particularly where models contain proprietary information
  • Ensuring compliance with varied regional and regulatory standards
  • Managing distributed, large-scale infrastructure, which is common in edge deployments
  • Effective resource sharing, particularly of high-demand compute components such as GPUs
  • Enabling accessibility for users such as data scientists and developers who may not have specific IT infrastructure expertise

The composable nature of the Mirantis AI Factory Reference Architecture allows users to assemble infrastructure using reusable templates across compute, storage, GPU, and networking components, which can then be tailored to specific AI use cases. The architecture includes support for a variety of hardware accelerators, including products from NVIDIA, AMD, and Intel.

Mirantis reports that its AI Factory Reference Architecture has been developed to support the operational requirements of enterprises seeking scalable, sovereign AI infrastructure, especially where control over data and regulatory compliance are paramount. The framework is intended as a guideline to streamline the deployment and ongoing management of these environments, offering modularity and integration with open-standard tools and platforms.
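The templated, declarative model described above can be pictured as a short cluster definition applied to a management cluster. The following is a hypothetical sketch only: the API group, template name, and config fields are assumptions in the style of k0rdent's documentation, not verbatim from it.

```yaml
# Hypothetical, k0rdent-style declarative cluster definition.
# Field names and values are illustrative and may not match the shipped CRD schema.
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: ai-training-cluster
  namespace: kcm-system
spec:
  template: aws-gpu-standalone-0-1-0   # reusable template from a catalog (name assumed)
  credential: aws-credential           # pre-registered cloud credential (assumed)
  config:
    region: us-west-2
    workersNumber: 4
    worker:
      instanceType: p4d.24xlarge       # GPU instance class for training workloads
```

The point of the pattern is that the entire GPU cluster is described as data: changing the worker count or instance type and re-applying the manifest is the whole provisioning workflow.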

Build, Operate and Optimize AI and ML Infrastructure at Scale with Industry's First Reference Architecture to Support AI Workloads

Business Wire

17-06-2025


CAMPBELL, Calif.--(BUSINESS WIRE)-- Mirantis, the Kubernetes-native AI infrastructure company enabling enterprises to build and operate scalable, secure, and sovereign AI infrastructure across any environment, today announced the industry's first comprehensive reference architecture for IT infrastructure to support AI workloads. The Mirantis AI Factory Reference Architecture, built on Mirantis k0rdent AI, provides a secure, composable, scalable, and sovereign platform for building, operating, and optimizing AI and ML infrastructure at scale. It enables:

  • AI workloads to be deployed within days of hardware installation, using k0rdent AI's templated, declarative model for rapid provisioning
  • Faster prototyping, iteration, and deployment of models and services to dramatically shorten the AI development lifecycle
  • Curated integrations (via the k0rdent Catalog) for AI/ML tools, observability, CI/CD, security, and more, which leverage open standards

"We've built and shared the reference architecture to help enterprises and service providers efficiently deploy and manage large-scale multi-tenant sovereign infrastructure solutions for AI and ML workloads," said Shaun O'Meara, chief technology officer, Mirantis. "This is in response to the significant increase in the need for specialized resources (GPU and CPU) to run AI models while providing a good user experience for developers and data scientists who don't want to learn infrastructure."

With the reference architecture, Mirantis addresses complex issues related to high-performance computing that include remote direct memory access (RDMA) networking, GPU allocation and slicing, sophisticated scheduling requirements, performance tuning, and Kubernetes scaling.
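GPU slicing of the kind mentioned above is commonly expressed in Kubernetes as a fractional-GPU resource request on a pod. The sketch below assumes a cluster running the NVIDIA device plugin with Multi-Instance GPU (MIG) enabled; the exact resource name (here a 1g.5gb A100 profile) depends on the cluster's MIG configuration, and the container image is a placeholder.

```yaml
# Illustrative pod spec requesting one MIG slice of a GPU.
# Assumes the NVIDIA device plugin is installed and MIG is configured;
# the resource name below is profile-specific and shown as an example.
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker
spec:
  containers:
  - name: model-server
    image: registry.example.com/model-server:latest   # placeholder image
    resources:
      limits:
        nvidia.com/mig-1g.5gb: 1   # one 1g.5gb slice rather than a whole GPU
```

Because the slice is an ordinary extended resource, the Kubernetes scheduler handles placement and contention the same way it does for CPU and memory, which is what makes multi-tenant sharing of scarce accelerators practical.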
The architecture can also integrate a choice of AI Platform Services, including Gcore Everywhere Inference and the NVIDIA AI Enterprise software ecosystem.

Cloud native workloads, which are typically designed for scale-out and multi-core operations, are quite different from AI workloads, which can require turning many GPU-based servers into a single supercomputer with aggregated memory, demanding RDMA and ultra-high-performance networking.

The reference architecture leverages Kubernetes and supports multiple AI workload types (training, fine-tuning, inference) across dedicated or shared servers; virtualized environments (KubeVirt/OpenStack); public cloud or hybrid/multi-cloud; and edge locations. It addresses the novel challenges of provisioning, configuring, and maintaining AI infrastructure and of supporting the unique needs of its workloads, including high-performance storage and ultra-high-speed networking (Ethernet, InfiniBand, NVLink, NVSwitch, CXL) to keep up with AI data movement. These challenges include:

  • Fine-tuning and configuration, which typically take longer to implement and learn than traditional compute systems
  • Hard multi-tenancy for data security and isolation, resource allocation, and contention management
  • Data sovereignty, since AI and ML workloads are typically data-driven or contain unique intellectual property in their models, making it critical to control how and where that data is used
  • Compliance with regional and regulatory requirements
  • Managing scale and sprawl, because AI and ML infrastructure typically comprises a large number of compute systems that can be highly distributed for edge workloads
  • Resource sharing of GPUs and other vital compute resources, which are scarce and expensive and so must be shared effectively or leveraged wherever they are available
  • Skills availability, because many AI and ML projects are run by data scientists or developers who are not IT infrastructure specialists
The Mirantis AI Factory Reference Architecture is designed to be composable, so that users can assemble infrastructure from reusable templates across compute, storage, GPU, and networking layers, tailored to their specific AI workload needs. It includes support for NVIDIA, AMD, and Intel AI accelerators. The complete reference architecture document and further information are available from Mirantis.

About Mirantis

Mirantis is the Kubernetes-native AI infrastructure company, enabling organizations to build and operate scalable, secure, and sovereign infrastructure for modern AI, machine learning, and data-intensive applications. By combining open source innovation with deep expertise in Kubernetes orchestration, Mirantis empowers platform engineering teams to deliver composable, production-ready developer platforms across any environment: on-premises, in the cloud, at the edge, or in data centers. As enterprises navigate the growing complexity of AI-driven workloads, Mirantis delivers the automation, GPU orchestration, and policy-driven control needed to cost-effectively manage infrastructure with confidence and agility. Committed to open standards and freedom from lock-in, Mirantis ensures that customers retain full control of their infrastructure strategy. Mirantis serves many of the world's leading enterprises, including Adobe, Ericsson, Inmarsat, PayPal, and Societe Generale. Learn more at

Mirantis k0rdent unifies AI, VM & container workloads at scale

Techday NZ

30-05-2025


Mirantis has released updates to its k0rdent platform, introducing unified management capabilities for both containerised and virtual machine (VM) workloads, aimed at supporting high-performance AI pipelines, modern microservices, and legacy applications. The new k0rdent Enterprise and k0rdent Virtualization offerings use a Kubernetes-native model to unify the management of AI, containerised, and VM-based workloads. By providing a single control plane, Mirantis aims to simplify operational complexity and reduce the need for multiple siloed tools when handling diverse workload requirements.

k0rdent's unified infrastructure management allows organisations to manage AI services, containers, and VM workloads seamlessly within one environment. The platform leverages Kubernetes orchestration to automate the provisioning, scaling, and recovery of both containers and VMs, helping deliver consistent performance at scale. The platform also improves resource utilisation by automating the scheduling of computing and storage resources for various workloads through dynamic allocation. According to the company, this optimisation contributes to more efficient operations and cost control across modern and traditional application environments.

Organisations can benefit from faster deployment cycles, as k0rdent provides declarative infrastructure and self-service templates for containers and VMs. These features are designed to reduce delays typically associated with provisioning and deployment, allowing teams to accelerate time-to-value for projects.

Enhanced portability and flexibility form a key part of the platform's approach. Workloads, including AI applications and microservices, can run alongside traditional VM-based applications on public cloud, private data centres, or hybrid infrastructure, without requiring refactoring. This capability aims to support a wide range of operational strategies and application modernisation efforts.
Shaun O'Meara, Chief Technology Officer at Mirantis, stated, "Organisations are navigating a complex mix of legacy systems and emerging AI demands. k0rdent Enterprise and k0rdent Virtualization are delivering a seamless path to unified, Kubernetes-native AI infrastructure, enabling faster deployment, easier compliance, and reduced risk across any public, private, hybrid, or edge environment."

With the new updates, platform engineers can define, deploy, and operate Kubernetes-based infrastructure using declarative automation, GitOps workflows, and validated templates from the Mirantis ecosystem. The solution is built on k0s, an open source CNCF Sandbox Kubernetes distribution, which Mirantis says enables streamlined infrastructure management and supports digital transformation initiatives across enterprises.

k0rdent Virtualization, which operates on Mirantis k0rdent Enterprise, is positioned as an alternative to VMware tools such as vSphere, ESXi, and vRealize. This is intended to help enterprises seeking to modernise application portfolios or expand edge computing infrastructure, including the integration of AI and cloud-native workloads, while retaining support for legacy infrastructure.

The platform supports distributed workloads running across a variety of environments. It enables platform engineering teams to manage Kubernetes clusters at scale, build tailored internal developer platforms, and maintain compliance and operational consistency. k0rdent offers composable features through declarative automation, centralised policy enforcement, and deployment templates that can be used with Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), vSphere, and OpenStack. Mirantis provides k0rdent Enterprise and k0rdent Virtualization directly and via channel partners to meet the needs of organisations managing distributed and AI-driven workloads.
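As an illustration of what managing a VM with the same Kubernetes-native tooling as containers looks like in practice, below is a minimal KubeVirt-style VirtualMachine manifest. It is a generic sketch (the disk image reference is a placeholder) rather than an excerpt from k0rdent documentation; k0rdent Virtualization builds on this general KubeVirt pattern.

```yaml
# Minimal KubeVirt VirtualMachine manifest: a VM declared and scheduled
# through the same Kubernetes API as containerised workloads.
# The containerDisk image below is a placeholder, not a real artifact.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm
spec:
  running: true              # start the VM as soon as it is admitted
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio    # paravirtualised disk bus
        resources:
          requests:
            memory: 2Gi
      volumes:
      - name: rootdisk
        containerDisk:
          image: registry.example.com/legacy-app-disk:latest
```

Because the VM is just another Kubernetes object, the same GitOps workflows, policy enforcement, and scheduling apply to it and to neighbouring container Deployments alike, which is the "single control plane" benefit described above.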

Mirantis k0rdent Accelerates AI Infrastructure Adoption With Unified Cloud Native and Virtualized Workload Management

Business Wire

28-05-2025


CAMPBELL, Calif.--(BUSINESS WIRE)-- Mirantis, providing organizations with total control over their strategic infrastructure using open source software, today announced Mirantis k0rdent Enterprise and Mirantis k0rdent Virtualization, unifying infrastructure for AI, containerized, and VM-based workloads through a Kubernetes-native model and streamlining operations for high-performance AI pipelines, modern microservices, and legacy applications alike. k0rdent bridges the gap between containerized and virtualized infrastructure with:

  • Unified Infrastructure Management: seamlessly manage AI services, modern containers, and VM-based workloads under one Kubernetes-native control plane, reducing complexity and siloed tooling
  • Scalability and Automation: leverages Kubernetes' orchestration capabilities to automate container and VM provisioning, scaling, and recovery for consistent performance at scale
  • Improved Resource Utilization: optimizes compute and storage usage across workloads through automated scheduling and dynamic resource allocation
  • Faster Time-to-Value: accelerates deployment cycles and reduces provisioning delays with declarative infrastructure and self-service container and VM templates
  • Enhanced Portability and Flexibility: runs AI applications and modern microservices alongside traditional VM-based applications on any cloud or on-premises infrastructure without having to refactor applications

"Organizations are navigating a complex mix of legacy systems and emerging AI demands," said Shaun O'Meara, chief technology officer, Mirantis.
"k0rdent Enterprise and k0rdent Virtualization are delivering a seamless path to unified, Kubernetes-native AI infrastructure, enabling faster deployment, easier compliance, and reduced risk across any public, private, hybrid, or edge environment."

With k0rdent, platform engineers can define, deploy, and operate consistent, policy-enforced Kubernetes infrastructure using declarative automation, GitOps workflows, and validated templates from the Mirantis ecosystem. Leveraging k0s, a self-contained open source CNCF Sandbox Kubernetes distribution, k0rdent simplifies infrastructure management and accelerates digital transformation initiatives.

k0rdent Virtualization runs on Mirantis k0rdent Enterprise as an alternative to the VMware products vSphere, ESXi, and vRealize. By supporting legacy infrastructure, k0rdent Virtualization is particularly suited to enterprises modernizing application portfolios and expanding edge infrastructure with AI and cloud native workloads.

k0rdent is designed to support modern distributed workloads across any infrastructure. It enables platform engineering teams to manage Kubernetes clusters at scale, create customized internal developer platforms (IDPs), and accelerate innovation while maintaining compliance and operational consistency. k0rdent delivers composable, Kubernetes-native capabilities through declarative automation, centralized policy enforcement, and production-ready deployment templates for environments including Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), vSphere, and OpenStack. k0rdent Enterprise and k0rdent Virtualization are available directly from Mirantis or via channel partners.

About Mirantis

Mirantis helps organizations simplify operations, reduce complexity, and accelerate innovation by providing open source solutions for delivering and managing modern distributed applications at scale.
The company enables platform engineering teams to build and operate secure, scalable, and customizable developer platforms across any environment—on-premises, public cloud, hybrid, or edge. As AI-driven workloads become a core component of modern architectures, Mirantis provides the automation, multi-cloud orchestration, and infrastructure flexibility required to support high-performance AI, machine learning, and data-intensive applications. Committed to open standards and avoiding vendor lock-in, Mirantis empowers organizations to deploy and operate infrastructure and services on their terms. Mirantis serves many of the world's leading enterprises, including Adobe, Ericsson, Inmarsat, PayPal, and Societe Generale. Learn more at
