
VDURA unveils Data Platform V11.2 to cut AI storage costs
VDURA's Data Platform V11.2 delivers native Kubernetes Container Storage Interface (CSI) support, comprehensive end-to-end encryption, and the launch of VDURACare Premier, a support package that combines hardware, software, and maintenance under a single contract. The release also includes a preview of V-ScaleFlow, a capability that manages data movement between high-performance QLC flash and high-capacity hard drives to improve efficiency and reduce costs for AI-scale operations.
According to VDURA, native CSI support eases multi-tenant Kubernetes deployments by enabling persistent-volume provisioning and management without scripting. The new end-to-end encryption feature protects data in transit and at rest, including tenant-specific encryption per volume.
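To make "provisioning without scripting" concrete, here is a minimal sketch of dynamic persistent-volume provisioning through a CSI-backed StorageClass using the official Kubernetes Python client. The StorageClass name vdura-flash, the namespace, and the capacity figure are illustrative assumptions, not documented VDURA names.

```python
# Minimal sketch: dynamically provisioning a persistent volume through a
# CSI-backed StorageClass with the official Kubernetes Python client.
# The StorageClass name "vdura-flash" is a placeholder, not a documented name.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="ai-training-scratch"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],        # shared access for multi-node training
        storage_class_name="vdura-flash",      # placeholder CSI StorageClass
        resources=client.V1ResourceRequirements(
            requests={"storage": "2Ti"}
        ),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="team-a", body=pvc
)
```

Once the claim is created, the CSI driver provisions and binds the volume automatically; a tenant never has to pre-create or script the underlying storage.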
VDURACare Premier brings hardware, software, and services under one contract, including a ten-year no-cost drive-replacement policy and 24-hour expert assistance.
The V-ScaleFlow technology, currently in preview within the software, introduces an optimised data management layer. It dynamically orchestrates placement and movement of data between QLC SSDs, such as the Pascari 128TB, and high-density hard drives exceeding 30TB each. This approach aims to reduce flash capacity requirements by more than 50 percent and cut power consumption, which the company says delivers significant cost savings for organisations building AI data pipelines.
The V-ScaleFlow system tackles industry challenges associated with write-intensive AI checkpoints and long-term data storage by using V-Burst to absorb demand spikes and write data sequentially to large NVMe drives, halving the amount of flash needed. For long-tail datasets and historic artefacts, the system moves data to high-capacity hard drives, which is intended to reduce both operational expenses and energy usage per petabyte stored.
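As an illustration of the tiering pattern described above, the sketch below models a burst-absorbing flash tier in front of high-capacity disks. This is a sketch of the general technique, not VDURA's V-Burst implementation; the batch size and cold-data threshold are invented for the example.

```python
# Illustrative model of a two-tier placement policy in the spirit of what the
# article describes (burst-absorbing flash in front of high-capacity HDDs).
# This is not VDURA's implementation; thresholds and names are invented.
import time

FLASH_DRAIN_BATCH = 8            # objects flushed per drain cycle
COLD_AFTER_SECONDS = 7 * 86400   # demote data untouched for a week

class TieredStore:
    def __init__(self):
        self.flash = {}   # object_id -> (payload, last_access)
        self.hdd = {}     # object_id -> payload

    def write(self, object_id, payload):
        # Bursts (e.g. AI checkpoints) always land on flash first.
        self.flash[object_id] = (payload, time.time())

    def drain(self):
        # Flush the coldest flash-resident objects to HDD in batches,
        # which keeps the HDD write pattern large and sequential.
        oldest = sorted(self.flash, key=lambda k: self.flash[k][1])
        now = time.time()
        for object_id in oldest[:FLASH_DRAIN_BATCH]:
            payload, last_access = self.flash[object_id]
            if now - last_access > COLD_AFTER_SECONDS:
                self.hdd[object_id] = payload
                del self.flash[object_id]

    def read(self, object_id):
        # Hot path hits flash and refreshes recency; cold path falls
        # through to HDD.
        if object_id in self.flash:
            payload, _ = self.flash[object_id]
            self.flash[object_id] = (payload, time.time())
            return payload
        return self.hdd[object_id]
```

The essential point the article makes is visible in the model: flash only has to be sized for the hot working set and checkpoint bursts, not for total capacity.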
The VDURACare Premier bundle replaces the complexity of separate hardware, software, and maintenance contracts with a single package that the company describes as risk-free coverage across a decade.
Benefits highlighted for Version 11.2 and V-ScaleFlow include seamless data movement between flash and disks, optimised storage economics that can lower total cost of ownership by up to 60 percent, sub-millisecond latency for NVMe-class performance, and streamlined Kubernetes deployment for stateful AI workloads.
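The economics claims are easiest to see with back-of-envelope arithmetic. The sketch below compares media cost for an all-flash build against a half-flash, half-HDD split; the per-terabyte prices are invented placeholders, and the published up-to-60-percent TCO figure also reflects power, density, and support costs not modelled here.

```python
# Back-of-envelope illustration of the capacity-shift arithmetic behind the
# cost claims. All prices are invented placeholders, not vendor figures.
FLASH_COST_PER_TB = 80.0   # assumed $/TB for QLC flash (placeholder)
HDD_COST_PER_TB = 15.0     # assumed $/TB for high-capacity HDD (placeholder)

total_tb = 10_000          # a 10 PB deployment

all_flash = total_tb * FLASH_COST_PER_TB

# Halving the flash footprint and serving the other half from HDD:
tiered = (total_tb * 0.5) * FLASH_COST_PER_TB + (total_tb * 0.5) * HDD_COST_PER_TB

print(f"all-flash media cost: ${all_flash:,.0f}")
print(f"tiered media cost:    ${tiered:,.0f}")
print(f"media saving:         {100 * (1 - tiered / all_flash):.0f}%")
```

Under these placeholder prices the tiered layout cuts media cost by roughly 41 percent before power and support savings are counted, which is the general direction of the vendor's claim.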
Ken Claffey, Chief Executive Officer of VDURA, said: "V11.2 delivers the speed, cloud-native simplicity, and security our customers expect - while V-ScaleFlow applies hyperscaler design principles, leveraging the same commodity SSDs and HDDs to enable efficient scaling and breakthrough economics."
VDURA stated that Data Platform V11.2 will become generally available on new V5000 systems during the third quarter of 2025, with the full release of V-ScaleFlow anticipated in the fourth quarter of 2025. Current V5000 users will be able to upgrade to Version 11.2 through an online update process.
VDURA is presenting the capabilities of its data platform at ISC 2025 alongside partners including Phison (whose Pascari SSDs feature in V-ScaleFlow), Seagate (maker of the Mozaic HDD platform), Starfish, and Cornelis Networks. The company is hosting the VDURA AI Data Challenge, an event featuring strongman Hafþór Björnsson, where attendees can engage with interactive data tasks and evaluate GPU-optimised data performance.
Commenting on the technology, Michael Wu, President and General Manager of Phison U.S., said: "Phison has collaborated closely with VDURA to validate V-ScaleFlow technology, enabling seamless integration of our highest-capacity QLC Pascari enterprise SSDs in the VDURA Data Platform. V-Burst optimises write-intensive AI workloads, delivering exceptional performance and endurance while driving down costs - a game-changer for HPC and AI environments."
Trey Layton, Vice President of Software and Product Management for Advanced Computing at Penguin Solutions, added: "Penguin Solutions is excited to see VDURA's V11.2 release and breakthrough features that include V-ScaleFlow, native CSI support, and end-to-end encryption that advance the operational goals of our enterprise, federal, and cloud customers. These enhancements simplify persistent storage orchestration across Kubernetes environments, ensure robust security without performance tradeoffs, and unlock compelling TCO improvements for organisations scaling AI and HPC workloads. VDURA continues to deliver a platform purpose-built for the future of real-time, inference-driven infrastructure."
Related Articles


Scoop - 14 hours ago
Healthcare's GenAI Gold Rush Is Here, But The Infrastructure Isn't Ready
Modernisation of legacy IT systems remains critical to meeting data security, privacy, and scalability demands for healthcare organisations.

Nutanix (NASDAQ: NTNX), a leader in hybrid multicloud computing, announced the findings of its seventh annual global Healthcare Enterprise Cloud Index (ECI) survey and research report, which measures enterprise progress with cloud adoption in the industry. The research showed that 99% of healthcare organisations surveyed are leveraging GenAI applications or workloads today, more than any other industry, spanning applications from AI-powered chatbots to code co-pilots and clinical development automation. However, the overwhelming majority (96%) say their current data security and governance measures are insufficient to fully support GenAI at scale.

'In healthcare, every decision we make has a direct impact on patient outcomes - including how we evolve our technology stack,' said Jon Edwards, Director IS Infrastructure Engineering at Legacy Health. 'We took a close look at how to integrate GenAI responsibly, and that meant investing in infrastructure that supports long-term innovation without compromising on data privacy or security. We're committed to modernising our systems to deliver better care, drive efficiency, and uphold the trust that patients place in us.'

This year's report revealed that healthcare leaders are adopting GenAI at record rates even as concerns remain. The number one issue flagged by healthcare leaders is integration with existing IT infrastructure (79%), followed closely by persistent healthcare data silos (65%) and development challenges with cloud-native applications and containers (59%).

'While healthcare has typically been slower to adopt new technologies, we've seen a significant uptick in the adoption of GenAI, much of this likely due to the ease of access to GenAI applications and tools,' said Scott Ragsdale, Senior Director, Sales - Healthcare & SLED at Nutanix. 'Even with such large adoption rates by organisations, there continue to be concerns given the importance of protecting healthcare data. Although all organisations surveyed are using GenAI in some capacity, we'll likely see more widespread adoption within those organisations as concerns around privacy and security are resolved.'

Healthcare respondents were asked about GenAI adoption and trends, Kubernetes and containers, how they run business- and mission-critical applications today, and where they plan to run them in the future. Key findings from this year's report include:

- GenAI adoption and deployment across healthcare will necessitate a more comprehensive approach to data security. Respondents indicate that significant work remains to reach the foundational level of data security and governance required to support GenAI implementation and success. The No. 1 challenge healthcare organisations face when leveraging or expanding GenAI is the privacy and security of using large language models (LLMs) with sensitive company data, and 96% of healthcare respondents agree their organisation could be doing more to secure its GenAI models and applications. Improving data security and governance at the scale needed to support emerging GenAI workloads will be a long-term challenge and priority for many healthcare organisations.
- Infrastructure modernisation is a prerequisite for GenAI at scale. Running modern applications at enterprise scale requires infrastructure that can meet complex data security, data integrity, and resilience requirements. Yet 99% of healthcare respondents admit they face challenges when scaling GenAI workloads from development to production, with integration with existing IT infrastructure the No. 1 issue. The report therefore argues that healthcare IT decision-makers should prioritise infrastructure investment and modernisation as a key enabler of GenAI initiatives.

- GenAI adoption in healthcare continues at a rapid pace, but challenges remain. 99% of industry respondents say their organisation is leveraging GenAI applications or workloads today, and most healthcare organisations believe GenAI will improve productivity, automation, and efficiency. Real-world use cases gravitate towards customer support and experience solutions (e.g., chatbots) and code generation and code co-pilots. However, organisations also note challenges and potential hindrances around patient data security and privacy, scalability, and complexity.

- Application containerisation and Kubernetes deployments are expanding across healthcare. Container-based infrastructure and application development can allow organisations to deliver seamless, secure access to patient and business data across hybrid and multicloud environments. Containerisation is pervasive across industry sectors and set to expand further in healthcare, with 99% of industry respondents saying their organisation is at least in the process of containerising applications. The trend may be driven by the fact that 92% of healthcare respondents agree their organisation benefits from adopting cloud-native applications and containers. These findings suggest that most healthcare IT decision-makers will be considering how containerisation fits into expansion strategies for new and existing workloads.

For the seventh consecutive year, Nutanix commissioned a global research study to learn about the state of enterprise cloud deployments, application containerisation trends, and GenAI application adoption. In the autumn of 2024, U.K. researcher Vanson Bourne surveyed 1,500 IT and DevOps/platform-engineering decision-makers around the world. The respondent base spanned multiple industries, business sizes, and geographies across North and South America; Europe, the Middle East and Africa (EMEA); and the Asia-Pacific-Japan (APJ) region. The full Healthcare Nutanix Enterprise Cloud Index report and an accompanying blog post are available from Nutanix.

About Nutanix
Nutanix is a global leader in cloud software, offering organizations a single platform for running applications and managing data, anywhere. With Nutanix, companies can reduce complexity and simplify operations, freeing them to focus on their business outcomes. Building on its legacy as the pioneer of hyperconverged infrastructure, Nutanix is trusted by companies worldwide to power hybrid multicloud environments consistently, simply, and cost-effectively.
Learn more at www.nutanix.com or follow the company on social media @nutanix.


Techday NZ - 19-06-2025
Mirantis unveils architecture to speed & secure AI deployment
Mirantis has released a comprehensive reference architecture to support IT infrastructure for AI workloads, aiming to help enterprises deploy AI systems quickly and securely.

The Mirantis AI Factory Reference Architecture is based on the company's k0rdent AI platform and is designed to offer a composable, scalable, and secure environment for artificial intelligence and machine learning (ML) workloads. According to Mirantis, the solution provides criteria for building, operating, and optimising AI and ML infrastructure at scale, and can be operational within days of hardware installation.

The architecture leverages the templated, declarative approach provided by k0rdent AI, which Mirantis claims enables rapid provisioning of required resources. This, the company states, accelerates prototyping, model iteration, and deployment, thereby shortening the overall AI development cycle. The platform features curated integrations, accessible via the k0rdent Catalog, for various AI and ML tools, observability frameworks, continuous integration and delivery, and security, all while adhering to open standards.

Mirantis positions the reference architecture as a response to rising demand for specialised compute resources, such as GPUs and CPUs, crucial for executing complex AI models.

"We've built and shared the reference architecture to help enterprises and service providers efficiently deploy and manage large-scale multi-tenant sovereign infrastructure solutions for AI and ML workloads," said Shaun O'Meara, chief technology officer, Mirantis. "This is in response to the significant increase in the need for specialized resources (GPU and CPU) to run AI models while providing a good user experience for developers and data scientists who don't want to learn infrastructure."

The architecture addresses several high-performance computing challenges, including Remote Direct Memory Access (RDMA) networking, GPU allocation and slicing, advanced scheduling, performance tuning, and Kubernetes scaling. It also supports integration with multiple AI platform services, such as Gcore Everywhere Inference and the NVIDIA AI Enterprise software ecosystem.

In contrast to typical cloud-native workloads, which are optimised for scale-out, multi-core environments, AI tasks often require aggregating multiple GPU servers into a single high-performance computing instance. This shift demands RDMA and ultra-high-performance networking, areas the Mirantis reference architecture is designed to accommodate.

The reference architecture uses Kubernetes and is adaptable to various AI workload types, including training, fine-tuning, and inference, across a range of environments: dedicated or shared servers, virtualised settings using KubeVirt or OpenStack, public cloud, hybrid or multi-cloud configurations, and edge locations. It addresses the specific needs of AI workloads, such as high-performance storage and high-speed networking technologies, including Ethernet, InfiniBand, NVLink, NVSwitch, and CXL, to manage the movement of the large datasets inherent to AI applications.
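As a concrete illustration of the GPU allocation and slicing the architecture covers, the sketch below requests a fractional GPU (an NVIDIA MIG slice) for a pod via the Kubernetes Python client. It assumes a cluster where the NVIDIA device plugin exposes MIG slices as extended resources; the image and namespace are placeholders, and k0rdent's own templates are not shown here.

```python
# Minimal sketch: requesting a fractional GPU for a pod. With NVIDIA's
# device plugin configured for MIG, slices appear as extended resources
# such as "nvidia.com/mig-1g.5gb". Image and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="inference-worker"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="worker",
            image="example.com/inference:latest",       # placeholder image
            resources=client.V1ResourceRequirements(
                limits={"nvidia.com/mig-1g.5gb": "1"}   # one MIG slice
            ),
        )],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ai-team", body=pod)
```

Slicing lets several tenants share one physical accelerator with hardware isolation, which is the resource-sharing problem the reference architecture calls out.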
Mirantis has identified and aimed to resolve several challenges in AI infrastructure, including:

- Time-intensive fine-tuning and configuration compared with traditional compute systems;
- Support for hard multi-tenancy to ensure security, isolation, resource allocation, and contention management;
- Maintaining data sovereignty for data-driven AI and ML workloads, particularly where models contain proprietary information;
- Ensuring compliance with varied regional and regulatory standards;
- Managing distributed, large-scale infrastructure, which is common in edge deployments;
- Effective sharing of high-demand compute components such as GPUs;
- Accessibility for users such as data scientists and developers who may not have specific IT infrastructure expertise.

The composable nature of the Mirantis AI Factory Reference Architecture allows users to assemble infrastructure from reusable templates across compute, storage, GPU, and networking components, which can then be tailored to specific AI use cases. The architecture includes support for a variety of hardware accelerators, including products from NVIDIA, AMD, and Intel.

Mirantis reports that the AI Factory Reference Architecture has been developed to support the operational requirements of enterprises seeking scalable, sovereign AI infrastructure, especially where control over data and regulatory compliance are paramount. The framework is intended as a guideline to streamline the deployment and ongoing management of these environments, offering modularity and integration with open-standard tools and platforms.


Techday NZ - 10-06-2025
iFLYTEK wins CNCF award for AI model training with Volcano
iFLYTEK has been named the winner of the Cloud Native Computing Foundation's End User Case Study Contest for advancements in scalable artificial intelligence infrastructure using the Volcano project. The selection recognises iFLYTEK's deployment of Volcano to address the operational inefficiencies and resource-management issues that arose as the company expanded its AI workloads.

iFLYTEK, which specialises in speech and language artificial intelligence, reported underutilised GPUs, increasingly complex workflows, and competition among teams for resources as its computing demands grew. These problems slowed development progress and placed additional strain on infrastructure assets.

With the implementation of Volcano, iFLYTEK introduced elastic scheduling, directed acyclic graph (DAG)-based workflows, and multi-tenant isolation into its AI model training operations. This transition allowed the business to improve the efficiency of its infrastructure and simplify the management of large-scale training projects. Key operational improvements cited include a significant increase in resource utilisation and reductions in system disruptions.

Dong Jiang, Senior Platform Architect at iFLYTEK, said, "Before Volcano, coordinating training under large-scale GPU clusters across teams meant constant firefighting, from resource bottlenecks and job failures to debugging tangled training pipelines. Volcano gave us the flexibility and control to scale AI training reliably and efficiently. We're honoured to have our work recognized by CNCF, and we're excited to share our journey with the broader community at KubeCon + CloudNativeCon China."

Volcano is a cloud-native batch system built on Kubernetes, designed to support performance-focused workloads such as artificial intelligence and machine learning training, big-data processing, and scientific computing. Its features include job orchestration, resource fairness, and queue management, intended to maximise the efficient management of distributed workloads. Volcano was accepted into the CNCF Sandbox in 2020 and reached the Incubating maturity level in 2022, reflecting increasing adoption for compute-intensive operations.

iFLYTEK's engineering team cited the need for infrastructure that could adapt to the rising scale and complexity of AI model training. Their objectives were to improve the allocation of computing resources, manage multi-stage workflows efficiently, and limit disruptions to jobs while ensuring equitable resource access among multiple internal teams.

The adoption of Volcano yielded several measurable outcomes for iFLYTEK's AI infrastructure. The company reported a 40% increase in GPU utilisation, contributing to lower infrastructure costs and reduced idle periods, and a 70% faster recovery from training-job failures, which made AI development more consistent and uninterrupted. Hyperparameter searches, a process integral to AI model optimisation, were accelerated by 50%, allowing the company's teams to test and refine models more swiftly.

Chris Aniszczyk, Chief Technology Officer at CNCF, said, "iFLYTEK's case study shows how open source can solve complex, high-stakes challenges at scale. By using Volcano to boost GPU efficiency and streamline training workflows, they've cut costs, sped up development, and built a more reliable AI platform on top of Kubernetes, which is essential for any organization striving to lead in AI."
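For a flavour of what gang-scheduled training under Volcano looks like, the sketch below submits a minimal Volcano Job through the Kubernetes Python client. The queue, image, and resource figures are placeholders; minAvailable is what gives Volcano its all-or-nothing gang-scheduling behaviour, so a distributed job never runs with a partial set of workers.

```python
# Minimal sketch: submitting a gang-scheduled training job to Volcano via
# the Kubernetes Python client. Image and resource figures are placeholders.
from kubernetes import client, config

config.load_kube_config()

volcano_job = {
    "apiVersion": "batch.volcano.sh/v1alpha1",
    "kind": "Job",
    "metadata": {"name": "train-demo"},
    "spec": {
        "schedulerName": "volcano",
        "minAvailable": 4,          # gang scheduling: all 4 workers or none
        "queue": "default",         # queues enforce fair sharing across teams
        "tasks": [{
            "name": "worker",
            "replicas": 4,
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "trainer",
                        "image": "example.com/trainer:latest",  # placeholder
                        "resources": {"limits": {"nvidia.com/gpu": 1}},
                    }],
                },
            },
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="batch.volcano.sh", version="v1alpha1",
    namespace="default", plural="jobs", body=volcano_job,
)
```

Queues and gang scheduling of this kind are how a platform team can share a GPU cluster across groups without the partial-allocation deadlocks that leave accelerators idle.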
As artificial intelligence workloads become increasingly complex and reliant on large-scale compute resources, tools like Volcano have seen growing use among organisations seeking more effective operational strategies. iFLYTEK's experience with the platform will be the subject of a presentation at KubeCon + CloudNativeCon China, where company representatives will present the case study, titled "Scaling Large Model Training in Kubernetes Clusters with Volcano," sharing technical and practical insights with participants seeking to optimise large-scale artificial intelligence training infrastructure.