F5 Delivers Scalable And Secure Cloud-Native Network Functionality For AI And High-Bandwidth Applications

Scoop | 21-05-2025

Press Release – F5
F5 (NASDAQ: FFIV), the global leader in delivering and securing every app and API, today unveiled F5 BIG-IP Next Cloud-Native Network Functions (CNF) 2.0, an evolved solution that significantly enhances the capabilities of the F5 Application Delivery and Security Platform (ADSP) for large-scale cloud-native applications. With advanced Kubernetes-native features, F5 BIG-IP Next CNF 2.0 delivers scalable, resource-efficient, and secure network functionality for telecommunications service providers, internet service providers (ISPs), cloud service providers, and large enterprises, redefining how organisations handle the increasingly complex, resource-intensive operations driven by high-bandwidth applications such as AI.
Designed to support diverse industries—from telecommunications to cloud services—F5 BIG-IP Next CNF 2.0 helps organisations revolutionise high-bandwidth operations. Service providers can cut costs with more efficient resource allocation and scaling, mitigate modern security threats, and simplify management through Kubernetes-native automation. By integrating essential services such as DDoS protection, firewall, intrusion prevention system (IPS), and carrier-grade NAT (CGNAT), F5 BIG-IP Next CNF 2.0 empowers providers to consolidate network operations, safeguard infrastructure, and proactively scale amidst increasing traffic demands.
"Service providers and large enterprises are under pressure to scale faster, operate leaner, and stay secure—all in increasingly complex environments," said Kunal Anand, Chief Innovation Officer at F5. "With BIG-IP Next CNF 2.0, we're extending the F5 ADSP with a truly cloud-native solution built for modern, decentralised infrastructure. Unlike legacy virtualised approaches that burn resources, our Kubernetes-native architecture unlocks smarter scaling, stronger security, and more efficient delivery of high-bandwidth services—giving customers the flexibility to move faster without compromise."
Raising the Bar for Cloud-Native Network Functions
Telecommunications and enterprise networks face an urgent need to balance escalating traffic volumes, tight budgets, and growing security threats—all within complex, distributed architectures. F5 BIG-IP Next CNF 2.0 directly addresses these challenges with tools that consolidate network functions, reduce resource consumption, and optimise scalability and security. Highlights of F5 BIG-IP Next CNF 2.0 include:
Disaggregation (DAG): Enables horizontal scalability for traffic steering and resource optimisation.
Accelerated DNS: Offers faster query responses and reduced latency via caching and secure zone transfers.
Policy Enforcer: Integrates traffic optimisation features like video acceleration, URL filtering, and context-aware controls.
Unified Security Services: Combines firewall, DDoS mitigation, IPS, and CGNAT for centralised management and robust protection.
Platform Enhancements: Maximises flexibility with Kubernetes-native automation and separate scaling of control and data planes (see the sketch after this list).
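
To make the idea of API-driven, independently scaled planes concrete, here is a minimal sketch using the official Kubernetes Python client. It assumes the CNF data plane runs as a Deployment named "cnf-data-plane" in a "cnf" namespace; those names and the replica count are illustrative assumptions, as BIG-IP Next CNF 2.0's actual resources and operators are product-specific.

    # Minimal sketch: scale a hypothetical CNF data-plane Deployment via the
    # Kubernetes API, leaving the control plane untouched. Names are assumed.
    from kubernetes import client, config

    def scale_data_plane(replicas: int, namespace: str = "cnf") -> None:
        config.load_kube_config()  # use load_incluster_config() inside a cluster
        apps = client.AppsV1Api()
        apps.patch_namespaced_deployment_scale(
            name="cnf-data-plane",          # hypothetical Deployment name
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    if __name__ == "__main__":
        scale_data_plane(8)  # scale out the data plane ahead of a traffic peak

In practice a call like this would sit behind an autoscaler or CI/CD pipeline rather than be run by hand.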
Optimised for Large Networks Across Industries
F5 BIG-IP Next CNF 2.0 helps telecommunications providers supercharge their 4G and 5G environments with advanced traffic steering and enhanced security tailored for N6/SGi-LAN architectures. ISPs benefit from capabilities like CGNAT to mitigate IPv4 shortages while boosting performance through system disaggregation. Cloud service providers gain the edge with scalable global server load balancing (GSLB) and AI-ready DNS features, ensuring seamless digital experiences. Enterprises can equip IT and SecOps teams with intelligent traffic optimisation, robust DDoS defences, and simplified policy enforcement for bandwidth-intensive applications, reinforcing their operational agility and security posture.
With 33 per cent lower CPU utilisation, F5 BIG-IP Next CNF 2.0 reduces operational costs and optimises resource consumption. The solution's independent scalability—allowing separate data and control plane scaling—ensures flexibility without bottlenecks, while its edge-ready and power-efficient architecture guarantees low latency and superior user experiences. Integrated security measures protect against large-scale network attacks, and Kubernetes-native automation streamlines workflows with API-driven deployments for faster, simplified operations.
F5 BIG-IP Next CNF 2.0 consolidates services to reduce infrastructure costs by over 60 per cent. Disaggregation enables seamless scalability across CNF instances, while DNS acceleration minimises latency for end users. Advanced traffic optimisation ensures smooth performance during peak demand, empowering service providers to excel in high-bandwidth applications.
F5 BIG-IP Next CNF 2.0 + Red Hat OpenShift
This week at Red Hat Summit 2025, F5 is unveiling BIG-IP Next CNF 2.0 functionality on Red Hat OpenShift, the industry's leading hybrid cloud application platform powered by Kubernetes. BIG-IP Next CNF 2.0 is designed to work seamlessly with Red Hat OpenShift, which gives service providers a critical foundation for deploying scalable cloud-native applications on a trusted, consistent platform. By combining Red Hat OpenShift's robust Kubernetes management capabilities with F5 BIG-IP Next CNF 2.0's network functions, service providers can scale their applications more efficiently while unlocking additional value, including advanced traffic handling, optimised security, and simplified operations. Many service providers already rely on Red Hat OpenShift for modern cloud-native operations.
Visit www.f5.com to learn more about how F5 enables transformational cloud-native operations for interconnected networks.
About F5
F5, Inc. (NASDAQ: FFIV) is the global leader that delivers and secures every app. Backed by three decades of expertise, F5 has built the industry's premier platform—F5 Application Delivery and Security Platform (ADSP)—to deliver and secure every app, every API, anywhere: on-premises, in the cloud, at the edge, and across hybrid, multicloud environments. F5 is committed to innovating and partnering with the world's largest and most advanced organisations to deliver fast, available, and secure digital experiences. Together, we help each other thrive and bring a better digital world to life.


Related Articles

Snyk acquires Invariant Labs to boost AI-native app security
Techday NZ | 3 days ago

Snyk has announced the acquisition of Invariant Labs, a move set to expand its AI security capabilities and address the increasing security demands of AI-native and agentic applications. Invariant Labs, known for its work in shaping security standards for agentic AI, will now become part of Snyk, integrating its research and technologies with Snyk's recently launched AI Trust Platform. The acquisition marks Snyk's twelfth to date and brings with it a new research and development function, Snyk Labs, to advance security for emerging AI risks.

AI security integration

Peter McKay, Chief Executive Officer at Snyk, commented on the impact of the acquisition: "This acquisition is an important integration into Snyk's recently launched AI Trust Platform that adds the ability to secure applications from emergent threats. Snyk can now offer customers a single platform to address both current application and agentic AI vulnerabilities."

According to Snyk, the technologies and approaches developed by Invariant Labs will be absorbed into Snyk Labs, concentrating efforts on research regarding AI security, especially in relation to large language models (LLMs), autonomous agents, and Model Context Protocol (MCP) systems. Snyk Labs will serve as the company's new research arm, delivering capabilities through the AI Trust Platform by focusing on threats such as tool poisoning and MCP rug pulls.

With the rapid growth of AI-native software in enterprise settings, security teams are increasingly confronted with new and unfamiliar threats. Snyk's acquisition of Invariant Labs aims to provide consolidated tools and intelligence, equipping customers to manage risks associated with agent-based systems in real-time production environments.

Responding to evolving risks

Snyk emphasised that the integration will allow security professionals to secure not only established applications, but also the emerging generation of AI-native and agentic software that is seeing widespread adoption. This dual focus is intended to support companies dealing with risks such as unauthorised data exfiltration, agent actions beyond the intended scope, and MCP vulnerabilities. At the forefront of research on new AI risks, Invariant Labs has played a key role in identifying and naming novel attack types, including terms like "tool poisoning" and "MCP rug pulls," which are already being observed in live deployments.

"With Invariant Labs, we're accelerating our ability to identify, prioritize, and neutralize the next generation of Agentic AI threats before they reach production," said Manoj Nair, Chief Innovation Officer at Snyk. "This acquisition also underscores Snyk's proactive commitment to supporting security teams navigating the urgent and unfamiliar risks of AI-native software, which is rapidly becoming the new software development default."

Technology and research

Invariant Labs is known for developing Guardrails, a transparent security layer for LLMs and AI agents. Guardrails enables developers to implement security controls, observe system behaviours in context, and enforce policies based on a combination of static and runtime data, human review, and incident logs. These features are designed to help developers scan for vulnerabilities and monitor agent compliance with security standards.

Marc Fischer, PhD, Chief Executive Officer and co-founder of Invariant Labs, commented on the direction of the merged teams: "We've spent years researching and building the frameworks necessary to secure the AI-native future. We must understand that agent-based AI systems are a powerful new class of software, especially autonomous ones, and demand greater oversight and stronger security guarantees than traditional approaches. We're excited to join the Snyk team, as this mindset is deeply aligned with their mission."

The collaboration is expected to further embed Invariant Labs' research-driven approach into Snyk's product offerings, supporting organisations with real-time defences against current and emerging AI threats. As AI adoption continues to rise, this acquisition highlights steps being taken within the cybersecurity sector to address vulnerabilities inherent to autonomous, agent-based, and AI-native systems already in use across industry.
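
To make the guardrail concept concrete, the sketch below shows a toy runtime policy check applied to agent tool calls. This is not Invariant Labs' Guardrails API; the tool names, deny rules, and URL pattern are hypothetical, and a real system would combine static analysis, runtime context, human review, and incident logs as described above.

    # Toy guardrail: block denied tools and flag possible exfiltration in
    # agent tool-call arguments. All names and rules here are hypothetical.
    import re
    from dataclasses import dataclass

    @dataclass
    class ToolCall:
        tool: str
        arguments: dict

    DENIED_TOOLS = {"shell_exec"}  # tools the agent may never invoke
    # Allow only an assumed internal host; other hosts are treated as exfiltration.
    EXFIL = re.compile(r"https?://(?!api\.internal\.example)")

    def check(call: ToolCall) -> None:
        """Raise if the tool call violates policy; otherwise let it proceed."""
        if call.tool in DENIED_TOOLS:
            raise PermissionError(f"tool {call.tool!r} is blocked by policy")
        for value in call.arguments.values():
            if isinstance(value, str) and EXFIL.search(value):
                raise PermissionError("possible exfiltration to an unapproved host")

    check(ToolCall("http_get", {"url": "https://api.internal.example/v1/data"}))  # allowed
    # check(ToolCall("http_get", {"url": "https://attacker.example/upload"}))     # would raise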

Mirantis unveils architecture to speed & secure AI deployment
Techday NZ | 19-06-2025

Mirantis has released a comprehensive reference architecture to support IT infrastructure for AI workloads, aiming to assist enterprises in deploying AI systems quickly and securely. The Mirantis AI Factory Reference Architecture is based on the company's k0rdent AI platform and designed to offer a composable, scalable, and secure environment for artificial intelligence and machine learning (ML) workloads. According to Mirantis, the solution provides criteria for building, operating, and optimising AI and ML infrastructure at scale, and can be operational within days of hardware installation.

The architecture leverages templated and declarative approaches provided by k0rdent AI, which Mirantis claims enables rapid provisioning of required resources. This, the company states, leads to accelerated prototyping, model iteration, and deployment—thereby shortening the overall AI development cycle. The platform features curated integrations, accessible via the k0rdent Catalog, for various AI and ML tools, observability frameworks, continuous integration and delivery, and security, all while adhering to open standards.

Mirantis is positioning the reference architecture as a response to rising demand for specialised compute resources, such as GPUs and CPUs, crucial for the execution of complex AI models. "We've built and shared the reference architecture to help enterprises and service providers efficiently deploy and manage large-scale multi-tenant sovereign infrastructure solutions for AI and ML workloads," said Shaun O'Meara, chief technology officer, Mirantis. "This is in response to the significant increase in the need for specialized resources (GPU and CPU) to run AI models while providing a good user experience for developers and data scientists who don't want to learn infrastructure."

The architecture addresses several high-performance computing challenges, including Remote Direct Memory Access (RDMA) networking, GPU allocation and slicing, advanced scheduling, performance tuning, and Kubernetes scaling. Additionally, it supports integration with multiple AI platform services, such as Gcore Everywhere Inference and the NVIDIA AI Enterprise software ecosystem.

In contrast to typical cloud-native workloads, which are optimised for scale-out and multi-core environments, AI tasks often require the aggregation of multiple GPU servers into a single high-performance computing instance. This shift demands RDMA and ultra-high-performance networking, areas which the Mirantis reference architecture is designed to accommodate.

The reference architecture uses Kubernetes and is adaptable to various AI workload types, including training, fine-tuning, and inference, across a range of environments. These include dedicated or shared servers, virtualised settings using KubeVirt or OpenStack, public cloud, hybrid or multi-cloud configurations, and edge locations. The solution addresses the specific needs of AI workloads, such as high-performance storage and high-speed networking technologies, including Ethernet, InfiniBand, NVLink, NVSwitch, and CXL, to manage the movement of large data sets inherent to AI applications.

Mirantis has identified and aimed to resolve several challenges in AI infrastructure, such as:
Time-intensive fine-tuning and configuration compared to traditional compute systems
Support for hard multi-tenancy to ensure security, isolation, resource allocation, and contention management
Maintaining data sovereignty for data-driven AI and ML workloads, particularly where models contain proprietary information
Ensuring compliance with varied regional and regulatory standards
Managing distributed, large-scale infrastructure, which is common in edge deployments
Effective resource sharing, particularly of high-demand compute components such as GPUs
Enabling accessibility for users such as data scientists and developers who may not have specific IT infrastructure expertise

The composable nature of the Mirantis AI Factory Reference Architecture allows users to assemble infrastructure using reusable templates across compute, storage, GPU, and networking components, which can then be tailored to specific AI use cases. The architecture includes support for a variety of hardware accelerators, including products from NVIDIA, AMD, and Intel.

Mirantis reports that its AI Factory Reference Architecture has been developed with the goal of supporting the unique operational requirements of enterprises seeking scalable, sovereign AI infrastructures, especially where control over data and regulatory compliance are paramount. The framework is intended as a guideline to streamline the deployment and ongoing management of these environments, offering modularity and integration with open standard tools and platforms.
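
As a loose illustration of the composable, templated approach described above, the sketch below assembles reusable infrastructure templates into cluster specifications. This is illustrative Python only: k0rdent AI expresses its templates as declarative Kubernetes objects, and every name and field here is an assumption.

    # Conceptual sketch: compose reusable templates into AI cluster specs.
    # Purely illustrative; not the k0rdent template format.
    from dataclasses import dataclass, field

    @dataclass
    class GPUTemplate:
        accelerator: str        # e.g. "nvidia-h100" (assumed label)
        count_per_node: int

    @dataclass
    class NetworkTemplate:
        fabric: str             # e.g. "infiniband" or "ethernet-rdma"

    @dataclass
    class AIClusterSpec:
        name: str
        nodes: int
        gpu: GPUTemplate
        network: NetworkTemplate
        labels: dict = field(default_factory=dict)

    # Reuse the same templates for different use cases (training vs inference).
    h100 = GPUTemplate("nvidia-h100", 8)
    ib = NetworkTemplate("infiniband")
    training = AIClusterSpec("llm-train", nodes=16, gpu=h100, network=ib)
    inference = AIClusterSpec("llm-serve", nodes=4, gpu=h100, network=ib)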

iFLYTEK wins CNCF award for AI model training with Volcano
Techday NZ | 10-06-2025

iFLYTEK has been named the winner of the Cloud Native Computing Foundation's End User Case Study Contest for advancements in scalable artificial intelligence infrastructure using the Volcano project. The selection recognises iFLYTEK's deployment of Volcano to address operational inefficiencies and resource management issues that arose as the company expanded its AI workloads.

iFLYTEK, which specialises in speech and language artificial intelligence, reported experiencing underutilised GPUs, increasingly complex workflows, and competition among teams for resources as its computing demands expanded. These problems resulted in slower development progress and placed additional strain on infrastructure assets.

With the implementation of Volcano, iFLYTEK introduced elastic scheduling, directed acyclic graph (DAG)-based workflows, and multi-tenant isolation into its AI model training operations. This transition allowed the business to improve the efficiency of its infrastructure and simplify the management of large-scale training projects. Key operational improvements cited include a significant increase in resource utilisation and reductions in system disruptions.

DongJiang, Senior Platform Architect at iFLYTEK, said, "Before Volcano, coordinating training under large-scale GPU clusters across teams meant constant firefighting, from resource bottlenecks and job failures to debugging tangled training pipelines. Volcano gave us the flexibility and control to scale AI training reliably and efficiently. We're honoured to have our work recognized by CNCF, and we're excited to share our journey with the broader community at KubeCon + CloudNativeCon China."

Volcano is a cloud native batch system built on Kubernetes and is designed to support performance-focused workloads such as artificial intelligence and machine learning training, big data processing, and scientific computing. The platform's features include job orchestration, resource fairness, and queue management, intended to maximise the efficient management of distributed workloads. Volcano was first accepted into the CNCF Sandbox in 2020 and achieved Incubating maturity level by 2022, reflecting increasing adoption for compute-intensive operations.

iFLYTEK's engineering team cited the need for an infrastructure that could adapt to the rising scale and complexity of AI model training. Their objectives were to improve allocation of computing resources, manage multi-stage workflows efficiently, and limit disruptions to jobs while ensuring equitable resource access among multiple internal teams.

The adoption of Volcano yielded several measurable outcomes for iFLYTEK's AI infrastructure. The company reported a 40% increase in GPU utilisation, contributing to lower infrastructure costs and reduced idle periods. Additionally, the company experienced a 70% faster recovery rate from training job failures, which contributed to more consistent and uninterrupted AI development. The speed of hyperparameter searches—a process integral to AI model optimisation—was accelerated by 50%, allowing the company's teams to test and refine models more swiftly.

Chris Aniszczyk, Chief Technology Officer at CNCF, said, "iFLYTEK's case study shows how open source can solve complex, high-stakes challenges at scale. By using Volcano to boost GPU efficiency and streamline training workflows, they've cut costs, sped up development, and built a more reliable AI platform on top of Kubernetes, which is essential for any organization striving to lead in AI."

As artificial intelligence workloads become increasingly complex and reliant on large-scale compute resources, the use of tools like Volcano has expanded among organisations seeking more effective operational strategies. iFLYTEK's experience with the platform will be the subject of a presentation at KubeCon + CloudNativeCon China, where company representatives will outline approaches to managing distributed model training within Kubernetes-based environments. iFLYTEK will present its case study, titled "Scaling Large Model Training in Kubernetes Clusters with Volcano," sharing technical and practical insights with participants seeking to optimise large-scale artificial intelligence training infrastructure.
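
For readers unfamiliar with Volcano, the sketch below shows roughly how a gang-scheduled training job can be submitted from Python to a cluster where Volcano is installed, using the official Kubernetes client. Volcano's Job CRD is served at batch.volcano.sh/v1alpha1; the queue name, container image, and job layout are assumptions for illustration.

    # Sketch: submit a gang-scheduled Volcano Job (assumed queue and image).
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    job = {
        "apiVersion": "batch.volcano.sh/v1alpha1",
        "kind": "Job",
        "metadata": {"name": "train-demo", "namespace": "default"},
        "spec": {
            "schedulerName": "volcano",
            "minAvailable": 2,            # gang scheduling: start only when both pods fit
            "queue": "ai-training",       # hypothetical tenant queue
            "tasks": [{
                "name": "worker",
                "replicas": 2,
                "template": {"spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "trainer",
                        "image": "example.com/trainer:latest",  # hypothetical image
                        "resources": {"limits": {"nvidia.com/gpu": 1}},
                    }],
                }},
            }],
        },
    }

    api.create_namespaced_custom_object(
        group="batch.volcano.sh", version="v1alpha1",
        namespace="default", plural="jobs", body=job,
    )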
