
How AI is driving the convergence of networking and security
The impact of AI cannot be overestimated. It is acting like a new kind of gravity, pulling together the previously distinct worlds of networking and security, while reshaping traffic flows across every network layer. We have become accustomed to the changes that AI is bringing about within the data center, and the overheads it is placing on data center infrastructure. But it is now apparent that its influence is being felt much more widely, reshaping requirements across the entire ICT landscape, from the WAN down to the campus.
AI is turning security on its head, says Mauricio Sanchez, Senior Director of Enterprise Security and Networking Research at independent analyst firm Dell'Oro Group: "AI has been transforming the hacker community, enabling even mediocre players to become super hackers," he believes. "The positive aspects of AI are being turned on their head and used for evil."
Add in the perennial demand of enterprises for five nines of uptime, and a cogent argument is emerging for a convergence of networking and security: "Network operations and security operations can simply no longer exist in parallel silos," notes Sanchez.
Enterprises are all at different stages on this journey, working out how to unite AI with converged networking and security at their own pace. Some are simply applying AI to legacy network and security operations. "They are finding, as they try to move on from this point, that they are getting into what I would call AI-strained infrastructure," says Sanchez. "This is where administrators are sweating bullets, not really empowered or able to react appropriately."
Some progressive companies are in a better place than this when it comes to embracing AI and bringing it to bear on both networking and security, according to Dell'Oro's findings, moving towards what Sanchez describes as the AI augmented-network stage. "Beyond that is a level where both infrastructural change and operational change lead to an AI-empowered network that is being run cleanly and efficiently, able to use AI and serve AI applications well," he concludes. "That is the objective over the next five years for many enterprises."
New challenges, new solutions
To further explore these themes, Sanchez was invited by Unified SASE as a Service provider Aryaka to join a 'fireside chat'. Joining Sanchez were Renuka Nadkarni, Chief Product Officer at Aryaka, and Kevin Deierling, SVP of Marketing, Networking, at NVIDIA.
All three agree that AI is changing cybersecurity in many different ways. "AI is using a ton of data, and that creates some opacity," observes NVIDIA's Deierling. "It's hard to see what's happening when AIs are talking to other AIs, and that creates new challenges."
Agility and flexibility are essential responses: "The amount of data that's being created by AI is massive, and the networking performance needed is incredible," he says. "We're shipping 400 gigabit per second networks today, moving to 800 gigabits per second, with 1.6 terabits right around the corner. You can't just statically create a set of rules and hope for the best. It's about being dynamic and responsive in the face of all these new challenges."
Nadkarni of Aryaka believes it all comes down to an age-old problem: reconciling performance with security. "Back in the day, you had separate networking and security teams making separate decisions," she points out. "The security people were often getting in the way of the business, with frequent conflict between the two. Now our customers are migrating heavily towards a converged networking and security play. And it's not easy, because the whole industry has been divided into networking vendors and security vendors. The whole unified SASE as a service that we are trying to bring to the table was architected from the start to bring things together."
An added challenge, according to Nadkarni, is that AI introduces a certain amount of non-deterministic behaviour, on both the networking and the security side: "Customer network architecture and network design used to be about a point-to-point link," she says. "It was deterministic, because people would typically buy from service providers in increments of 10 Mbps or 100 Mbps, defined as between offices and data centers. But now users are everywhere. Applications are hosted in public clouds, accessed via SaaS. We're seeing a lot of AI applications coming in as SaaS. Traffic patterns have changed drastically, but the need for security is something that hasn't changed."
AI has certainly spelled the end for the static workloads of yesterday, when it was easy for networking and security managers to keep tabs on what was happening. In an era of agentic AI workflows, where AIs are talking to AIs and then interacting with humans, the pace has picked up and complexity can be overwhelming, often at the expense of security.
Deierling says that to assist here, NVIDIA has developed a platform called Morpheus that characterizes behaviours: "It characterizes devices as well as people, and we developed a digital fingerprint using AI to do that," he explains. "We stream the data in real time to these powerful AI engines that can detect anomalous behaviours. If suddenly a human being is firing passwords at the speed of a computer, we can detect that in real time and actually isolate that traffic. We accelerate things with our networking hardware, and we stream telemetry data so we can perform AI very quickly. And then we provide those solutions to partners to build something that customers can use."
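The real-time detection Deierling describes can be illustrated with a much simpler idea: watching the rate of behavioural events per source and flagging anything faster than a human could plausibly produce. This is a minimal sketch of that principle only, not NVIDIA Morpheus or its digital-fingerprinting pipeline; the threshold and window values are illustrative assumptions.

```python
from collections import deque

# Assumed thresholds for illustration: a human rarely sustains
# more than ~2 login attempts per second over a 5-second window.
HUMAN_MAX_ATTEMPTS_PER_SEC = 2.0
WINDOW_SECONDS = 5.0

class LoginRateDetector:
    """Flag a source whose authentication-attempt rate exceeds what a
    human could plausibly produce (machine-speed password guessing)."""

    def __init__(self):
        self.events = {}  # source_id -> deque of event timestamps

    def record_attempt(self, source_id, timestamp):
        window = self.events.setdefault(source_id, deque())
        window.append(timestamp)
        # Discard events that have fallen out of the sliding window.
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()
        rate = len(window) / WINDOW_SECONDS
        return rate > HUMAN_MAX_ATTEMPTS_PER_SEC  # True -> alert / isolate

detector = LoginRateDetector()
# Machine-speed burst: 50 attempts inside a single second.
alerts = [detector.record_attempt("host-42", t / 50.0) for t in range(50)]
print(any(alerts))  # the burst is flagged as anomalous
```

A production system like the one Deierling describes would stream telemetry into GPU-accelerated models rather than a fixed threshold, but the decision being made is the same: compare observed behaviour against a learned fingerprint of what a human or device normally does.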
Nadkarni says Morpheus works well with Aryaka's platform: "In most security implementations, you take a subset of your traffic and share it with a security vendor," she says. "But if you already knew what traffic needed to be processed you wouldn't need to do that. Because of Morpheus, we have the ability to process all traffic through our system, and we don't have to make choices."
She invites a comparison between the fast-evolving AI we see today and the recent emergence of DevSecOps: "It touches so many aspects of a customer's activities. Many of our customers are telling us they are creating an AI adoption team. We advise them to break down the problem into smaller pieces. Identify all the stakeholders who are accountable for it. For example, who owns the data? Security of data is really important when it comes to AI."
AI is here, it's massive and it's going to transform every industry, concludes Deierling: "Every enterprise should focus on their core expertise, and use AI to accelerate that," he advises. "AI is fast, and it uses huge amounts of data. It's a different type of challenge than what we've seen, but I agree with Renuka that it's an evolution of DevSecOps. Call it AISecOps. You need to protect models, you need to protect data, you need to protect users."
Nadkarni believes we are in 'very exciting times': "We've already seen adoption of cloud, and of different SaaS applications," she says. "AI will have a bigger impact than that. But as Kevin was saying, enterprises should focus on the most important things for their business and then leverage the latest AI technologies that are available, as well as make sure that their network is modernized. It's the industry's job to focus on providing the best technology, offering the best solutions, making it easier to adopt AI and go through the changes that are coming. It's a privilege to be around at this time, seeing all the benefits of this new technology as they unfold."
Sanchez from Dell'Oro Group concludes with the advice that all enterprises go back to basics, focusing on things like visibility: "In order to do that, they need to have the right infrastructure in place, the right foundational elements, because you can't build a strong house on weak foundations," he claims. "Don't try to figure this out for yourself. There are smart people, from companies like NVIDIA and Aryaka, that can help on this journey to make sure that you don't stumble."