
AI deployment creates new cybersecurity risks, warns report
The report details a range of security challenges faced by organisations as they deploy AI technologies, including vulnerabilities in key components, accidental internet exposure, weaknesses in open-source software, and issues with container-based systems.
Critical vulnerabilities
The research identifies vulnerabilities and exploits in critical parts of AI infrastructure. Many AI applications rely on a blend of specialised software components, some of which are susceptible to the same flaws as traditional software. The report notes the discovery of zero-day vulnerabilities in components such as ChromaDB, Redis, NVIDIA Triton, and NVIDIA Container Toolkit, posing significant risks if left unpatched.
The report also draws attention to the exposure of servers hosting AI infrastructure to the public internet, often the result of rapid deployment and inadequate security measures. According to Trend Micro, more than 200 ChromaDB servers, 2,000 Redis servers, and over 10,000 Ollama servers have been found exposed without authentication, leaving them open to malicious probing.
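The kind of probing described here is straightforward to reproduce defensively against infrastructure you are authorised to test. The sketch below is a minimal illustration rather than a substitute for a proper scanner: it checks whether a Redis, ChromaDB, or Ollama endpoint answers on its default port without credentials. The hostname, ports, and HTTP paths are assumptions based on typical default configurations and should be adjusted to match the actual deployment.

```python
# Minimal exposure check for common AI infrastructure services.
# Assumptions: default ports (Redis 6379, ChromaDB 8000, Ollama 11434) and
# typical health/listing endpoints; adjust for the actual deployment.
import socket
import requests

def redis_is_open(host: str, port: int = 6379, timeout: float = 2.0) -> bool:
    """Return True if Redis answers PING without authentication."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"PING\r\n")
            return s.recv(64).startswith(b"+PONG")  # NOAUTH reply means auth is enforced
    except OSError:
        return False

def http_is_open(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP endpoint responds 200 with no credentials supplied."""
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    host = "192.0.2.10"  # placeholder address for illustration
    findings = {
        "redis (6379)": redis_is_open(host),
        "chromadb heartbeat (8000)": http_is_open(f"http://{host}:8000/api/v1/heartbeat"),
        "ollama model list (11434)": http_is_open(f"http://{host}:11434/api/tags"),
    }
    for service, exposed in findings.items():
        print(f"{service}: {'reachable without auth' if exposed else 'not reachable / protected'}")
```

A service that answers these unauthenticated requests from the public internet is, in effect, in the same position as the exposed servers the report describes.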
Open-source and container concerns
The reliance on open-source components in AI frameworks is another source of security risk. Vulnerabilities in these components may go unnoticed once they are integrated into production systems, as demonstrated at the recent Pwn2Own Berlin event, where researchers identified an exploit in the Redis vector database attributed to an outdated Lua component.
Continuing the theme of infrastructure risk, the report discusses the widespread use of containers in AI deployments. Containers, while commonly used to improve deployment efficiency, are subject to the same security issues that affect broader cloud and container environments. Pwn2Own researchers also discovered an exploit targeting the NVIDIA Container Toolkit, raising concerns about container management practices in the deployment of AI technologies.
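Container misconfiguration is often easier to detect than the underlying toolkit flaws. As a rough illustration of the kind of hygiene check this implies, the sketch below uses the Docker SDK for Python to flag running containers that are privileged or that bind-mount the host's Docker socket. It is an assumption-laden example of a configuration audit, not a description of the exploit found at Pwn2Own.

```python
# Flag running containers with configurations that widen the attack surface:
# privileged mode or a bind-mounted Docker socket. Requires the "docker"
# Python package and access to the local Docker daemon.
import docker

def risky_containers(client: docker.DockerClient):
    """Yield (name, reasons) for containers with risky host configuration."""
    for container in client.containers.list():
        host_config = container.attrs.get("HostConfig", {})
        reasons = []
        if host_config.get("Privileged"):
            reasons.append("runs in privileged mode")
        for bind in host_config.get("Binds") or []:
            if bind.startswith("/var/run/docker.sock"):
                reasons.append("mounts the host Docker socket")
        if reasons:
            yield container.name, reasons

if __name__ == "__main__":
    client = docker.from_env()
    for name, reasons in risky_containers(client):
        print(f"{name}: {', '.join(reasons)}")
```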
Expert perspectives
"AI may represent the opportunity of the century for ANZ businesses. But those rushing in too fast without taking adequate security precautions may end up causing more harm than good. As our report reveals, too much global AI infrastructure is already being built from unsecured and/or unpatched components, creating an open door for threat actors."
This statement from Mick McCluney, Field CTO for ANZ at Trend Micro, underscores the importance of balancing innovation in AI with a robust approach to cybersecurity.
Stuart MacLellan, Chief Technology Officer at NHS SLAM, also shared perspectives on the organisational implications of these findings: "There are still lots of questions around AI models and how they could and should be used. We now get much more information than we ever did about the visibility of devices and what applications are being used. It's interesting to collate that data and get dynamic, risk-based alerts on people and what they're doing depending on policies and processes. That's going to really empower the decisions that are made organisationally around certain products."
Recommended actions
The report sets out several practical steps organisations can take to mitigate risk. These include enhanced patch management, regular vulnerability scanning, maintaining a comprehensive inventory of all software components, and adopting best practices for container management. The report also advises that configuration checks should be undertaken to ensure that critical AI infrastructure is not inadvertently exposed to the internet.
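A comprehensive component inventory only helps if it is checked against the versions that actually contain fixes. The sketch below shows one way this could work in principle, comparing an inventory of deployed components against minimum patched versions using the packaging library. The component names and version numbers are illustrative placeholders, not findings from the report.

```python
# Compare a software inventory against minimum patched versions.
# All component names and version numbers below are illustrative placeholders,
# not vulnerability data from the report.
from packaging.version import Version

# Hypothetical minimum versions an organisation has decided are acceptable.
minimum_patched = {
    "chromadb": "0.5.0",
    "redis": "7.4.0",
    "tritonserver": "24.08",
}

# Hypothetical inventory gathered from the environment.
inventory = {
    "chromadb": "0.4.22",
    "redis": "7.4.1",
    "tritonserver": "23.10",
}

def outdated(deployed_versions: dict, baseline: dict):
    """Yield (component, deployed, required) for components below the baseline."""
    for component, deployed in deployed_versions.items():
        required = baseline.get(component)
        if required and Version(deployed) < Version(required):
            yield component, deployed, required

if __name__ == "__main__":
    for component, deployed, required in outdated(inventory, minimum_patched):
        print(f"{component}: {deployed} is below the patched baseline {required}")
```

In practice the inventory would be generated automatically, but the comparison step is the part that turns a list of components into an actionable patching queue.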
The findings highlight the need for the developer community and users of AI to better balance security with speed to market. Trend Micro recommends that organisations exercise due diligence, particularly as the adoption of AI continues to rise across various sectors.