GenAI adoption surges in healthcare but security hurdles remain

Techday NZ · 14 hours ago
Ninety-nine percent of healthcare organisations are now making use of generative artificial intelligence (GenAI), according to new global research from Nutanix, but almost all say they face challenges in data security and scaling these technologies to production.
The findings are drawn from the seventh annual Healthcare Enterprise Cloud Index (ECI) report by Nutanix, which surveyed 1,500 IT and engineering decision-makers across multiple industries and regions, including the healthcare sector. The research highlights both rapid uptake of GenAI in healthcare settings and significant ongoing barriers around infrastructure and privacy.
GenAI use widespread, but risks loom
Among healthcare organisations surveyed, a striking 99% said they are currently leveraging GenAI applications or workloads, such as AI-powered chatbots, code co-pilots and tools for clinical development automation. This sector now leads all other industries in GenAI adoption, the report found.
However, nearly as many respondents—96%—admit their existing data security and governance are not robust enough to support GenAI at scale. Additionally, 99% say that moving from pilot or development to production remains a serious challenge, with integration into existing IT systems cited as the most significant barrier to wider deployment.

"In healthcare, every decision we make has a direct impact on patient outcomes - including how we evolve our technology stack," said Jon Edwards, Director IS Infrastructure Engineering at Legacy Health. "We took a close look at how to integrate GenAI responsibly, and that meant investing in infrastructure that supports long-term innovation without compromising on data privacy or security. We're committed to modernising our systems to deliver better care, drive efficiency, and uphold the trust that patients place in us."
Patient data privacy and security concerns underpin much of this hesitation. The number one challenge flagged by healthcare leaders is integrating GenAI with legacy IT infrastructure (79%), followed by persistent data silos (65%) and obstacles in developing cloud-native applications and containers (59%).
Infrastructure modernisation lags adoption
The report stresses that while GenAI uptake is high, inadequate IT modernisation could impede progress. Scaling modern applications such as GenAI requires updated infrastructure solutions capable of handling complex data security, integrity, and resilience demands. Respondents overwhelmingly agree more must be done in this area.
Key findings also indicate that improving foundational data security and governance will remain an ongoing priority. Ninety-six percent agree their organisations could still improve the security of their GenAI models and applications, while fears around using large language models (LLMs)—especially with sensitive healthcare data—are prevalent.
Scott Ragsdale, Senior Director, Sales - Healthcare & SLED at Nutanix, described the recent surge in GenAI adoption as a departure from healthcare's traditional technology adoption timeline. "While healthcare has typically been slower to adopt new technologies, we've seen a significant uptick in the adoption of GenAI, much of this likely due to the ease of access to GenAI applications and tools. Even with such large adoption rates by organisations, there continue to be concerns given the importance of protecting healthcare data. Although all organisations surveyed are using GenAI in some capacity, we'll likely see more widespread adoption within those organisations as concerns around privacy and security are resolved."
Nearly all healthcare respondents (99%) acknowledge difficulties in moving GenAI workloads to production, driven chiefly by the challenge of integrating with existing systems. This indicates that, despite wide experimentation and early deployments, many organisations remain cautious about full-scale rollouts.
Containers and cloud-native trends
In addition to GenAI, the survey found a rapid expansion in the use of application containerisation and Kubernetes deployments across healthcare. Ninety-nine percent of respondents said they are at least in the process of containerising applications, and 92% note distinct benefits from cloud-native application adoption, such as improved agility and security.
Container-based infrastructure is viewed as crucial for enabling secure, seamless access to both patient and business data over hybrid and multicloud environments. As a result, many healthcare IT decision-makers are expected to prioritise modern deployment strategies involving containers for both new and existing workloads.
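To make the containerisation trend concrete, below is a minimal sketch of what deploying a containerised workload to Kubernetes can look like, using the official Python client. This is illustrative only: the application name, image, replica count and namespace are hypothetical placeholders, not details drawn from the Nutanix report.

from kubernetes import client, config

# Load credentials from the local kubeconfig; code running inside a cluster
# would call config.load_incluster_config() instead.
config.load_kube_config()

labels = {"app": "patient-portal"}  # hypothetical workload name

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="patient-portal"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # run two copies for basic resilience
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="portal",
                    image="registry.example.com/patient-portal:1.0",  # hypothetical image
                    ports=[client.V1ContainerPort(container_port=8080)],
                ),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Declaring workloads this way, rather than provisioning servers by hand, is what lets the same application definition run across the hybrid and multicloud environments the report describes.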
Respondents continue to see GenAI as a path towards improved productivity, automation and efficiency, with major use cases involving customer support chatbots, experience solutions, and code generation tools. Yet the sector is still grappling with the challenges of scale, security, and complexity inherent in these new technologies.
The Nutanix study was conducted by Vanson Bourne in Autumn 2024 and included perspectives from across the Americas, EMEA and Asia-Pacific-Japan.

Related Articles

Cybercriminals use GenAI, v0.dev to launch advanced phishing

Techday NZ · 4 hours ago

Research from Okta Threat Intelligence has found that cybercriminals are leveraging Generative Artificial Intelligence (GenAI), specifically the v0.dev tool from Vercel, to manufacture sophisticated phishing websites swiftly and at scale. Okta's researchers have observed threat actors utilising the platform to create convincing replicas of sign-in pages for a range of prominent brands. According to the team's findings, attackers can build a functional phishing site by inputting a short text prompt, thereby substantially reducing the technical barrier for launching attacks.

New methods

The research revealed that v0.dev, which is intended to help developers create web interfaces through natural language instructions, is also allowing adversaries to quickly reproduce the design and branding of authentic login sites. In one case, Okta noted that the login page of one of its own customers had been imitated using this AI-powered software. Phishing sites created using v0.dev often also hosted visual assets such as company logos on Vercel's own infrastructure. Okta Threat Intelligence explained that consolidating these resources on a trusted platform is a deliberate technique by attackers. By doing so, they aim to avoid typical detection methods that monitor for assets served from known malicious or unrelated infrastructure.

Vercel responded to these findings by restricting access to the suspect sites and working with Okta to improve reporting processes for additional phishing-related infrastructure. The observed activity confirms that today's threat actors are actively experimenting with and weaponising leading GenAI tools to streamline and enhance their phishing capabilities. The use of a platform like Vercel's allows emerging threat actors to rapidly produce high-quality, deceptive phishing pages, increasing the speed and scale of their operations.

Wider proliferation

The report also noted the existence of several public GitHub repositories that replicate the application, along with DIY guides enabling others to build their own generative phishing tools. According to Okta, this widespread availability is making advanced phishing tactics accessible to a broader cohort of cybercriminals, effectively democratising the creation of fraudulent web infrastructure. Further monitoring revealed that attackers have used the Vercel platform to host phishing sites imitating not just Okta customers, but also brands like Microsoft 365 and various cryptocurrency companies. Security advisories related to these findings have been made available to Okta's customers.

Implications for security

Okta Threat Intelligence underlined that this represents a significant change in the phishing threat landscape, given the increasingly realistic appearance of sites generated by artificial intelligence. The group stressed that safeguarding systems using traditional indicators of poor quality or imperfect design is now insufficient for deterrence. Organisations can no longer rely on teaching users how to identify suspicious phishing sites based on imperfect imitation of legitimate services. The only reliable defence is to cryptographically bind a user's authenticator to the legitimate site they enrolled in. This is the technique that powers Okta FastPass, the passwordless method built into Okta Verify. When phishing resistance is enforced in policy, the authenticator will not allow the user to sign in to any resource but the origin (domain) established during enrollment.
Put simply, the user cannot be tricked into handing over their credentials to a phishing site. To address these risks, Okta Threat Intelligence has recommended several mitigation strategies: enforcing phishing-resistant authentication policies and prioritising the deactivation of less secure factors, restricting access to trusted devices, requiring secondary authentication when anomalous user behaviour is detected, and updating security awareness training to account for AI-driven threats. The research reflects the rapid operationalisation of machine learning tools in malicious campaigns and highlights the need for continuous adaptation by organisations and their cybersecurity teams in response to evolving threats.
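Okta FastPass itself is proprietary, but the origin binding described above is the same principle that underpins the open WebAuthn standard. As a rough illustration, here is a minimal sketch of the relying-party side of that check in Python; the field names follow the W3C WebAuthn specification, while the domain values and the surrounding server plumbing are hypothetical.

import hashlib
import json

EXPECTED_ORIGIN = "https://login.example.com"  # hypothetical enrolled origin
EXPECTED_RP_ID = "login.example.com"           # hypothetical relying-party ID

def verify_assertion(client_data_json: bytes, authenticator_data: bytes) -> bool:
    """Accept a sign-in assertion only if it was made on the enrolled origin."""
    client_data = json.loads(client_data_json)

    # The browser, not the user, reports the origin a credential is used on,
    # so a pixel-perfect phishing clone on another domain fails this check.
    if client_data.get("type") != "webauthn.get":
        return False
    if client_data.get("origin") != EXPECTED_ORIGIN:
        return False

    # The first 32 bytes of authenticatorData are the SHA-256 hash of the
    # relying-party ID the credential was scoped to at enrollment.
    if authenticator_data[:32] != hashlib.sha256(EXPECTED_RP_ID.encode()).digest():
        return False

    # A full implementation would also verify the challenge, flags, signature
    # counter and the assertion signature against the stored public key.
    return True

Because the comparison is made against values fixed at enrollment, no amount of visual fidelity in a cloned login page can cause the credential to be released to an attacker's domain.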

Skills AI-driven shops want to see in developers

Techday NZ · 14 hours ago

Architectural and system design thinking (problem-solving and critical thinking)

As AI becomes more capable of generating code, developers should be both skilled code writers and strategic architects who focus on upfront design and system-level thinking. System architecture skills have become significantly more valuable because AI tools require proper structure, context, and guidance to generate quality code that delivers business value. Effective AI interaction, the critical validation of AI-generated outputs, and the debugging of AI-specific error patterns necessitate strong, continuously updated technical and coding foundations. Senior engineers now spend their time defining how systems connect to subsystems, establishing business logic, and building high-context environments for AI tools. Developers become orchestrators of the code, versus only the writers of the code—doing analysis and planning on the front end, then reviewing outputs to ensure they don't create technical debt. Well-engineered prompts mirror systems architecture documentation, containing clear functionality statements, domain expertise, and explicit constraints that produce predictable AI outputs.

AI communication and context management (communication and collaboration)

Working effectively with AI requires sophisticated communication skills that dramatically influence output quality. Developers must become proficient in the art of framing problems, providing appropriate context, and structuring interactions with AI systems. This skill becomes critical as teams transition from using AI tools to orchestrating complex AI-driven workflows across the development lifecycle. Modern prompt engineering focuses on designing process-oriented thinking that guides AI through complex tasks by defining clear goals, establishing constraints, and creating effective interaction rules. Developers must understand how to provide sufficient context without overwhelming AI systems and learn to iterate on feedback across multiple cycles. As AI agents increasingly participate in software development, teams must architect these interactions strategically, breaking complex problems into manageable chunks and building contextual workflows that align with business objectives.

Ensuring quality & security (adaptability and continuous learning)

As AI takes a more proactive role in software development, companies should develop specialised QA processes tailored to the unique error patterns and risks of AI-generated code. This should include validating AI reasoning processes, employing adversarial testing for both prompts and code, leveraging formal methods for critical components where appropriate, and implementing advanced, defence-in-depth prompt security measures. Organisations are responding by implementing "prompt security" practices to prevent injection attacks and establishing specialised review processes for AI-generated code. They're creating adversarial testing frameworks that deliberately challenge AI outputs with unusual inputs while maintaining human oversight at critical decision points. This represents a fundamental evolution from traditional debugging approaches to validating AI reasoning processes and ensuring business logic alignment—a necessary adaptation as AI becomes more autonomous in software development workflows.
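As a rough illustration of the point that well-engineered prompts resemble architecture documentation, the sketch below assembles a prompt from an explicit functionality statement, domain context, and constraints. The template structure, task and constraint wording are hypothetical examples rather than any particular organisation's practice.

def build_prompt(functionality: str, domain_context: str, constraints: list[str]) -> str:
    """Assemble a specification-style prompt with explicit constraints."""
    lines = [
        "## Functionality",
        functionality,
        "",
        "## Domain context",
        domain_context,
        "",
        "## Explicit constraints",
        *[f"- {c}" for c in constraints],
        "",
        "Produce only code that satisfies every constraint above. If a",
        "constraint cannot be met, say so instead of guessing.",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    functionality="Parse CSV exports of support tickets into a weekly summary report.",
    domain_context="Exports come from several ticketing tools, so column names vary.",
    constraints=[
        "Use only the standard library.",
        "Reject, rather than silently repair, structurally invalid rows.",
        "Log nothing that could contain customer identifiers.",
    ],
)

Stating constraints up front, as the article suggests, makes the model's output more predictable and gives reviewers a concrete checklist against which to validate AI-generated code.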
