F5 2025 State Of Application Strategy Report Reveals Talk Becomes Action As AI Gets To Work

Scoop – 08-05-2025

Press Release – F5
F5 Report Highlights AI-Driven Transformation Amid Operational Complexity
96 per cent of surveyed IT decision-makers have deployed AI models, up from a quarter in 2023
SYDNEY, AUSTRALIA, May 8, 2025 – IT leaders are increasingly trusting AI with business-critical tasks from traffic management to cost optimisation, according to the industry's most comprehensive report on application strategy.
F5's 2025 State of Application Strategy Report, which surveys global IT decision makers, found that 96 per cent of organisations are now deploying AI models, up from a quarter in 2023.
There is also a growing willingness to elevate AI to the heart of business operations. Almost three-quarters of respondents (72 per cent) said they want to use AI to optimise app performance, while 59 per cent support using AI both to optimise costs and to inject security rules that automatically mitigate zero-day vulnerabilities.
Today, half of organisations are using AI gateways to connect applications to AI tools, and another 40 per cent expect to be doing so in the next 12 months. Most are using this technology to protect and manage AI models (62 per cent), to provide a central point of control (55 per cent), and to protect their companies from sensitive data leaks (55 per cent).
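For readers unfamiliar with the pattern, the sketch below shows in simplified form what the gateway's control point can look like: a single policy layer that redacts sensitive data and enforces limits before a prompt reaches any model. It is an illustrative sketch only; the redaction rules, field names, and limits are assumptions, not details from the report.

```python
# Illustrative sketch only: a gateway-style policy check applied to prompts
# before they are forwarded to an upstream AI model. Redaction patterns,
# limits, and names are assumptions, not details from the SOAS Report.
import re
from dataclasses import dataclass

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD = re.compile(r"\b\d{4}(?:[ -]?\d{4}){3}\b")  # simplistic 16-digit card pattern

@dataclass
class GatewayDecision:
    allowed: bool
    prompt: str
    reason: str = ""

def apply_gateway_policy(prompt: str, max_chars: int = 4000) -> GatewayDecision:
    """Central control point: enforce a size limit and redact obvious
    sensitive data before any model sees the prompt."""
    if len(prompt) > max_chars:
        return GatewayDecision(False, prompt, "prompt exceeds size limit")
    redacted = EMAIL.sub("[REDACTED-EMAIL]", prompt)
    redacted = CARD.sub("[REDACTED-CARD]", redacted)
    return GatewayDecision(True, redacted)

if __name__ == "__main__":
    decision = apply_gateway_policy(
        "Email jane.doe@example.com about card 4111 1111 1111 1111"
    )
    print(decision.allowed, decision.prompt)
```

In production, gateways of this kind typically also handle authentication, logging, and routing across multiple model backends, which is what allows them to double as the central point of control respondents describe.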
'This year's SOAS Report shows that IT decision makers are becoming confident about embedding AI into ops,' said Lori MacVittie, F5 Distinguished Engineer. 'We are fast moving to a point where AI will be trusted to operate autonomously at the heart of an organisation, generating and deploying code that helps to cut costs, boost efficiency, and mitigate security problems. That is what we mean when we talk about AIOps, and it is now becoming a reality.'
Operational Readiness and API Challenges Remain
Despite growing AI confidence, the SOAS Report highlights several enduring challenges. For organisations currently deploying AI models, the number one concern is AI model security.
And while AI tools are more autonomous than ever, operational readiness gaps still exist: 60 per cent of organisations feel bogged down by manual workflows, and 54 per cent say skills shortages are a barrier to AI development.
Furthermore, almost half (48 per cent) identified the cost of building and operating AI workloads as a problem, up from 42 per cent last year.
A greater proportion of organisations also said that they have not established a scalable data practice (39 per cent vs. 33 per cent in 2024) and that they do not trust AI outputs due to potential bias or hallucinations (34 per cent vs. 27 per cent). However, fewer complained about the quality of their data (48 per cent, down from 56 per cent last year).
APIs were another concern. 58 per cent reported they have become a pain point, and some organisations spend as much as half of their time managing complex configurations involving numerous APIs and languages. Working with vendor APIs (31 per cent), custom scripting (29 per cent), and integrating with ticketing and management systems (23 per cent) were flagged as the most time-consuming automation-related tasks.
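As a rough illustration of the standardisation problem, the sketch below wraps a hypothetical vendor client behind one common interface so that automation code is written once rather than per vendor. The vendor name, methods, and parameters are invented for the example; they do not represent any real SDK.

```python
# Rough illustration of standardising automation across vendor APIs: a thin
# adapter exposes one interface, so scripts are written once. VendorAClient
# and its methods are hypothetical, not a real vendor SDK.
from typing import Protocol

class LoadBalancerAPI(Protocol):
    def set_pool_weight(self, pool: str, weight: int) -> None: ...

class VendorAClient:
    """Hypothetical vendor SDK that expresses weights as a percentage."""
    def update_pool(self, name: str, pct: int) -> None:
        print(f"vendor-a: pool={name} weight={pct}%")

class VendorAAdapter:
    """Maps the common interface onto the vendor-specific call."""
    def __init__(self, client: VendorAClient) -> None:
        self._client = client

    def set_pool_weight(self, pool: str, weight: int) -> None:
        self._client.update_pool(pool, weight)

def drain_pool(lb: LoadBalancerAPI, pool: str) -> None:
    # Automation is written once against LoadBalancerAPI, regardless of
    # which vendor adapter sits underneath.
    lb.set_pool_weight(pool, 0)

if __name__ == "__main__":
    drain_pool(VendorAAdapter(VendorAClient()), "checkout-v2")
```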
'Organisations need to focus on the simplification and standardisation of operations, including streamlining APIs, technologies, and tasks,' said MacVittie. 'They should also recognise that AI systems are themselves well-suited to handle complexity autonomously by generating and deploying policies or solving workflow issues. Operational simplicity is not just something on which AI is going to rely, but which it will itself help to deliver.'
Hybrid App Deployments Prevail
Allied to soaring AI appetites is a greater reliance on hybrid cloud architectures.
According to the SOAS Report, 94 per cent of organisations are deploying applications across multiple environments – including public clouds, private clouds, on-premises data centres, edge computing, and colocation facilities – to meet varied scalability, cost, and compliance requirements.
Consequently, most decision makers see hybrid environments as critical to their operational flexibility. 91 per cent cited adaptability to fluctuating business needs as the top benefit of adopting multiple clouds, followed by improved app resiliency (68 per cent) and cost efficiencies (59 per cent).
A hybrid approach is also reflected in deployment strategies for AI workloads, with 51 per cent planning to use models across both cloud and on-premises environments for the foreseeable future.
Significantly, 79 per cent of organisations recently repatriated at least one application from the public cloud back to an on-premises or colocation environment, citing cost control, security concerns, and predictability. This marks a dramatic rise from 13 per cent just four years ago, further underscoring the importance of preserving flexibility beyond public cloud reliance.
Still, the hybrid model can prove a headache for some. Inconsistent delivery policies (reported by 53 per cent of respondents) and fragmented security strategies (47 per cent) are top of mind in this respect.
'While spreading applications across different environments and cloud providers can bring challenges, the benefits of being cloud-agnostic are too great to ignore. It has never been clearer that the hybrid approach to app deployment is here to stay,' said Cindy Borovick, Director of Market and Competitive Intelligence, F5.
APCJ AI Adoption and Challenges – Key Highlights:
AI Gateways on the Rise: Nearly half of APCJ organisations (49 per cent) are already using AI gateways to connect applications to AI tools, with another 46 per cent planning to do so in the next 12 months.
Top Use Cases for AI Gateways: Among those leveraging AI gateways, the most common applications include protecting and managing AI models (66 per cent), preventing sensitive data leaks (61 per cent), and observing AI traffic and application demand (61 per cent).
Data and Trust Challenges: Over half (53 per cent) struggle with immature data quality, and 45 per cent are deterred by the high costs of building and running AI workloads.
Hybrid Complexity: The hybrid model of AI deployment introduces hurdles, with 79 per cent citing inconsistent security policies, 59 per cent highlighting delivery inconsistencies, and 16 per cent dealing with operational difficulties.
Toward a Programmable, AI-Driven Future
Looking ahead, the SOAS Report suggests that organisations aiming to unlock AI's full potential should focus on creating programmable IT environments that standardise and automate app delivery and security policies.
By 2026, AI is expected to move from isolated tasks to orchestrating end-to-end processes, marking a shift toward complete automation within IT operations environments. Platforms equipped with natural language interfaces and programmable capabilities will increasingly eliminate the need for traditional management consoles, streamlining IT workflows with unprecedented precision.
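A minimal sketch of what "programmable" can mean in practice is shown below: delivery and security policy held as declarative data that tooling, or an AI agent, can generate, validate, and apply. The field names and rules are assumptions for illustration, not a description of any F5 product.

```python
# Minimal sketch of policy-as-data: a declarative delivery/security policy
# that tooling (or an AI agent) can generate, validate, and apply. Field
# names and rules are illustrative assumptions.
POLICY = {
    "app": "payments-api",
    "environments": ["on-prem", "public-cloud"],
    "tls_min_version": "1.2",
    "rate_limit_rps": 500,
    "waf_enabled": True,
}

REQUIRED_FIELDS = {"app", "environments", "tls_min_version", "waf_enabled"}

def validate(policy: dict) -> list:
    """Return a list of problems; an empty list means the policy may be applied."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - policy.keys()]
    if policy.get("tls_min_version", "1.0") < "1.2":
        problems.append("TLS below 1.2 is not permitted")
    if not policy.get("waf_enabled", False):
        problems.append("WAF must be enabled")
    return problems

if __name__ == "__main__":
    print(validate(POLICY) or "policy OK")
```

Because the policy is plain data, the same document can be generated by a natural language interface, checked automatically, and applied consistently across every environment an application runs in.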
'Flexibility and automation are no longer optional—they are critical for navigating complexity and driving transformation at scale,' Borovick emphasised. 'Organisations that establish programmable foundations will not only enhance AI's potential but create IT strategies capable of scaling, adapting, and delivering exceptional customer experiences in the modern age.'

Related Articles

AI drives 80 percent of phishing with USD $112 million lost in India

Techday NZ

Artificial intelligence has become the predominant tool in cybercrime, according to recent research and data from law enforcement and the cybersecurity sector.

AI's growing influence
A June 2025 report revealed that AI is now utilised in 80 percent of all phishing campaigns analysed this year. This marks a shift from traditional, manually created scams to attacks fuelled by machine-generated deception. Concurrently, Indian police recorded that criminals stole the equivalent of USD $112 million in a single state between January and May 2025, attributing the sharp rise in financial losses to AI-assisted fraudulent operations. These findings are reflected in the daily experiences of security professionals, who observe an increasing use of automation in social engineering, malware development, and reconnaissance. The pace at which cyber attackers are operating is a significant challenge for current defensive strategies.

Methods of attack
Large language models are now being deployed to analyse public-facing employee data and construct highly personalised phishing messages. These emails replicate a victim's communication style, job role and business context. Additionally, deepfake technology has enabled attackers to create convincing audio and video content. Notably, an incident in Hong Kong this year saw a finance officer send HK $200 million after participating in a deepfake video call bearing the likeness of their chief executive. Generative AI is also powering the development of malware capable of altering its own code and behaviour within hours. This constant mutation enables it to bypass traditional defences like endpoint detection and sandboxing solutions. Another tactic, platform impersonation, was highlighted by Check Point, which identified fake online ads for a popular AI image generator. These ads redirected users to malicious software disguised as legitimate installers, merging advanced loader techniques with sophisticated social engineering. The overall result is a landscape where AI lowers the barriers to entry for cyber criminals while amplifying the reach and accuracy of their attacks.

Regulatory landscape
Regulators are under pressure to keep pace with the changing threat environment. The European Union's AI Act, described as the first horizontal regulation of its kind, became effective last year. However, significant obligations affecting general-purpose AI systems will begin from August 2025. Industry groups in Brussels have requested a delay on compliance deadlines due to uncertainty over some of the rules, but firms developing or deploying AI will soon be subject to financial penalties for not adhering to the regulations. Guidance issued under the Act directly links the risks posed by advanced AI models to cybersecurity, including the creation of adaptive malware and the automation of phishing. This has created an expectation that security and responsible AI management are now interrelated priorities for organisations. Company boards are expected to treat the risks associated with generative models with the same seriousness as data protection or financial governance risks.

Defensive measures
A number of strategies have been recommended in response to the evolving threat environment. Top of the list is the deployment of behaviour-based detection systems that use machine learning in conjunction with threat intelligence, as traditional signature-based tools struggle against ever-changing AI-generated malware. Regular vulnerability assessments and penetration testing, ideally by CREST-accredited experts, are also regarded as essential to expose weaknesses overlooked by both automated and manual processes. Verification protocols for audio and video content are another priority. Using additional communication channels or biometric checks can help prevent fraudulent transactions initiated by synthetic media. Adopting zero-trust architectures, which strictly limit user privileges and segment networks, is advised to contain potential breaches. Teams managing AI-related projects should map inputs and outputs, track possible abuse cases, and retain detailed logs in order to meet audit obligations under the forthcoming EU regulations. Staff training programmes are also shifting focus. Employees are being taught to recognise subtle cues and nuanced context, rather than relying on spotting poor grammar or spelling mistakes as indicators of phishing attempts. Training simulations must evolve alongside the sophistication of modern cyber attacks.

The human factor
Despite advancements in technology, experts reiterate that people remain a core part of the defence against AI-driven cybercrime. Attackers are leveraging speed and scale, but defenders can rely on creativity, expertise, and interdisciplinary collaboration. "Technology alone will not solve AI-enabled cybercrime. Attackers rely on speed and scale, but defenders can leverage creativity, domain expertise and cross-disciplinary thinking. Pair seasoned red-teamers with automated fuzzers; combine SOC analysts' intuition with real-time ML insights; empower finance and HR staff to challenge 'urgent' requests no matter how realistic the voice on the call," said Himali Dhande, Cybersecurity Operations Lead at Borderless CS.

The path ahead
There is a consensus among experts that the landscape has been permanently altered by the widespread adoption of AI. It is increasingly seen as necessary for organisations to shift from responding to known threats to anticipating future methods of attack. Proactive security, embedded into every project and process, is viewed as essential not only for compliance but also for continued protection. Borderless CS stated it "continues to track AI-driven attack vectors and integrate them into our penetration-testing methodology, ensuring our clients stay ahead of a rapidly accelerating adversary. Let's shift from reacting to yesterday's exploits to pre-empting tomorrow's."
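As a toy example of the behaviour-based approach mentioned above, the sketch below scores a login against a per-user baseline rather than matching known-bad signatures. The fields and thresholds are invented for illustration; real systems combine far more signals with threat intelligence.

```python
# Toy illustration (not any vendor's product) of behaviour-based detection:
# score events against a per-user baseline instead of matching signatures.
# Thresholds and fields are assumptions.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class UserBaseline:
    countries: set = field(default_factory=set)
    typical_hours: set = field(default_factory=set)

def score_login(baseline: UserBaseline, country: str, hour: int) -> int:
    """Return a simple anomaly score; higher means more unusual."""
    score = 0
    if baseline.countries and country not in baseline.countries:
        score += 2          # never seen this country for this user
    if baseline.typical_hours and hour not in baseline.typical_hours:
        score += 1          # outside the user's usual working hours
    return score

if __name__ == "__main__":
    baselines = defaultdict(UserBaseline)
    baselines["alice"].countries.add("NZ")
    baselines["alice"].typical_hours.update(range(8, 18))
    print(score_login(baselines["alice"], country="RO", hour=3))  # prints 3
```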

APAC leaders show lowest AI confidence but see steady ROI gains

Techday NZ

Business leaders across the Asia-Pacific (APAC) region display lower levels of confidence in their organisations' ability to use artificial intelligence (AI) effectively compared to counterparts in other global regions, according to the 2025 Celonis AI Process Optimisation Report.

The report, based on a survey of 1,620 global business leaders with a quarter of respondents from APAC and 10% from Australia, highlights that just 82% of APAC executives feel confident in their ability to leverage AI to drive business value. This figure represents the lowest confidence level among the surveyed regions, trailing the United States, where 92% express confidence.

Despite this confidence gap, AI adoption in APAC is making measurable progress. Seventy percent of business leaders in the region indicate that their current AI investments are delivering the return on investment expected, a result only slightly behind the US (82%) and ahead of Europe (69%).

"The data shows that APAC is not lagging in capability, but more so in confidence," said Pascal Coubard, APAC Lead at Celonis. "AI has no borders. Businesses in Australia and across the region shouldn't see their geography as a disadvantage. The real ROI from AI will come when companies apply it to the operational core of their business, not just on the surface, but across processes like payments, collections, and supply chain execution."

The research found high awareness of generative AI (GenAI) globally, with 81% of business leaders now utilising foundational GenAI models for functions such as developer productivity, knowledge management, and customer service. GenAI-powered chatbots or virtual assistants have been deployed by 61% of surveyed organisations worldwide. The United States leads with 75% adoption, followed by APAC at 63% and Europe at 57%.

While these use cases are expanding, most organisations see themselves in the early stages of integrating AI. Sixty-four percent of leaders surveyed globally believe AI will generate significant return on investment in the coming year, while nearly three quarters (74%) plan to increase their AI budgets. Expectations are rising with this investment, and 73% of companies are aiming to launch department-specific AI use cases.

Process intelligence concerns
Confidence in the potential of AI is moderated by process challenges. More than half (58%) of business leaders surveyed feel that the current state of their processes may restrict the benefits AI can deliver, with nearly a quarter (24%) expressing strong agreement on this point. A large majority (89%) state that effective AI deployment requires a deep understanding of organisational processes, described as "process context."

"Leaders recognise that you can't optimise what you don't understand," said Coubard. "They're increasingly aware that process visibility and intelligence are essential for unlocking the full value of enterprise AI."

In APAC, these concerns are particularly pronounced. Sixty-two percent of APAC leaders express worry over a lack of process understanding potentially limiting AI success. This is higher than the figures reported in Europe (60%) and the US (55%). Ninety-three percent of US leaders say AI requires detailed operational context to reach its potential, the highest proportion of any region surveyed.

Process mining and departmental perspectives
To tackle process visibility challenges, businesses are increasingly turning to process mining technologies. Thirty-nine percent of companies now use process mining, with over half (52%) planning to adopt such tools in the next 12 months. This technology underpins process intelligence initiatives, aiming to provide the context AI systems require.

The report notes that confidence in AI varies across organisational departments. Process and Operations leaders are the most confident in current AI use cases (76%), closely followed by Finance & Shared Services (75%), IT (73%), and Supply Chain (68%). However, only 61% of Supply Chain leaders believe AI will deliver significant return on investment in the next year, compared to 66% in Process & Operations and in IT.

Year ahead
The survey's findings point to 2025 as a pivotal period for enterprise AI. Business leaders are set to increase investment and seek refined strategies to pair AI with appropriate process context, using technologies such as process intelligence to underwrite these efforts.

"AI isn't just a tech upgrade, it's a new operating model," said Coubard. "But to maximise the ROI of their AI deployments, businesses need AI powered with the process knowledge and business context provided by Celonis Process Intelligence."
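To make the term concrete, the sketch below shows the core step behind process mining: reconstructing which activities directly follow one another from an event log. The event log is invented for illustration and this is not Celonis's implementation.

```python
# Minimal sketch of the core process-mining step: build a directly-follows
# count from an event log. The log below is invented for illustration.
from collections import Counter, defaultdict

events = [  # (case_id, timestamp, activity)
    ("order-1", 1, "create order"), ("order-1", 2, "approve"), ("order-1", 3, "ship"),
    ("order-2", 1, "create order"), ("order-2", 2, "reject"),
]

def directly_follows(log):
    """Count how often activity A is directly followed by activity B per case."""
    by_case = defaultdict(list)
    for case_id, ts, activity in sorted(log, key=lambda e: (e[0], e[1])):
        by_case[case_id].append(activity)
    pairs = Counter()
    for trace in by_case.values():
        pairs.update(zip(trace, trace[1:]))
    return pairs

if __name__ == "__main__":
    for (a, b), n in directly_follows(events).items():
        print(f"{a} -> {b}: {n}")
```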
