
IT leaders embrace hybrid AI strategies amid rising challenges
The report, F5's latest State of Application Strategy (SOAS) study, is based on responses from global IT decision makers. It reveals that 96 per cent of organisations have now deployed AI models, up from 25 per cent in 2023. This points to a notable shift in approach, as leaders trust AI to perform functions ranging from traffic management to cost optimisation.
Nearly three-quarters of respondents (72 per cent) intend to use AI to optimise application performance. Additionally, 59 per cent support leveraging AI for cost optimisation and for injecting security rules that automatically mitigate zero-day vulnerabilities.
The adoption of AI gateways (tools that connect applications to AI services) has also risen. Half of organisations currently use AI gateways, and another 40 per cent expect to adopt them within the next year. The principal uses of these gateways are protecting and managing AI models (62 per cent), serving as central points of control (55 per cent), and preventing sensitive data leaks (55 per cent).
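To make the gateway concept concrete, the sketch below shows the kind of chokepoint the report describes: one registration point for model backends, with policy (here, crude redaction of credential-like and card-like strings) applied before any prompt leaves the organisation. This is a minimal illustration in Python; the class, patterns, and backend names are invented for this example and do not represent F5's or any vendor's product.

```python
import re
from typing import Callable

# Hypothetical AI gateway sketch: a single chokepoint that routes
# application requests to AI backends while enforcing policy.
# All names and patterns here are illustrative assumptions.

SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # crude credential match
    re.compile(r"\b\d{13,16}\b"),                 # candidate card numbers
]

class AIGateway:
    def __init__(self) -> None:
        self.backends: dict[str, Callable[[str], str]] = {}
        self.audit_log: list[str] = []

    def register(self, name: str, backend: Callable[[str], str]) -> None:
        """Attach a model backend behind the gateway."""
        self.backends[name] = backend

    def _redact(self, prompt: str) -> str:
        """Strip sensitive data before it reaches the model (one SOAS use case)."""
        for pattern in SECRET_PATTERNS:
            prompt = pattern.sub("[REDACTED]", prompt)
        return prompt

    def query(self, model: str, prompt: str) -> str:
        """Central point of control: route, sanitise, and audit every call."""
        safe_prompt = self._redact(prompt)
        self.audit_log.append(f"model={model} chars={len(safe_prompt)}")
        return self.backends[model](safe_prompt)

if __name__ == "__main__":
    gw = AIGateway()
    gw.register("echo-model", lambda p: f"echo: {p}")  # stand-in for a real model
    print(gw.query("echo-model", "summarise this. api_key=sk-123"))
```

Routing every call through one object is what makes the "central point of control" and leak-prevention use cases cited in the survey practical to enforce.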
Lori MacVittie, Distinguished Engineer at F5, commented on the findings: "This year's SOAS Report shows that IT decision makers are becoming confident about embedding AI into ops. We are fast moving to a point where AI will be trusted to operate autonomously at the heart of an organisation, generating and deploying code that helps to cut costs, boost efficiency, and mitigate security problems. That is what we mean when we talk about AIOps, and it is now becoming a reality."
Despite heightened enthusiasm, the report highlights ongoing operational barriers. Security of AI models remains the top concern for organisations currently deploying such models. Operational readiness, in particular, is a challenge, with 60 per cent citing manual workflows as a hindrance and 54 per cent reporting skills shortages that complicate AI development efforts.
Budgetary constraints also persist. Forty-eight per cent identified the costs associated with building and operating AI workloads as problematic, up from 42 per cent last year.
Data practices continue to evolve, with more organisations indicating that their data handling is not yet scalable (39 per cent, up from 33 per cent in 2024). Trust in AI outputs is another issue: 34 per cent say they do not trust model outputs because of potential bias or erroneous results, up from 27 per cent previously. Perceived data quality has improved, however, with 48 per cent reporting concerns this year, down from 56 per cent the year before.
The increased integration of APIs also brings its own difficulties. Some 58 per cent of respondents noted APIs as a pain point, with certain organisations dedicating as much as half of their time to managing complex API configurations and coding languages. The most time-consuming tasks involve vendor APIs (31 per cent), custom scripting (29 per cent), and integrating with ticketing or management systems (23 per cent).
MacVittie observed, "Organisations need to focus on the simplification and standardisation of operations, including streamlining APIs, technologies, and tasks. They should also recognise that AI systems are themselves well-suited to handle complexity autonomously by generating and deploying policies or solving workflow issues. Operational simplicity is not just something on which AI is going to rely, but which it will itself help to deliver."
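MacVittie's standardisation point can be illustrated with a thin adapter layer: automation is written once against a common interface, and vendor-specific payloads stay inside per-vendor classes. The sketch below is hypothetical; the vendor classes and the `Ticket` shape are assumptions for illustration, not real product APIs.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical adapter layer: normalise dissimilar vendor APIs behind one
# interface so automation is written once. Vendor names and payload
# shapes are invented for illustration.

@dataclass
class Ticket:
    title: str
    severity: str

class TicketBackend(ABC):
    @abstractmethod
    def create(self, ticket: Ticket) -> str:
        """Return a backend-specific ticket identifier."""

class VendorARest(TicketBackend):
    def create(self, ticket: Ticket) -> str:
        # Real code would POST to vendor A's REST endpoint here.
        return f"A-{hash((ticket.title, ticket.severity)) & 0xFFFF}"

class VendorBSoap(TicketBackend):
    def create(self, ticket: Ticket) -> str:
        # Real code would build a SOAP envelope for vendor B here.
        return f"B-{abs(hash(ticket.title)) % 10000}"

def open_incident(backend: TicketBackend, title: str, severity: str) -> str:
    """Single entry point: callers never touch vendor-specific payloads."""
    return backend.create(Ticket(title=title, severity=severity))

if __name__ == "__main__":
    for backend in (VendorARest(), VendorBSoap()):
        print(open_incident(backend, "Gateway latency spike", "high"))
```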
The report identifies a shift towards hybrid cloud architectures, with 94 per cent of organisations running applications across multiple environments, including public and private clouds, on-premises data centres, edge, and colocation facilities. This approach seeks to balance scalability, cost, and compliance needs.
Adaptability was cited as a major advantage of multi-cloud deployments, with 91 per cent of decision makers noting the ability to respond to changing business requirements, followed by improved application resiliency (68 per cent) and cost savings (59 per cent).
Most organisations now use a hybrid deployment approach for AI workloads as well, with 51 per cent maintaining models across both cloud and on-premises environments. An increased number of organisations have also repatriated one or more applications from public cloud to on-premises or colocation for reasons relating to cost, security, and predictability—79 per cent reported having done so, up significantly from 13 per cent four years prior.
Managing hybrid environments is not without its challenges. Inconsistent delivery policies were reported by 53 per cent of respondents, while 47 per cent noted fragmented security strategies.
Cindy Borovick, Director of Market and Competitive Intelligence at F5, said, "While spreading applications across different environments and cloud providers can bring challenges, the benefits of being cloud-agnostic are too great to ignore. It has never been clearer that the hybrid approach to app deployment is here to stay."
Data from Asia Pacific, China, and Japan (APCJ) reflects these global trends. Almost half (49 per cent) of APCJ organisations already employ AI gateways, with a further 46 per cent set to follow within the coming year. Their main objectives are protecting AI models (66 per cent), preventing sensitive data leaks (61 per cent), and monitoring AI application demand (61 per cent). Over half (53 per cent) struggle with data maturity, and 45 per cent are concerned about the cost of AI deployments. The hybrid model introduces additional complexity, with 79 per cent reporting inconsistent security policies, 59 per cent noting inconsistent delivery, and 16 per cent citing operational difficulties.
The report suggests a way forward through the creation of programmable IT environments that standardise and automate application delivery and security. By 2026, AI is anticipated to move beyond isolated tasks to managing entire IT processes. Platforms with natural language interfaces and programmable features are expected to streamline workflows, reducing the need for conventional management consoles.
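One hedged sketch of what such a programmable foundation can look like: delivery and security settings expressed as declarative data, validated once, then applied identically to every environment. The field names and environment list below are assumptions for illustration, not the report's schema.

```python
# Illustrative sketch of a "programmable foundation": delivery and
# security settings as declarative data, validated once and applied to
# every environment. Fields and environments are assumed, not prescribed.

POLICY = {
    "tls_min_version": "1.2",
    "waf_enabled": True,
    "health_check_path": "/healthz",
}

ENVIRONMENTS = ["public-cloud", "private-cloud", "on-prem", "edge"]

def validate(policy: dict) -> None:
    """Fail fast so one bad policy never reaches any environment."""
    assert policy["tls_min_version"] in {"1.2", "1.3"}, "weak TLS floor"
    assert isinstance(policy["waf_enabled"], bool)

def apply_everywhere(policy: dict, environments: list[str]) -> None:
    """Roll the same policy out everywhere, avoiding the inconsistent
    delivery and security configurations the report flags."""
    validate(policy)
    for env in environments:
        # A real system would call each environment's management API here.
        print(f"applied {policy} to {env}")

if __name__ == "__main__":
    apply_everywhere(POLICY, ENVIRONMENTS)
```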
Borovick added, "Flexibility and automation are no longer optional—they are critical for navigating complexity and driving transformation at scale. Organisations that establish programmable foundations will not only enhance AI's potential but create IT strategies capable of scaling, adapting, and delivering exceptional customer experiences in the modern age."

Related Articles


Techday NZ
Sensitive data exposure rises with employee use of GenAI tools
Harmonic Security has released its quarterly analysis, finding that a significant proportion of the data employees share with Generative AI (GenAI) tools and AI-enabled SaaS applications contains sensitive information. The analysis covered a dataset of 1 million prompts and 20,000 files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June. According to the findings, 22% of files (4,400 in total) and 4.37% of prompts (43,700 in total) included sensitive data. The categories of sensitive data encompassed source code, access credentials, proprietary algorithms, merger and acquisition (M&A) documents, customer or employee records, and internal financial information.

Use of new GenAI tools
The data highlights that in the second quarter alone, organisations on average saw employees begin using 23 previously unreported GenAI tools. This expanding variety of tools increases the administrative load on security teams, who are required to vet each tool to ensure it meets security standards. A notable proportion of AI tool use occurs through personal accounts, which may be unsanctioned or lack sufficient safeguards. Almost half (47.42%) of sensitive uploads to Perplexity were made via standard, non-enterprise accounts. The numbers were lower for other platforms, with 26.3% of sensitive data entering ChatGPT through personal accounts, and just 15% for Google Gemini.

Data exposure by platform
Analysis of sensitive prompts identified ChatGPT as the most common origin point in Q2, accounting for 72.6%, followed by Microsoft Copilot at 13.7%, Google Gemini at 5.0%, Claude at 2.5%, Poe at 2.1%, and Perplexity at 1.8%. Code leakage was the most prevalent form of sensitive data exposure, particularly within ChatGPT, Claude, DeepSeek, and Baidu Chat.

File uploads and risks
The report found that, on average, organisations uploaded 1.32GB of files in the second quarter, with PDFs making up approximately half of all uploads. Of these files, 21.86% contained sensitive data. Sensitive information was more concentrated in files than in prompts: files accounted for 79.7% of all stored credit card exposure incidents, 75.3% of customer profile leaks, 68.8% of employee personally identifiable information (PII) incidents, and 52.6% of exposure volume related to financial projections.

Less visible sources of risk
GenAI risk does not only arise from well-known chatbots. Increasingly, regular SaaS tools that integrate large language models (LLMs), often without clear labelling as GenAI, are becoming sources of risk as they access and process sensitive information. Canva was reportedly used for documents containing legal strategy, M&A planning, and client data. Replit and other tools were involved with proprietary code and access keys, while Grammarly and Quillbot edited contracts, client emails, and internal legal content.

International exposure
Use of Chinese GenAI applications was cited as a concern. The study found that 7.95% of employees in the average enterprise engaged with a Chinese GenAI tool, leading to 535 distinct sensitive exposure incidents. Within these, 32.8% related to source code, access credentials, or proprietary algorithms, 18.2% involved M&A documents and investment models, 17.8% exposed customer or employee PII, and 14.4% contained internal financial data.
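For a sense of how prompt-level classification of this kind can work, here is a deliberately simple sketch. The three regex detectors are crude stand-ins chosen for illustration; they are not Harmonic Security's classifiers, which the report does not describe in detail.

```python
import re

# Minimal illustration of prompt-level sensitive-data detection, in the
# spirit of the analysis above. These regexes are deliberately crude
# assumptions, not Harmonic Security's actual method.

DETECTORS = {
    "access_credential": re.compile(r"(?i)\b(?:api[_-]?key|secret|token)\b\s*[:=]\s*\S+"),
    "email_pii": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data found in one prompt."""
    return [name for name, rx in DETECTORS.items() if rx.search(prompt)]

if __name__ == "__main__":
    samples = [
        "Summarise our Q3 roadmap",            # clean
        "Debug this: API_KEY = sk-live-0042",  # credential leak
        "Draft a reply to jane.doe@example.com",  # PII
    ]
    for s in samples:
        hits = scan_prompt(s)
        print(f"{'FLAG' if hits else 'ok  '} {hits} :: {s}")
```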
Preventative measures
Harmonic Security Chief Executive Officer and Co-founder Alastair Paterson, referencing the protections offered to customers and the wider risks posed by embedded AI in workplace tools, said: "The good news for Harmonic Security customers is that this sensitive customer data, personally identifiable information (PII), and proprietary file contents never actually left any customer tenant, it was prevented from doing so. But had organizations not had browser based protection in place, sensitive information could have ended up training a model, or worse, in the hands of a foreign state. AI is now embedded in the very tools employees rely on every day and in many cases, employees have little knowledge they are exposing business data."

Harmonic Security advises enterprises to seek visibility into all tool usage, including tools available on free tiers and those with embedded AI, to monitor the types of data being entered into GenAI systems, and to enforce context-aware controls at the data level.

The analysis used the Harmonic Security Browser Extension, which records usage across SaaS and GenAI platforms and sanitises the information for aggregate study. Only anonymised and aggregated data from customer environments was used.


Scoop
Statement On AI In Universities From Aotearoa Communication & Media Scholars Network
We speak as a network of Aotearoa academics working in the inter-disciplines of Communication and Media Studies across our universities. Among us we have shared expertise in the political, social and economic impacts of commercially distributed and circulated generative artificial intelligence ('AI') in our university workplaces. While there is a tendency in our universities to be resigned to AI as an unstoppable and unquestionable technological force, our aim is to level the playing field to promote open, critical and democratic debate. With this in mind, we make the following points:

For universities…
· AI is not an inevitable technological development which must be incorporated into higher education; rather, it is the result of particular techno-capitalist ventures, a context which needs to be recognised and considered;
· AI, as a corporate product of private companies such as OpenAI, Google, etc., encroaches on the public role of the university and its role as critic and conscience, and marginalises voices which might critique business interests;

For researchers…
· AI impedes rather than supports productive intellectual work because it erodes important critical thinking skills; instead, it devolves human scholarly work and critical engagement with ideas (elements vital to our cultural and social life) to software that produces 'ready-made', formulaic and backward-looking 'results' that do not advance knowledge;
· AI promotes an unethical, reckless approach to research which can promote 'hallucinations' and over-valorise disruption for its own sake rather than support quality research;
· AI normalises industrial-scale theft of intellectual property as our written work is fed into AI datasets largely without citation or compensation;
· AI limits the productivity of academic staff by requiring them to invent new forms of assessment which subvert AI, police students and their use of AI, or assess lengthy 'chat logs', rather than engage with students in activities and assessments that require deep, critical thinking and sharing, questioning and articulating ideas with peers;

For students…
· AI tools create anxiety for students; some are falsely accused of using generative AI when they haven't, or are very stressed that it could happen to them;
· AI tools such as ChatGPT are contributing to mental-health crises and delusions in various ways; promoting the use of generative AI in academic contexts is thus unethical, particularly when considering students and the role of universities in pastoral care;
· AI thus undermines the fundamental relationships between teacher and student, academics and administration, and the university and the community by fostering an environment of distrust;

For Aotearoa New Zealand…
· AI clashes with Te Tiriti obligations around data sovereignty and threatens the possibility of data colonialism regarding te reo itself;
· AI is devastating for the environment in terms of energy and water use and the extraction of natural resources needed for the processors that AI requires.
Signed by:
Rosemary Overell, Senior Lecturer, Media, Film & Communications Programme, The University of Otago
Olivier Jutel, Lecturer, Media, Film & Communications Programme, The University of Otago
Emma Tennent, Senior Lecturer, Media & Communication, Te Herenga Waka Victoria University of Wellington
Rachel Billington, Lecturer, Media, Film & Communications Programme, The University of Otago
Brett Nicholls, Senior Lecturer, Media, Film & Communications Programme, The University of Otago
Yuki Watanabe, Lecturer, Media, Film & Communications Programme, The University of Otago
Sy Taffel, Senior Lecturer, Media Studies Programme, Massey University
Leon Salter, Senior Lecturer, Communications Programme, University of Auckland
Angela Feekery, Senior Lecturer, Communications Programme, Massey University
Ian Huffer, Senior Lecturer, Media Studies Programme, Massey University
Pansy Duncan, Senior Lecturer, Media Studies Programme, Massey University
Kevin Veale, Senior Lecturer, Media Studies Programme, Massey University
Peter A. Thompson, Associate Professor, Media & Communication Programme, Te Herenga Waka Victoria University of Wellington
Nicholas Holm, Associate Professor, Media Studies Programme, Massey University
Sean Phelan, Associate Professor, Massey University
Yuan Gong, Senior Lecturer, Media Studies Programme, Massey University
Chris McMillan, Teaching Fellow, Sociology Programme, University of Auckland
Cherie Lacey, Researcher, Centre for Addiction Research, University of Auckland
Thierry Jutel, Associate Professor, Film, Te Herenga Waka Victoria University of Wellington
Max Soar, Teaching Fellow, Political Communication, Te Herenga Waka Victoria University of Wellington
Lewis Rarm, Lecturer, Media and Communication, Te Herenga Waka Victoria University of Wellington
Tim Groves, Senior Lecturer, Film, Te Herenga Waka Victoria University of Wellington
Valerie Cooper, Lecturer, Media and Communication, Te Herenga Waka Victoria University of Wellington
Wayne Hope, Professor, Faculty of Design & Creative Technologies, Auckland University of Technology
Greg Treadwell, Senior Lecturer in Journalism, School of Communication Studies, Auckland University of Technology
Christina Vogels, Senior Lecturer, Critical Media Studies, School of Communication Studies, Auckland University of Technology

RNZ News
Amazon profits surge 35% as AI investments drive growth
By AFP

Amazon has reported a 35 percent jump in quarterly profits, with the e-commerce giant saying its major investments in artificial intelligence have been paying off. The Seattle-based company posted net profit of $18.2 billion (NZ$30.9 billion) for the second quarter ended June 30, compared with $13.5 billion (NZ$22.9 billion) in the same period last year. Net sales climbed 13 percent to $167.7 billion (NZ$284.7 billion), beating analyst expectations and signalling that the global company was weathering the impacts of the high-tariff trade policy under US President Donald Trump.

"Our conviction that AI will change every customer experience is starting to play out," chief executive Andy Jassy said, pointing to the company's expanded Alexa+ service and new AI shopping agents.

Amazon Web Services (AWS), the company's world-leading cloud computing division, led the charge, with sales jumping 17.5 percent to $30.9 billion (NZ$52.45 billion). The unit's operating profit rose to $10.2 billion (NZ$17.3 billion) from $9.3 billion (NZ$15.8 billion) a year earlier. The strong AWS performance reflects surging demand for cloud infrastructure to power AI applications, a trend that has benefited major cloud providers as companies race to adopt generative AI technologies.

Despite the stellar results, investors seemed worried about Amazon's big cash outlays to pursue its AI ambitions, sending its share price more than three percent lower in after-hours trading. The company's free cash flow declined sharply to $18.2 billion (NZ$30.9 billion) for the trailing 12 months, down from $53 billion (NZ$90 billion) in the same period last year, as Amazon ramped up capital spending on AI infrastructure and logistics. The company spent $32.2 billion (NZ$54.7 billion) on property and equipment in the quarter, nearly double the $17.6 billion (NZ$29.9 billion) spent a year earlier, reflecting massive investments in data centres and backroom capabilities. Amazon has pledged to spend up to $100 billion (NZ$169.8 billion) this year, largely on AI-related investments for AWS.

For the current quarter, Amazon forecast net sales between $174.0 billion (NZ$295 billion) and $179.5 billion (NZ$304.8 billion), representing solid growth of 10-13 percent compared with the third quarter of 2024. Operating profit was expected to range from $15.5 billion (NZ$26.3 billion) to $20.5 billion (NZ$34.8 billion), lower than some had hoped for and likely another factor in investor disappointment.

- AFP
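As a quick sanity check, the growth figures quoted above can be reproduced from the dollar amounts in the article; the snippet below uses only those reported numbers (USD, billions).

```python
# Sanity-check the year-over-year figures quoted in the article,
# using only the USD amounts it reports (in billions).

def yoy_growth(current: float, prior: float) -> float:
    """Year-over-year change as a percentage."""
    return (current - prior) / prior * 100

print(f"Net profit growth: {yoy_growth(18.2, 13.5):.1f}%")           # ~34.8%, reported as 35%
print(f"AWS operating profit growth: {yoy_growth(10.2, 9.3):.1f}%")  # ~9.7%
print(f"Quarterly capex growth: {yoy_growth(32.2, 17.6):.1f}%")      # ~83%, i.e. "nearly double"
```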