Latest news with #AliceSteinglass


Al Bawaba
19-06-2025
- Business
- Al Bawaba
IT Leaders optimistic on Agentic AI, but Concerned by Organizational Readiness, Research Reveals
As AI adoption accelerates and cyber threats increase, nearly 8 in 10 IT security leaders recognize their security practices need transformation. Salesforce's latest State of IT data also reveals unanimous optimism about AI agents, with 100% of security leaders identifying at least one security concern that could be improved by AI agents. Despite this hope, the global survey of over 2,000 enterprise IT security leaders highlights significant implementation challenges ahead. Nearly half (48%) worry their data foundation isn't set up to get the most out of agentic AI, and over half (55%) aren't fully confident they have the appropriate guardrails to deploy AI agents.

Why it matters: Both the professionals charged with protecting a company's data and systems and the bad actors looking to exploit vulnerabilities are increasingly adding AI to their toolkits. Autonomous AI agents, which help security teams cut down on manual work, can free up humans' time for more complex problem solving. However, agentic AI deployments require robust data infrastructure and governance to be successful.

Salesforce perspective: 'Trusted AI agents are built on trusted data. IT security teams that prioritize data governance will be able to augment their security capabilities with agents while protecting data and staying compliant,' said Alice Steinglass, EVP & GM, Salesforce Platform, Integration and Automation.

Mohammed Alkhotani, SVP and GM, Salesforce Middle East, said: 'The latest State of IT report is a cause for both optimism and concern, and also aligns with the concerns we see among organizations in the Middle East. While the research underscores the confidence that organizations have in agentic AI to improve key aspects of their operations and processes, it also reveals significant concerns that must be addressed: It is clear that many IT security leaders are concerned about issues including the readiness of their organization's data foundation for AI, the state of their guardrails to deploy AI agents, and the potential for compliance challenges stemming from AI. Amid these anxieties, it is vital that organizations in the Middle East work with a trusted partner such as Salesforce, enabling them to scale up agentic AI quickly, effectively, and ethically.'

Security budgets ramp up as threats evolve
In addition to a familiar slate of risks like cloud security threats, malware, and phishing attacks, IT leaders now cite data poisoning — in which malicious actors compromise AI training data sets — among their top concerns. Resources are rising in response: 75% of organizations expect to increase security budgets over the coming year.

Regulatory environments add a wrinkle to AI implementation
While four-fifths of IT security leaders believe AI agents offer compliance opportunities, such as improving adherence to global privacy laws, nearly as many (79%) say they also present compliance challenges. This may stem in part from an increasingly complex and evolving regulatory environment across geographies and industries, and is compounded by compliance processes that remain largely unautomated and prone to error.
• Just 47% are fully confident they can deploy AI agents in compliance with regulations and standards.
• 83% of organizations say they haven't fully automated their compliance processes.

Trust is a cornerstone of successful AI, yet confidence is nascent
A recent consumer study found that trust in companies is on a precipitous decline, and three-fifths (60%) agree that advances in AI make a business's trustworthiness more critical. Furthermore, only 42% of consumers trust companies to use AI ethically, a decrease from 58% in 2023. IT security leaders see work to be done in earning this critical trust.
• 57% aren't fully confident in the accuracy or explainability of their AI outputs.
• 60% don't provide full transparency into how customer data is used in AI.
• 59% haven't perfected their ethical guidelines for AI use.

Data governance is a linchpin in enterprises' agentic evolution
Nearly half of IT security leaders aren't sure they have the quality data to underpin agents, or that they could deploy the technology with the right permissions, policies, and guardrails, but progress is being made. A recent survey of CIOs found that four times as much budget was allocated to data infrastructure and management as to AI, a signal that organizations were smartly laying the right groundwork for broader AI adoption.

AI agents offer a salve as adoption ramps up
According to the State of IT research, over 40% of IT security teams already use agents in their day-to-day operations — a figure that's anticipated to nearly double over the next two years. IT security leaders expect a range of benefits as their use of agents ramps up, ranging from threat detection to sophisticated auditing of AI model performance. Three quarters (75%) expect to use AI agents within two years — up from 41% today.

Security overhauls are on tap
In addition to the steps these teams must take to shore up their data foundations for the agentic era, over half admit they have work to do to bring their overall security and compliance practices up to par. Forty-seven percent believe their security and compliance practices are fully prepared for AI agent development and implementation.

The customer view: Arizona State University (ASU) is among the first universities to leverage Agentforce, Salesforce's digital labor platform for augmenting teams with trusted autonomous AI agents in the flow of work. ASU stresses the need for data relevancy, especially as the university advances its AI initiatives. ASU implemented Salesforce-acquired Own backup, recovery, and archiving solutions, providing ASU with a comprehensive approach to data management and addressing its needs for backup, recovery, compliance, and innovation.

Go deeper:
• Read the full State of IT: Security report
• Learn how Salesforce is powering a smarter agentic future with new governance enhancements
• Discover additional State of IT insights from the developer perspective
• Read more on why trust and guardrails are even more critical in the age of AI

Methodology: Data is sourced from a security, privacy, and compliance leader segment of a double-anonymous survey of IT decision-makers conducted from December 24, 2024 through February 3, 2025. Respondents represented Australia, Belgium, Brazil, Canada, Denmark, Finland, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Mexico, the Netherlands, New Zealand, Norway, Portugal, Singapore, South Korea, Spain, Sweden, Switzerland, Thailand, the United Arab Emirates, the United Kingdom, and the United States.
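The guardrail and compliance-automation gaps the report describes are organizational rather than tied to any particular tool, but a small illustration can make the idea concrete. Below is a minimal, hypothetical sketch (the class names, operations, and rules are invented for illustration and do not reflect Agentforce or any respondent's tooling) of the kind of deterministic policy gate that could sit in front of an AI agent's actions, the sort of automated pre-check most surveyed organizations say they have not yet put in place.

```python
from dataclasses import dataclass, field

# Hypothetical policy gate for an AI agent action.
# Names and rules are illustrative only; they do not describe
# Salesforce's products or the survey respondents' tooling.

@dataclass
class AgentAction:
    agent_id: str
    operation: str                                       # e.g. "read", "export", "delete"
    data_categories: set = field(default_factory=set)    # e.g. {"pii"}
    requires_human_approval: bool = False

@dataclass
class PolicyGate:
    allowed_operations: set
    restricted_categories: set    # data classes that always need human review

    def evaluate(self, action: AgentAction) -> tuple[bool, list]:
        """Return (approved, reasons). Deterministic checks run before
        the agent is allowed to execute anything."""
        reasons = []
        if action.operation not in self.allowed_operations:
            reasons.append(f"operation '{action.operation}' not permitted")
        touched = action.data_categories & self.restricted_categories
        if touched and not action.requires_human_approval:
            reasons.append(f"restricted data {sorted(touched)} requires human approval")
        return (not reasons, reasons)

if __name__ == "__main__":
    gate = PolicyGate(allowed_operations={"read", "summarize"},
                      restricted_categories={"pii", "payment"})
    action = AgentAction(agent_id="support-agent-1",
                         operation="export",
                         data_categories={"pii"})
    approved, reasons = gate.evaluate(action)
    print("approved" if approved else f"blocked: {reasons}")
```

In a real deployment such checks would be enforced by whatever orchestration layer dispatches agent actions, with each decision logged to support the compliance reporting the survey finds is still largely manual.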


Al Bawaba
18-06-2025
- Business
- Al Bawaba
IT Leaders in UAE optimistic on Agentic AI, but Concerned by Organizational Readiness, Research Reveals
As AI adoption accelerates and cyber threats increase, nearly eight in 10 IT security leaders in the UAE recognize their security practices need transformation. Salesforce's latest State of IT data also reveals unanimous optimism about AI agents, with 100% of security leaders in the UAE and globally identifying at least one security concern that could be improved by AI agents. Despite this hope, the global survey of over 2,000 enterprise IT security leaders from more than 24 countries, including the UAE, highlights significant implementation challenges ahead. Some 64% of UAE enterprise IT security leaders worry their data foundation isn't set up to get the most out of agentic AI, compared to 48% globally, and 42% aren't fully confident they have the appropriate guardrails to deploy AI agents.

Why it matters: Both the professionals charged with protecting a company's data and systems and the bad actors looking to exploit vulnerabilities are increasingly adding AI to their toolkits. Autonomous AI agents, which help security teams cut down on manual work, can free up humans' time for more complex problem solving. However, agentic AI deployments require robust data infrastructure and governance to be successful.

Salesforce perspective: 'Trusted AI agents are built on trusted data. IT security teams that prioritize data governance will be able to augment their security capabilities with agents while protecting data and staying compliant,' said Alice Steinglass, EVP & GM, Salesforce Platform, Integration and Automation.

Mohammed Alkhotani, SVP and GM, Salesforce Middle East, said: 'The latest State of IT report is a cause for both optimism and concern. While the research underscores the confidence that organizations in the UAE have in agentic AI to improve key aspects of their operations and processes, it also reveals significant concerns that must be addressed: It is clear that many IT security leaders in the UAE are concerned about issues including the readiness of their organization's data foundation for AI, the state of their guardrails to deploy AI agents, and the potential for compliance challenges stemming from AI. Amid these anxieties, it is vital that organizations in the UAE and the wider Middle East work with a trusted partner such as Salesforce, enabling them to scale up agentic AI quickly, effectively, and ethically.'

In addition to a familiar slate of risks like cloud security threats, malware, and phishing attacks, IT leaders now cite data poisoning — in which malicious actors compromise AI training data sets — among their top concerns. Resources are rising in response: 74% of organizations in the UAE expect to increase security budgets over the coming year.

Regulatory environments add a wrinkle to AI implementation
While 84% of IT security leaders in the UAE believe AI agents offer compliance opportunities, such as improving adherence to global privacy laws, nearly as many (86%) say they also present compliance challenges. This may stem in part from an increasingly complex and evolving regulatory environment across geographies and industries, and is compounded by compliance processes that remain largely unautomated and prone to error.
• Just 42% of UAE organizations are fully confident they can deploy AI agents in compliance with regulations and standards.
• 90% of organizations in the UAE say they haven't fully automated their compliance processes.

Trust is a cornerstone of successful AI, yet confidence is nascent
A recent consumer study found that trust in companies is on a precipitous decline, and three-fifths (60%) agree that advances in AI make a business's trustworthiness more critical. Furthermore, only 42% of consumers trust companies to use AI ethically, a decrease from 58% in 2023. IT security leaders see work to be done in earning this critical trust.
• 57% of organizations globally aren't fully confident in the accuracy or explainability of their AI outputs.
• 60% of organizations globally don't provide full transparency into how customer data is used in AI.
• 59% of organizations globally haven't perfected their ethical guidelines for AI use.

Data governance is a linchpin in enterprises' agentic evolution
Nearly half of IT security leaders in the UAE aren't sure they have the quality data to underpin agents, or that they could deploy the technology with the right permissions, policies, and guardrails, but progress is being made. A recent survey of CIOs found that four times as much budget was allocated to data infrastructure and management as to AI, a signal that organizations were smartly laying the right groundwork for broader AI adoption.

AI agents offer a salve as adoption ramps up
According to the State of IT research, 32% of IT security teams in the UAE already use agents in their day-to-day operations — a figure that's anticipated to nearly double over the next two years. IT security leaders expect a range of benefits as their use of agents ramps up, ranging from threat detection to sophisticated auditing of AI model performance. Some 80% of UAE organizations expect to use AI agents within two years — up from 32% today.

Security overhauls are on tap
In addition to the steps these teams must take to shore up their data foundations for the agentic era, over half of teams globally admit they have work to do to bring their overall security and compliance practices up to par. Just 38% of UAE IT security teams believe their security and compliance practices are fully prepared for AI agent development and implementation.

The customer view: Arizona State University (ASU) is among the first universities to leverage Agentforce, Salesforce's digital labor platform for augmenting teams with trusted autonomous AI agents in the flow of work. ASU stresses the need for data relevancy, especially as the university advances its AI initiatives. ASU implemented Salesforce-acquired Own backup, recovery, and archiving solutions, providing ASU with a comprehensive approach to data management and addressing its needs for backup, recovery, compliance, and innovation.

Go deeper:
• Read the full State of IT: Security report
• Learn how Salesforce is powering a smarter agentic future with new governance enhancements
• Discover additional State of IT insights from the developer perspective
• Read more on why trust and guardrails are even more critical in the age of AI

Methodology: Data is sourced from a security, privacy, and compliance leader segment of a double-anonymous survey of IT decision-makers conducted from December 24, 2024 through February 3, 2025.
Respondents represented Australia, Belgium, Brazil, Canada, Denmark, Finland, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Mexico, the Netherlands, New Zealand, Norway, Portugal, Singapore, South Korea, Spain, Sweden, Switzerland, Thailand, the United Arab Emirates, the United Kingdom, and the United States.


Techday NZ
10-06-2025
- Business
- Techday NZ
Agentic AI adoption rises in ANZ as firms boost security spend
New research from Salesforce has revealed that all surveyed IT security leaders in Australia and New Zealand (ANZ) believe that agentic artificial intelligence (AI) can help address at least one digital security concern within their organisations. According to the State of IT report, the deployment of AI agents in security operations is already underway, with 36 per cent of security teams in the region currently using agentic AI tools in daily activities—a figure projected to nearly double to 68 per cent over the next two years.

This surge in AI adoption is accompanied by rising investment, as 71 per cent of ANZ organisations plan to increase their security budgets in the coming year. While slightly lower than the global average (75 per cent), this signals a clear intent within the region to harness AI for strengthening cyber defences. AI agents are being relied upon for tasks ranging from faster threat detection and investigation to sophisticated auditing of AI model performance.

Alice Steinglass, Executive Vice President and General Manager of Salesforce's Platform, Integration, and Automation division, said, "Trusted AI agents are built on trusted data. IT security teams that prioritise data governance will be able to augment their security capabilities with agents while protecting data and staying compliant."

The report also highlights industry-wide optimism about AI's potential to improve security but notes hurdles in implementation. Globally, 75 per cent of surveyed leaders recognise their security practices need transformation, yet 58 per cent are concerned their organisation's data infrastructure is not yet capable of supporting AI agents to their full potential.

As both defenders and threat actors add AI to their arsenals, the risk landscape is evolving. Alongside well-known risks such as cloud security threats, malware, and phishing attacks, data poisoning has emerged as a new top concern. Data poisoning involves malicious actors corrupting AI training data sets to subvert AI model behaviour. This, together with insider threats and cloud risks, underscores the need for robust data governance and infrastructure.

Across the technology sector, the expanding use of AI agents is rapidly reshaping industry operations. Harsha Angeri, Vice President of Corporate Strategy and Head of AI Business at Subex, noted that AI agents equipped with large language models (LLMs) are already impacting fraud detection, business support systems (BSS), and operations support systems (OSS) in telecommunications. "We are seeing opportunities for fraud investigation using AI agents, with great interest from top telcos," Angeri commented, suggesting this development is altering longstanding approaches to software and systems architecture in the sector.

The potential of agentic AI extends beyond security and fraud prevention. Angeri highlighted the emergence of the "Intent-driven Network", where user intent is seamlessly translated into desired actions by AI agents. In future mobile networks, customers might simply express their intentions—like planning a family holiday—and rely on AI-driven networks to autonomously execute tasks, from booking arrangements to prioritising network resources for complex undertakings such as drone data transfers. This approach, dubbed "Intent-Net", promises hyper-personalisation and real-time orchestration of digital services.

The rapid penetration of AI chips in mobile devices also signals the mainstreaming of agentic AI.
Angeri stated that while only about 4 to 5 per cent of smartphones had AI chips in 2023, this figure has grown to roughly 16 per cent and is expected to reach 50 per cent by 2028, indicating widespread adoption of AI-driven mobile services.

However, industry experts caution that agentic AI comes with considerable technical and operational challenges. Yuriy Yuzifovich, Chief Technology Officer for AI at GlobalLogic, described how agentic AI systems, driven by large language models, differ fundamentally from classical automated systems. "Their stochastic behaviour, computational irreducibility, and lack of separation between code and data create unique obstacles that make designing resilient AI agents uniquely challenging," he said. Unlike traditional control systems where outcomes can be rigorously modelled and predicted, AI agents require full execution to determine behaviour, often leading to unpredictable outputs.

Yuzifovich recommended that enterprises adopt several key strategies to address these challenges: using domain-specific languages to ensure reliable outputs, combining deterministic classical AI with generative approaches, ensuring human oversight for critical decisions, and designing with modularity and extensive observability for traceability and compliance. "By understanding the limitations and potentials of each approach, we can design agentic systems that are not only powerful but also safe, reliable, and aligned with human values," he added.

As businesses across sectors embrace agentic AI, the coming years will test the ability of enterprises and technology vendors to balance innovation with trust, resilience, and security. With rapid advancements in AI agent deployment, the industry faces both the opportunity to transform digital operations and the imperative to manage the associated risks responsibly.
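Yuzifovich's recommendations are architectural patterns rather than a specific product, so the sketch below is only one way to read them. It is a hypothetical, heavily simplified example (the model call is stubbed and every name is invented) of two of the ideas he lists: validating a generative model's stochastic output against a narrow, deterministic schema before acting on it, and holding critical actions for human approval instead of executing them autonomously.

```python
import json

# Hypothetical sketch of two resilience patterns for an agentic system:
#  1. deterministic validation of stochastic LLM output against a narrow schema
#  2. human-in-the-loop approval for critical actions
# The "LLM" is stubbed; in a real system this would be a model call.

ALLOWED_ACTIONS = {"open_ticket", "close_ticket", "escalate"}
CRITICAL_ACTIONS = {"escalate"}          # always require human sign-off

def fake_llm_plan(prompt: str) -> str:
    """Stand-in for a generative model that returns a JSON plan."""
    return json.dumps({"action": "escalate", "ticket_id": "T-1042"})

def validate_plan(raw: str) -> dict:
    """Deterministic gate: reject anything that is not a known, well-formed action."""
    try:
        plan = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"unparseable plan: {exc}")
    if plan.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {plan.get('action')!r}")
    if not isinstance(plan.get("ticket_id"), str):
        raise ValueError("missing ticket_id")
    return plan

def execute(plan: dict, human_approves) -> str:
    """Run the plan, but route critical actions through a human approval callback."""
    if plan["action"] in CRITICAL_ACTIONS and not human_approves(plan):
        return f"held for review: {plan}"
    return f"executed: {plan}"

if __name__ == "__main__":
    plan = validate_plan(fake_llm_plan("handle ticket T-1042"))
    # In production the approval callback would notify a reviewer; here it declines.
    print(execute(plan, human_approves=lambda p: False))
```

The same skeleton also gives the observability he mentions almost for free, since every rejected plan and every held action is a discrete, loggable event rather than an opaque model response.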


Techday NZ
09-06-2025
- Business
- Techday NZ
AI agents to play key role in ANZ IT security, report finds
The latest Salesforce State of IT report indicates that IT security leaders in Australia and New Zealand anticipate AI agents will address at least one of their organisation's digital security issues. The survey reveals that all respondents see a role for AI agents in assisting with IT security, with 36 per cent of IT security teams in the region currently using such agents in their daily operations. The proportion of security teams using AI agents is expected to grow rapidly, with predictions it will reach 68 per cent within the next two years.

According to the findings, 71 per cent of organisations in Australia and New Zealand are planning to increase their security budgets during the year ahead, just below the global average of 75 per cent. AI agents were highlighted as being capable of supporting various tasks, including faster threat detection, more efficient investigations, and comprehensive auditing of AI model performance.

The global survey, which included more than 2,000 enterprise IT security leaders—with 100 respondents from Australia and New Zealand—also pointed to several challenges associated with adopting AI in security practices. Despite widespread recognition that practices need to evolve, with 75 per cent of respondents acknowledging the need for transformation, 58 per cent expressed concern that their organisations' data infrastructure was not yet ready to maximise the potential of AI agents.

"Trusted AI agents are built on trusted data," said Alice Steinglass, EVP & GM, Salesforce Platform, Integration, and Automation. "IT security teams that prioritise data governance will be able to augment their security capabilities with agents while protecting data and staying compliant."

The report noted that while both IT professionals and malicious actors are integrating AI into their operations, autonomous AI agents offer an opportunity for security teams to reduce manual workloads and focus on more complex challenges. However, deploying agentic AI successfully requires a strong foundation in data infrastructure and governance.

In addition to familiar threats such as cloud security vulnerabilities, malware, and phishing, the report found that IT leaders now also rank data poisoning within their top three concerns. Data poisoning involves the manipulation of AI training data sets by malicious actors. This concern is cited alongside cloud security threats and insider or internal threats.
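Data poisoning, as the report defines it, targets the training pipeline rather than a deployed model, so typical countermeasures are pipeline controls such as provenance checks and input screening before records reach a training set. The sketch below is a hypothetical, greatly simplified pre-training filter (the sources, labels, and thresholds are invented for illustration); real defences would layer provenance signatures, statistical outlier detection, and post-training model audits on top of checks like these.

```python
# Hypothetical pre-training screen for a labelled text dataset.
# Illustrative only: real poisoning defences combine provenance tracking,
# statistical outlier detection, and post-training model audits.

TRUSTED_SOURCES = {"internal_crm", "vetted_vendor"}
ALLOWED_LABELS = {"benign", "phishing", "malware"}

def screen_record(record: dict) -> list:
    """Return a list of reasons to quarantine the record (empty = accept)."""
    reasons = []
    if record.get("source") not in TRUSTED_SOURCES:
        reasons.append("untrusted source")
    if record.get("label") not in ALLOWED_LABELS:
        reasons.append("unknown label")
    text = record.get("text", "")
    if not (10 <= len(text) <= 5000):          # crude length sanity check
        reasons.append("text length out of expected range")
    return reasons

def partition(records: list) -> tuple[list, list]:
    """Split incoming records into (accepted, quarantined_with_reasons)."""
    accepted, quarantined = [], []
    for rec in records:
        reasons = screen_record(rec)
        if reasons:
            quarantined.append((rec, reasons))
        else:
            accepted.append(rec)
    return accepted, quarantined

if __name__ == "__main__":
    batch = [
        {"source": "internal_crm", "label": "phishing", "text": "x" * 50},
        {"source": "pastebin_scrape", "label": "benign", "text": "short"},
    ]
    ok, held = partition(batch)
    print(f"accepted {len(ok)}, quarantined {len(held)}")
    for rec, reasons in held:
        print("quarantined:", rec["source"], "->", reasons)
```

Keeping quarantined records and their reasons, rather than silently dropping them, is what makes such a filter auditable when a poisoning attempt is later suspected.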


Techday NZ
05-06-2025
- Business
- Techday NZ
AI agent adoption rises among IT security leaders in ANZ
IT security leaders in Australia and New Zealand see considerable promise in adopting AI agents to address key security concerns, according to results from the Salesforce State of IT report. The survey of over 2,000 enterprise IT security leaders globally, including 100 from Australia and New Zealand (ANZ), found that every respondent identified at least one security issue they believe could be improved with the use of AI agents.

According to the report, 36 per cent of IT security teams in ANZ have already integrated AI agents into their daily operations. This figure is expected to nearly double to 68 per cent within the next two years, signalling a significant upward trend in AI agent deployment across the region.

While optimism around the benefits of AI agents is evident, the report also highlights several challenges facing the region's IT security leaders. Notably, 58 per cent of ANZ respondents expressed concerns that their data foundations are not robust enough to fully realise the benefits of agentic AI. An equal proportion are not fully confident that appropriate safeguards and guardrails are in place for the safe deployment of these agents.

Despite these concerns, there is momentum towards greater adoption. Security leaders anticipate a range of benefits from AI agents, from enhancing threat detection capabilities to providing more robust auditing of AI model performance.

Alice Steinglass, Executive Vice President and General Manager, Salesforce Platform, Integration, and Automation, said: "Trusted AI agents are built on trusted data. IT security teams that prioritise data governance will be able to augment their security capabilities with agents while protecting data and staying compliant."

The survey also indicates that up to 75 per cent of IT security leaders in Australia and New Zealand acknowledge that their current security practices require transformation. While 74 per cent see AI agents as offering opportunities to improve compliance, such as with privacy laws, nearly 83 per cent also identified ongoing compliance challenges as a key concern.

Cloud security threats, insider risks, and data poisoning—where malicious actors compromise AI training datasets—were listed among the most pressing risks for IT security leaders. In light of these evolving threats, 71 per cent of ANZ organisations expect to increase their security budgets in the coming year. This is only slightly below the global average of 75 per cent, reflecting a broad trend towards higher investment in IT security resources.

Complex and changing regulatory environments continue to pose challenges for IT teams aiming to deploy AI agents. Only 48 per cent of ANZ IT security leaders said they are fully confident that they can deploy AI agents in ways that are compliant with relevant regulations and industry standards. Furthermore, 85 per cent of organisations have not yet fully automated their compliance processes, leaving room for error and inefficiency.

There are signs of readiness and confidence among ANZ security leaders when it comes to their security and compliance practices. Sixty-one per cent believe their organisations are prepared for the development and implementation of AI agents. This figure is higher than both the Asia-Pacific average of 57 per cent and the global average of 47 per cent, indicating the region's relatively strong position in preparing for the AI era.

Maintaining effective data governance is seen as crucial to enabling the successful adoption of AI agents. Nevertheless, more than half of IT security leaders in ANZ are not confident that their organisations have adequate data quality or the right infrastructure required for AI deployment. Globally, CIOs report that budgets for data infrastructure and management are four times greater than those for AI itself, suggesting a focus on laying solid foundations before wider adoption of AI tools.

Building trust remains central to the deployment of AI within enterprises. Recent research indicates a decline in general consumer trust, with three-quarters of Australian consumers stating they trust companies less than a year ago. Additionally, 69 per cent believe advances in AI make trust increasingly important. Among ANZ security leaders, 59 per cent have not fully established ethical guidelines for AI use, 68 per cent lack complete confidence in the accuracy and explainability of AI outputs, and 58 per cent do not provide full transparency in the use of customer data for AI purposes.

Arizona State University is among the first higher education institutions to implement Salesforce's Agentforce digital labour platform, using trusted autonomous AI agents in operational workflows. The university has placed particular attention on data relevancy as it broadens its AI initiatives, and has adopted Salesforce-acquired backup, recovery, and archiving solutions to address its data management, compliance, and innovation needs.

The findings of the Salesforce State of IT report are based on a double-anonymous survey conducted between December 2024 and February 2025 across a broad range of countries including Australia, New Zealand, the United Kingdom, the United States, and others. One hundred respondents represented Australia and New Zealand.