Latest news with #OAuth


Techday NZ
a day ago
- Business
- Techday NZ
Most fintechs fail API security, risking sensitive payment data
New research conducted by Raidiam highlights significant weaknesses in API security across fintech companies, SaaS platforms, payments firms, and other enterprises operating outside regulated environments such as Open Banking. The report, which assessed security practices at 68 organisations, reveals that 84% remain vulnerable due to insufficient API protections, even when dealing with sensitive or high-value data.

Widespread vulnerabilities
The research indicates that 85% of the surveyed organisations handle either payment data or special category personal data, yet only one met the benchmark for modern, cryptographic API protection. The study found that outdated or insufficient controls, such as the use of static API keys and basic OAuth secrets, prevail among most firms, leaving them open to exploitation. "We've all read the recent headlines; API security should not be an afterthought. The gap between the sensitivity of data and the strength of controls is a board-level risk – not just a technical issue," said David Oppenheim, Head of Enterprise Strategy at Raidiam. Of the organisations surveyed, 57 out of 68 use bare API keys or basic OAuth credentials, mechanisms with well-known security weaknesses. Fewer than half conduct regular API-specific penetration testing or runtime anomaly monitoring, measures deemed essential for identifying and addressing potential attack vectors in real time.

Real-world consequences
The report points to the 2023 Dell partner API breach as evidence that attackers are already actively exploiting these weak points in enterprise systems. Such incidents underscore a growing risk for any entity exposing sensitive APIs without robust protective measures in place. According to the report, a Security vs Sensitivity Matrix mapping exercise revealed a severe misalignment between the sensitivity of the data held and the strength of the security controls implemented. This mismatch increases both the likelihood and the potential impact of security incidents. "We found that even firms handling payment and personal data still rely on static API keys and basic secrets. In today's threat landscape, that's the digital equivalent of leaving the vault door open," Oppenheim added. "In regulated environments like Open Banking, stronger controls like mutual TLS and certificate-bound tokens are already standard. Outside those frameworks, there's a gaping hole."

API risk in unregulated environments is becoming a prominent concern in the industry. In early 2025, the Chief Information Security Officer at JPMorgan Chase issued a public warning about rising vulnerabilities linked to third-party platforms, advocating a shift towards prioritising security over rapid development. Gartner statistics cited in the report indicate that API breaches tend to leak ten times more data than traditional attacks. The report states, "This isn't theoretical — attackers are already in."

Recommendations for addressing risk
The report provides a four-step action plan for organisations seeking to bridge the gap between data sensitivity and protection. It recommends elevating API security to a board-level priority, modernising controls through cryptographic methods such as mutual TLS (mTLS) and sender-constrained access tokens, increasing investment in developer awareness and security testing, and working with trusted partners to accelerate adoption of proven standards and infrastructure.
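To give a sense of what "modernising controls through cryptographic methods" can look like in practice, here is a minimal sketch of an OAuth 2.0 client-credentials request over mutual TLS (RFC 8705), the kind of certificate-bound approach the report contrasts with static API keys. It is illustrative only: the token endpoint, client ID, scope and certificate paths are hypothetical placeholders, not anything prescribed by Raidiam.

```python
# Minimal sketch: OAuth 2.0 client-credentials grant over mutual TLS (RFC 8705).
# The token endpoint, certificate paths, client ID and scope are placeholders.
import requests

TOKEN_URL = "https://auth.example-bank.com/oauth2/token"  # hypothetical authorization server
CLIENT_CERT = ("client-cert.pem", "client-key.pem")       # client X.509 certificate + private key
CA_BUNDLE = "ca-bundle.pem"                                # CA(s) trusted for the server certificate

def fetch_certificate_bound_token() -> dict:
    """Request an access token bound to the presented client certificate.

    With RFC 8705, the authorization server embeds a thumbprint of the client
    certificate in the token (cnf.x5t#S256), so a stolen token cannot be
    replayed without the matching private key.
    """
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": "fintech-payments-service",   # placeholder client
            "scope": "payments:read",
        },
        cert=CLIENT_CERT,   # client presents its certificate (mutual TLS)
        verify=CA_BUNDLE,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()      # contains the certificate-bound access_token

if __name__ == "__main__":
    token = fetch_certificate_bound_token()
    print(token.get("token_type"), token.get("expires_in"))
```

The point of the pattern is that possession of the token alone is not enough; the caller must also hold the private key, which is what distinguishes it from a bare API key or shared secret.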
Raidiam's expertise in secure digital data-sharing ecosystems is currently being made available to assist enterprise organisations in bringing API security standards up to date and closing the gaps identified by this research.


Forbes
a day ago
- Business
- Forbes
Okta Tightens Agent Identities For Machine-To-Machine Connections
When will full autonomy happen? It's the question tabled at every technology vendor meeting these days. The IT industry is frantically building agentic AI services and everybody wants a seat at the table. With various CEOs (including those from Salesforce and Microsoft) claiming to now hand over somewhere between 30 and 50 percent of work to AI services, the degree to which agents now start talking to agents is of great importance.

Human, Out Of The Loop?
Until recently, technology advocates and evangelists were fond of mentioning the human-in-the-loop (and human handoff) element when talking about emerging AI services. It was a sort of appropriate lip service that needed mentioning, just to calm the people who worry about the robots taking over. A lot of that has changed, and Google underlined the trend this April with the introduction of the A2A agentic communications standard. "[We have launched] a new, open protocol called Agent2Agent, with support and contributions from more than 50 technology partners. The A2A protocol will allow AI agents to communicate with each other, securely exchange information and coordinate actions on top of various enterprise platforms or applications. We believe the A2A framework will add significant value for customers, whose AI agents will now be able to work across their entire enterprise application estates," noted the Google for Developers blog.

But where are humans in the loop now? Speaking at a press gathering in London this week, Nutanix CEO Rajiv Ramaswami acknowledged the forthcoming inevitability of agentic intercommunication and said that his firm is working to provide as broad a scope of cloud infrastructure as possible to enable the new (and next) age of AI with simpler (if not pleasingly invisible) cloud services. Acknowledging that the infrastructure comes first… and that agentic identity management comes as a subsequent tier (for which Nutanix itself will look to collaborate with its now significantly expanded partner ecosystem, which has swelled in the wake of Broadcom's acquisition of VMware), Ramaswami called for an understanding of how, when, why and where we weave this new fabric of intelligence.

Identity Steps Up
If it is time for hardcore identity players to come forward, then identity platform company Okta would arguably rank in the 'usual suspect' lineup in this space. This summer, the company introduced Cross App Access, a new protocol to help secure AI agents. As an extension of the open standard OAuth (technology that provides authorization controls to grant third-party applications access to other resources), Okta says its new services bring control to both agent-driven and app-to-app interactions. In short, it allows developers and data scientists to decide what apps are connecting to what… and what information AI agents can actually access. According to Arnab Bose, chief product officer for the Okta Platform, more AI tools are using technologies like Model Context Protocol and A2A to connect their AI learning models to relevant data and applications within the enterprise. However, for connections to be established between agents and apps themselves (think of Google Drive or Slack as good examples of applications that an agent might want access to), users need to manually log in and consent to grant the agent access to each integration.
Amplified Agentic Explosion
Bose says that, even so, these app-to-app connections occur without oversight, with IT and security teams having to rely on manual and inconsistent processes to gain visibility. This creates a big blind spot in enterprise security and expands an increasingly unmanaged perimeter. This challenge, he says, will be amplified with the explosion of AI agents, which are introducing new, non-deterministic access patterns, crossing system boundaries, triggering actions on their own and interacting with sensitive data. The position at Okta is that "today's security controls aren't equipped to handle their autonomy, scale and unpredictability" and that existing identity standards are not designed for securing an interconnected web of services and applications in the enterprise. The company says that while MCP improves transparency and communication between agents, it could still benefit from additional identity access management features. "While we're actively working with the MCP and A2A communities to improve AI agents' functionality, their increased access to data and the explosion of app-to-app connections will create new identity security challenges," said Bose. "With Cross App Access, Okta brings oversight and control to how agents interact across the enterprise. Since protocols are only as powerful as the ecosystem that supports them, we're also committed to collaborating across the software industry to help provide agents with secure, standardized access to all apps."

Where Agents Need Tightening
The question now, presumably, is where exactly should we tighten up identity controls for agentic AI services first? The password login box has been a bull's-eye for attackers for a long time. Why? Because it's the primary path to sensitive data. Although most people now realize that "password123" is a bad idea, organizations will now need to gain a new and fundamental understanding of their sprawling human and machine identities. "Now, take that existing chaos and multiply it by a million. Picture a world where millions of AI agents, autonomous pieces of code acting on behalf of both users and other machines, are interacting with your systems. Suddenly, that messy frontline looks like a wide-open battlefield. We could be in for a world of trouble," said Shiv Ramji, president of Auth0 at Okta. According to PwC's AI Agent Survey, nearly 80% of senior executives said their companies are already adopting AI agents. However, by moving quickly from prototypes to production without adequate governance and access controls, there is a real potential for agentic AI 'shadow IT' and the introduction of systemic risk. The bottom line for developers is all about keeping the IT stack secure, enabling new agent-to-agent interaction to happen… and still keeping the existing operational lights on. But this time, it's not just identity. It extends beyond access, to who has permissions to specific resources, such as databases, documents, internal sites, wiki pages, other tools/systems, and other agents.

Agentic Weakness Factors
Ramji asks us to consider the following risk factors: "So, how do we tackle these systemic risks at scale? This isn't just about individual application hardening; it's about establishing a standardized, secure way for agents to function in an interconnected world. Open protocols, such as MCP and Google's A2A, will be key to this, enabling interoperability and preventing vendor lock-in.
While MCP focuses on an agent's interaction with tools, Google's A2A protocol addresses the equally crucial problem of how AI agents communicate and collaborate with each other. In a complex enterprise environment, you won't have just one agent; you'll have an ecosystem of specialized agents," said Ramji. "This is also why you need to build identity security into your AI agents from the ground up."

The Way Forward
The safest way forward in this space appears to include several factors, such as the need to architect bespoke login flows for AI agents. This means dedicated authentication mechanisms designed for machine-to-machine interaction. Okta's Ramji concludes his commentary in this space by saying that organizations need to use OAuth 2.0 for secure tool integrations: when AI agents integrate with external services like Gmail or Slack, there is no need to reinvent the wheel; established, secure authorization frameworks like OAuth 2.0 can be leaned on today. Organizations should also still design for human-in-the-loop approvals and, especially for critical or sensitive actions, bake in a mechanism for human oversight. While Okta's key competitor list includes Microsoft Entra ID, Cisco (for Duo Security), ForgeRock, OneLogin, CyberArk, IBM (for its Security Verify layer) and all three major cloud hyperscalers, from AWS to Google Cloud to Microsoft Azure… most of the vendors in this space would largely concur with the general subject matter discussed here. It's all about human management in the first instance, and that's why documentation is fundamental in any scenario like this, where code annotations have to exist to prove what connects to what. Humans will still be in the loop, even when that loop is humans building an agent-to-agent loop… and that's a large part of how we keep this tier of software application development working properly.
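As a rough illustration of those last recommendations, the sketch below combines a narrowly scoped OAuth 2.0 client-credentials request with a simple human-in-the-loop gate for sensitive actions. The identity provider URL, client credentials, scopes and action names are all hypothetical; this shows the general pattern rather than any specific Okta or Auth0 API.

```python
# Minimal sketch of two of the practices described above, under assumed names:
# (1) an agent obtains a narrowly scoped OAuth 2.0 token for one tool integration,
# (2) sensitive actions are gated behind an explicit human approval step.
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"   # hypothetical identity provider
SENSITIVE_ACTIONS = {"send_email", "delete_file", "transfer_funds"}

def get_tool_token(client_id: str, client_secret: str, scope: str) -> str:
    """Client-credentials grant, requesting only the scope this tool call needs."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": scope},
        auth=(client_id, client_secret),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def human_approves(action: str, detail: str) -> bool:
    """Stand-in for a real approval channel (ticket, chat prompt, admin console)."""
    answer = input(f"Agent wants to '{action}' ({detail}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def run_agent_action(action: str, detail: str) -> None:
    if action in SENSITIVE_ACTIONS and not human_approves(action, detail):
        print(f"Blocked: '{action}' was not approved by a human.")
        return
    token = get_tool_token("agent-client-id", "agent-secret", scope=f"{action}:execute")
    # ... call the downstream tool with the short-lived, action-scoped token ...
    print(f"Executing '{action}' with a scoped token ({token[:8]}…).")

if __name__ == "__main__":
    run_agent_action("send_email", "weekly status report to the team")
```

The design choice mirrored here is the one Ramji describes: lean on an established authorization framework for machine-to-machine access, and keep a human approval path for the actions that matter most.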


Techday NZ
4 days ago
- Business
- Techday NZ
Browser AI agents seen as bigger security risk than employees
SquareX's latest research suggests that Browser AI Agents now pose a greater security risk to organisations than employees. Browser AI Agents are software programs that perform browser-based tasks for users, including booking flights, scheduling meetings, and conducting research. Their usage has seen considerable growth, with a PwC survey indicating that 79% of organisations have already adopted some form of browser agent. These agents offer measurable productivity gains, but SquareX's analysis found that their security awareness is limited compared to that of human employees. Unlike people, Browser AI Agents do not participate in regular security training and lack the ability to detect common warning signs found in malicious websites, such as suspicious URLs or unnecessary permission requests.

The company's research highlights that even fundamental security practices can be missed by Browser AI Agents. For example, while a human might notice and avoid a dubious website or application, agents are more likely to proceed, often exposing sensitive company data. SquareX pointed out the additional challenge that writing prompts to manage security risks for every agent task can undermine productivity gains, and most users are unlikely to have the expertise to do so effectively.

To demonstrate these risks, SquareX conducted an experiment using the widely adopted open-source Browser Use framework. In this scenario, the Browser AI Agent was asked to find and register for a file-sharing tool. During the process, the agent fell victim to an OAuth attack, inadvertently granting a malicious application full access to the user's email account. This occurred despite several signals, such as requests for irrelevant permissions, unfamiliar branding, and suspicious URLs, that would likely have caused a human operator to hesitate. SquareX's team warned that similar scenarios could see agents unknowingly expose sensitive information, such as credit card data during online purchases, or respond to phishing emails with confidential details. The inability of traditional security tools and browsers to distinguish between human and agent actions exacerbates this risk, as malicious instructions can be executed without intervention.

Industry perspective
Vivek Ramachandran, Founder & CEO of SquareX, commented on the findings, explaining the shift in security risk within organisations: "The arrival of Browser AI Agents have dethroned employees as the weakest link within organizations. Optimistically, these agents have the security awareness of an average employee, making them vulnerable to even the most basic attacks, let alone bleeding-edge ones. Critically, these Browser AI Agents are running on behalf of the user, with the same privilege level to access enterprise resources. Until the day browsers develop native guardrails for Browser AI Agents, enterprises must incorporate browser-native solutions like Browser Detection and Response to prevent these agents from being tricked into performing malicious tasks. Eventually, the new generation of identity and access management tools will also have to take into account Browser AI Agent identities to implement granular access controls on agentic workflows." Security professionals are being advised to introduce browser-integrated protections and to treat the actions of Browser AI Agents with the same scrutiny as those of human users.
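One way to picture the kind of guardrail being recommended: before a browser agent follows an OAuth consent link, a policy check could compare the authorization host and the requested scopes against an allowlist and halt for human review on a mismatch. The hosts, scopes and URL below are illustrative assumptions, not part of SquareX's tooling or the Browser Use framework.

```python
# Minimal sketch of a pre-consent guardrail for a browser agent: inspect an OAuth
# authorization URL before following it. Trusted hosts and allowed scopes are
# illustrative placeholders only.
from urllib.parse import urlparse, parse_qs

TRUSTED_AUTH_HOSTS = {"accounts.google.com", "login.microsoftonline.com"}
ALLOWED_SCOPES = {"openid", "email", "profile", "https://www.googleapis.com/auth/drive.file"}

def review_consent_url(url: str) -> tuple[bool, list[str]]:
    """Return (ok, reasons). ok is False if the consent request looks risky."""
    parsed = urlparse(url)
    params = parse_qs(parsed.query)
    reasons = []

    if parsed.hostname not in TRUSTED_AUTH_HOSTS:
        reasons.append(f"untrusted authorization host: {parsed.hostname}")

    requested = set(" ".join(params.get("scope", [])).split())
    excessive = requested - ALLOWED_SCOPES
    if excessive:
        reasons.append(f"scopes beyond the allowlist: {sorted(excessive)}")

    return (not reasons, reasons)

if __name__ == "__main__":
    ok, reasons = review_consent_url(
        "https://evil-login.example/o/oauth2/auth?client_id=x"
        "&scope=openid%20https://mail.google.com/"   # full mailbox access requested
        "&redirect_uri=https://filetool.example/cb&response_type=code"
    )
    print("proceed" if ok else f"halt for human review: {reasons}")
```

In the SquareX experiment described above, both checks would have fired: the authorization host was unfamiliar and the requested permissions were irrelevant to registering for a file-sharing tool.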
Technical implications
With traditional security tools unable to identify whether actions in the browser stem from a human or an AI agent, the potential for undetected compromise rises. The need for browser-native threat detection and response tools, capable of safeguarding both employees and automated agents, is therefore becoming more pressing.

SquareX's findings further suggest that as the use of Browser AI Agents becomes more common, identity and access management systems will need to evolve. These systems must recognise and regulate AI agents to ensure that access privileges and security policies can be applied accurately to all entities operating within an organisation's digital infrastructure. The company recommends that organisations take a proactive approach, reviewing and updating their browser security frameworks in line with these developments. Without new guardrails, the delegation of routine tasks to Browser AI Agents may inadvertently increase the attack surface for cybercriminals targeting enterprises.


Time of India
25-06-2025
- Business
- Time of India
Okta introduces Cross App Access to help secure AI agents in the enterprise
Okta, Inc., the leading independent identity partner, today announced Cross App Access, a new protocol to help secure AI agents. As an extension of OAuth, it brings visibility and control to both agent-driven and app-to-app interactions, allowing IT teams to decide what apps are connecting and what information AI agents can access.

Why it matters
More AI tools are using protocols like Model Context Protocol (MCP) and Agent2Agent (A2A) to connect their AI learning models to relevant data and apps within the enterprise. However, for connections to be established between agents and apps, such as Google Drive or Slack, users need to manually log in and consent to grant the agent access to each integration. These app-to-app connections occur without oversight, with IT and security teams having to rely on manual and inconsistent processes to gain visibility. This creates a big blind spot in enterprise security and expands an increasingly unmanaged perimeter. This challenge will be amplified with the explosion of AI agents, which are introducing new, non-deterministic access patterns, crossing system boundaries, triggering actions on their own, and interacting with sensitive data. Today's security controls aren't equipped to handle their autonomy, scale, and unpredictability. Existing identity standards are not designed for securing an interconnected web of services and applications in the enterprise – and while MCP improves transparency and communication between agents, it doesn't help manage access.

"While we're actively working with the MCP and A2A communities to improve AI agents' functionality, their increased access to data and the explosion of app-to-app connections will create new identity security challenges," said Arnab Bose, Chief Product Officer, Okta Platform at Okta. "With Cross App Access, Okta is excited to bring oversight and control to how agents interact across the enterprise. Since protocols are only as powerful as the ecosystem that supports them, we're also committed to collaborating across the software industry to help provide agents with secure, standardized access to all apps."

What we're introducing - Cross App Access
Okta, working with industry-leading ISVs, is launching Cross App Access to help ISVs deliver secure, enterprise-ready integrations in an AI-powered world. Anticipated to be available for select Okta Platform customers as a feature in Q3 of this year, it will enable ISVs' enterprise customers to better connect their AI tools to other apps and data, deliver more seamless experiences for the end user by removing repetitive authorization consent screens, and manage agent access for better security and compliance. For example, an AI tool may need to access an internal communication app to retrieve information or take action on a user's behalf. Without Cross App Access, the user must log into the AI tool via their company's SSO and then manually approve each integration, logging into and consenting to the internal communication app separately. This process would then need to be repeated for other necessary applications, such as a file storage service or a project management application. Each consent and access is invisible to the enterprise customer. With Cross App Access, the AI tool can instead request access to the internal communication app from Okta, which evaluates the request against enterprise policies and determines whether the tool is authorized to access that specific user's internal communication app data.
If permitted, Okta issues a token to the AI tool, which it presents to the internal communication app for validation. Once validated, the internal communication app provides access, all without additional user interaction and under enterprise-defined security controls. The enterprise has visibility into when the AI tool accesses the internal communication app on behalf of the user.

What challenges does this solve for ISVs?
ISVs face growing pressure to support secure, seamless cross-app experiences for their enterprise customers, but the underlying identity and access flows are often inconsistent, fragmented, and hard to scale. These integrations typically rely on risky token exchanges and user-granted access, leading to token sprawl and visibility gaps. As AI agents begin to autonomously connect across systems, this complexity and risk only increase.

How Cross App Access can help: Cross App Access enables ISVs to deliver secure, enterprise-grade integrations for AI agents and other autonomous systems, such as workflow automation tools. By shifting access control to the identity provider, like Okta, ISVs can reduce security risks, simplify integration complexity, and better support their customers' compliance and governance needs.

What challenges does this solve for enterprises?
Integrating AI tools with existing data and systems presents significant hurdles. Many businesses currently rely on ad hoc methods like long-lived tokens and fragmented access controls, making these integrations inherently risky. AI adoption is being stalled by this lack of visibility and control over how agents access data across apps. Beyond security, the user experience is also impacted when agents can't act seamlessly on behalf of users due to repetitive and outdated authorization flows.

How Cross App Access can help: With Cross App Access, enterprises can enhance security and usability, empowering IT to manage agent access while enabling seamless, low-friction experiences for users. It supports secure interoperability between apps and AI systems, making it easier to adopt innovative ISV solutions without compromising oversight or performance.
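Okta has not published wire-level details in this announcement, so the sketch below approximates the described flow using standard OAuth 2.0 Token Exchange (RFC 8693) vocabulary: the AI tool asks the identity provider for a token targeted at the internal communication app, then presents that token to the app. All endpoints, client credentials, scopes and token values are hypothetical placeholders, and the actual Cross App Access messages may differ.

```python
# Rough sketch of the announced flow, using OAuth 2.0 Token Exchange (RFC 8693)
# vocabulary as a stand-in for the Cross App Access messages. All endpoints,
# client credentials and tokens below are illustrative placeholders.
import requests

IDP_TOKEN_URL = "https://acme.okta.example/oauth2/v1/token"     # hypothetical identity provider
CHAT_APP_API = "https://chat.internal.example/api/messages"     # hypothetical internal app

def exchange_for_app_token(ai_tool_token: str) -> str:
    """AI tool asks the identity provider for a token scoped to the chat app.

    The IdP evaluates enterprise policy (is this tool allowed to reach the chat
    app on behalf of this user?) before issuing anything.
    """
    resp = requests.post(
        IDP_TOKEN_URL,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": ai_tool_token,                       # proves who the tool acts for
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            "audience": "https://chat.internal.example",          # the app it wants to reach
            "scope": "messages:read",
        },
        auth=("ai-tool-client-id", "ai-tool-secret"),              # the tool also authenticates itself
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def read_recent_messages(app_token: str) -> list:
    """Present the issued token to the internal app; the app validates it and responds."""
    resp = requests.get(
        CHAT_APP_API,
        headers={"Authorization": f"Bearer {app_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    app_token = exchange_for_app_token(ai_tool_token="eyJ...tool-token")
    print(len(read_recent_messages(app_token)), "messages retrieved without extra user prompts")
```

The point mirrored from the announcement is that policy evaluation and token issuance happen at the identity provider, giving the enterprise a single place to see and control which agents reach which apps.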


Techday NZ
23-06-2025
- Business
- Techday NZ
Okta launches Cross App Access to boost AI security in firms
Okta has announced Cross App Access, a protocol designed to bring security, visibility, and control to the way AI agents interact with enterprise systems and applications. The protocol, extending the capabilities of OAuth, provides IT teams with oversight over both agent-driven and application-to-application interactions within an organisation. Through Cross App Access, teams can manage which applications are connecting, and the types of information AI agents are permitted to use or access.

Security landscape
The introduction of Cross App Access comes amid increasing enterprise adoption of AI-powered tools, which often use communication protocols such as Model Context Protocol (MCP) and Agent2Agent (A2A) to connect learning models to organisational data and applications. In current practice, establishing these connections typically requires users to grant manual consents and login approvals for each integration, such as linking AI tools to platforms like Google Drive or Slack. These processes frequently occur without clear oversight, leaving IT departments to handle access management through inconsistent manual methods. This situation, according to Okta, presents a security vulnerability that expands as the use of AI agents increases, creating what many describe as an unmanaged perimeter with limited visibility into agent and app activities. Arnab Bose, Chief Product Officer, Okta Platform at Okta, described the changing risk landscape as both a technological and security challenge for organisations adopting AI agents at scale. He stated: "While we're actively working with the MCP and A2A communities to improve AI agents' functionality, their increased access to data and the explosion of app-to-app connections will create new identity security challenges. With Cross App Access, Okta is excited to bring oversight and control to how agents interact across the enterprise. Since protocols are only as powerful as the ecosystem that supports them, we're also committed to collaborating across the software industry to help provide agents with secure, standardized access to all apps."

Technical approach
Cross App Access is aimed at software vendors that support enterprise customers, enabling them to facilitate secure integration between AI tools and other business applications. The protocol is set to become available as a feature for select Okta Platform customers in the third quarter. In the typical workflow described by Okta, an AI tool needing access to an internal communication app would, under existing processes, require the end user to sign in and approve each integration individually. Each instance of authorisation is usually not visible to the IT team, limiting the ability to monitor or control access at the organisational level. With Cross App Access, the workflow changes. The AI tool submits an access request to Okta, which then evaluates the request in line with company policies. If authorised, Okta issues a token to the AI tool, which presents it to the communication app for validation. The process is completed without further user interaction, and all interactions are logged and visible to enterprise IT.

Impact for software vendors
Independent software vendors (ISVs) are under pressure to create secure and seamless cross-application experiences. The complexities associated with current identity and access flows can lead to risks such as token sprawl and inconsistent user authorisations.
These issues are compounded as AI agents increasingly initiate connections across disparate systems autonomously. Okta states that the protocol will address these challenges by shifting access management and control responsibility from individual integrations to centralised identity providers. This could reduce risks and help ISVs meet their customers' compliance requirements.

Use by enterprises
Many organisations currently implement AI integrations through patchwork processes that use long-lived tokens and fragmented control systems, which, Okta notes, are inherently risky and often stall further AI adoption. Without an overarching management system, businesses risk losing visibility into how, when, and why AI agents interact with sensitive data. From the end-user's perspective, repeated authorisation requests and outdated login flows can make adopting new AI-powered applications burdensome and inefficient. Cross App Access aims to address these issues by allowing IT administrators to manage agent access centrally, providing both security improvements and more streamlined user experiences. Companies can then integrate new AI applications with existing business systems while meeting requirements for oversight, compliance, and governance.
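For completeness, here is a hedged sketch of the receiving application's side of the workflow described above: before serving the AI tool's request, the app checks the presented token with the identity provider using OAuth 2.0 Token Introspection (RFC 7662) and logs the access for IT visibility. The endpoints, client credentials and audience value are assumptions for illustration; the actual Cross App Access validation mechanics may differ.

```python
# Minimal sketch of the resource application's side: validate a token presented
# by an AI tool via OAuth 2.0 Token Introspection (RFC 7662). Endpoints, client
# credentials and the expected audience are illustrative placeholders.
import requests

INTROSPECTION_URL = "https://acme.okta.example/oauth2/v1/introspect"  # hypothetical IdP endpoint
EXPECTED_AUDIENCE = "https://chat.internal.example"

def token_is_valid(presented_token: str) -> bool:
    """Ask the identity provider whether the bearer token is active and meant for this app."""
    resp = requests.post(
        INTROSPECTION_URL,
        data={"token": presented_token},
        auth=("chat-app-client-id", "chat-app-secret"),  # the app authenticates itself to the IdP
        timeout=10,
    )
    resp.raise_for_status()
    claims = resp.json()
    # Note: in practice "aud" may also be a list; a real check should handle both forms.
    return bool(claims.get("active")) and claims.get("aud") == EXPECTED_AUDIENCE

def handle_agent_request(presented_token: str) -> str:
    if not token_is_valid(presented_token):
        return "403 Forbidden: token rejected"
    # ... serve the requested data and log the agent access for IT visibility ...
    return "200 OK: agent access granted and logged"

if __name__ == "__main__":
    print(handle_agent_request("eyJ...token-from-ai-tool"))
```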