
Latest news with #A2A

Line Up And Identify Yourselves: AI Agents Get Organized With NANDA

Forbes

08-07-2025

  • Business
  • Forbes


Getting in on the ground floor of a big tech transformation means covering some of the previously unknown innovations that are not yet household names, but will be in the future. Case in point: we're now in the age of agentic AI. People are hearing more about AI agents, but not many of them know about something called NANDA. In fact, it's hard to Google this and get accurate results; you get a bunch of links to a nursing organization. The NANDA that's going to power the future of the global Internet actually came out of MIT, and it's mostly only a known quantity (if you'll pardon the nod to quantum) among data scientists and people in similar roles. But it's probably going to be a major influence on our technology in just a few years.

What is NANDA?

NANDA is essentially a system to provide a full platform for agent interactions. It's a protocol for a new AI Internet that's modified and evolved to handle the capabilities of LLMs. One of the most prominent writers on NANDA, Rahul Shah, describes it as a 'full stack protocol' where agents have cryptographic identities – we'll get back to that in a minute.

'NANDA does not replace A2A or MCP,' Shah writes, citing the Agent-to-Agent protocol and the Model Context Protocol that have arisen to handle what you might call the 'AI API race.' 'Instead, it provides the naming, verification, and economic backbone that allows agents to function in real-world, distributed environments — securely, scalably, and autonomously. The goal is to enable a self-sustaining ecosystem, where useful agents are rewarded and trusted — while spammy or malicious agents can be excluded based on cryptographic audit trails.'

In terms of platform features, there's an agent registry, and the system uses dynamic resolution logic to provide routing for agent transactions. There's also auditing and distributed ledger technology, where NANDA uses zero-knowledge proofs to verify what agents do. But all of this is a rather high-flown way to describe what NANDA is. Think of it in a different way that's more intuitive and has to do with how AI agents resemble people.

AI Agents Line Up to be Counted

In so many ways, the idea of the AI agent is like a digital twin of a person – in other words, we view these agents as having the cognitive abilities that individual people have. We can even give them names and avatars, and make them seem very human indeed. They can pass all kinds of Turing tests. They are discrete entities. They're like people.

If you take that metaphor further, NANDA is a protocol that's sort of like an organizational system for people. At a company, you have an org chart. If you're choosing teams for softball, you have a roster or a list of names. A teacher in a classroom has some kind of document to identify each student. This is the kind of thing that NANDA develops and orchestrates. It's a system for these AI agents to be known and understood – in effect, you're asking: 'Who are they? And what do they do?' All of this takes place in the context of multi-agent systems where AI agents are working together to create solutions.

More on NANDA

I sat through a panel on AI at IIA, where some of the foremost people in this field talked about NANDA and everything around it. My colleague Ramesh Raskar characterized this as using the 'building blocks' for new agentic systems.
Investor Dave Blundin mentioned a 'litany of useful functions' and a need for a system of micropayments for services. 'When this happened on the internet, nobody could figure out the revenue model, and then it all moved to ad revenue, because it's just: 'throw some banner ads on it, and throw it out there,'' he said. 'That's not going to work with AI agents. You don't want these things marketing (to people).'

Aditya Challapally mentioned three big risks inherent in building these systems: trust, culture and orchestration. 'When we say culture, we mean things like: 'what are the societal standards for how an agent can interact with you?' (for example) can an agent DM you on LinkedIn, on behalf of another person, or do they have to say they're an agent, or something like this, … establishing that sense of culture. And then the third piece of this is orchestration, which is … how do agents talk to each other from a more (organized) protocol setup?'

Panelist Anil Sharma spoke to a kind of wish list for the new protocol. 'I would like to see application sustainability,' he said. 'I would like to see this in social impact, in areas such as agriculture and other places … because this is where the data and value is locked across ecosystems, beyond enterprises into non-profit and government (systems).'

And panelist Anna Kazlauskas talked about the necessity of data ownership. 'You can imagine, a couple of years out, you've got an AI agent, I picture 10 AI agents, that can go and autonomously do work, and maybe even earn on (a user's) behalf and collaborate with others,' she said. 'And I think one of the risks is that there's a single platform (for) all of those agents, right? And so I think especially as your AI agents start to produce real economic value, it's really important that you actually have kind of sovereignty and true ownership over that.'

Blundin, in talking about the 'unbundling' of services, mentioned a related concern: that AI, enabled by a protocol like NANDA, could build services more efficiently than companies, keeping companies on their toes. That's a bit more of a window into how NANDA will work, and what it is supposed to do.

Coming Soon

So, although you haven't heard much about NANDA yet, you're going to. I thought it was helpful to provide that metaphor to show the various ways in which new protocols will treat AI agents like people – giving them names, identities, jobs, roles, and more, as they collaborate and work together, hopefully on our behalf, and to our benefit.
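To make the registry, resolution and cryptographic-identity ideas described above a little more concrete, here is a minimal sketch in Python. The class names, fields and registry behaviour are invented for illustration and are not NANDA's actual schema or API; the sketch assumes the third-party cryptography package for Ed25519 signatures.

```python
# A minimal, hypothetical sketch of the "agent registry + cryptographic identity"
# idea described in the article. Names and fields are illustrative only; they are
# NOT NANDA's actual schema or API. Requires the third-party `cryptography` package.
from dataclasses import dataclass
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

@dataclass
class AgentRecord:
    """One entry in a toy agent registry: a name, a role, and a public key."""
    name: str
    role: str
    public_key: Ed25519PublicKey

class ToyAgentRegistry:
    """Maps agent names to records so other agents can look them up ('resolution')."""
    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.name] = record

    def resolve(self, name: str) -> AgentRecord:
        return self._records[name]

    def verify_message(self, name: str, message: bytes, signature: bytes) -> bool:
        """Check that a message really came from the registered agent."""
        try:
            self.resolve(name).public_key.verify(signature, message)
            return True
        except Exception:
            return False

# Usage: an agent generates a keypair (its "identity"), registers, and signs its work.
key = Ed25519PrivateKey.generate()
registry = ToyAgentRegistry()
registry.register(AgentRecord("invoice-agent", "accounting", key.public_key()))

msg = b"processed invoice #42"
sig = key.sign(msg)
print(registry.verify_message("invoice-agent", msg, sig))        # True
print(registry.verify_message("invoice-agent", b"tampered", sig))  # False
```

In a real system the registry would be distributed and the audit trail anchored in a ledger with zero-knowledge proofs, as the article describes; this toy version only shows the name-to-key lookup and signature check.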

Okta Tightens Agent Identities For Machine-To-Machine Connections

Forbes

02-07-2025

  • Business
  • Forbes


When will full autonomy happen? It's the question tabled at every technology vendor meeting these days. The IT industry is frantically building agentic AI services and everybody wants a seat at the table. With CEOs at companies including Salesforce and Microsoft claiming to hand over somewhere between 30 and 50 percent of work to AI services, the degree to which agents now start talking to agents is of great importance.

Human, Out Of The Loop?

Until recently, technology advocates and evangelists were fond of mentioning the human-in-the-loop (and human handoff) element when talking about emerging AI services. It was a sort of appropriate lip service that needed mentioning, just to calm the people who worry about the robots taking over. A lot of that has changed, and Google underlined the trend this April with the introduction of the A2A agentic communications standard.

'[We have launched] a new, open protocol called Agent2Agent, with support and contributions from more than 50 technology partners. The A2A protocol will allow AI agents to communicate with each other, securely exchange information and coordinate actions on top of various enterprise platforms or applications. We believe the A2A framework will add significant value for customers, whose AI agents will now be able to work across their entire enterprise application estates,' noted the Google for Developers blog.

But where are humans in the loop now? Speaking at a press gathering in London this week, Nutanix CEO Rajiv Ramaswami acknowledged the forthcoming inevitability of agentic intercommunication and said that his firm is working to provide as broad a scope of cloud infrastructure as possible to enable the new (and next) age of AI with simpler (if not pleasingly invisible) cloud services. Acknowledging that the infrastructure comes first, and that agentic identity management comes as a subsequent tier (for which Nutanix will look to collaborate with its partner ecosystem, which has swelled in the wake of VMware's move to Broadcom), Ramaswami called for an understanding of how, when, why and where we weave this new fabric of intelligence.

Identity Steps Up

If it is time for hardcore identity players to come forward, then identity platform company Okta would arguably rank in the 'usual suspect' lineup in this space. This summer, the company introduced Cross App Access, a new protocol to help secure AI agents. As an extension of the open OAuth standard (technology that provides authorization controls to grant third-party applications access to other resources), Okta says its new services bring control to both agent-driven and app-to-app interactions. In short, it allows developers and data scientists to decide what apps are connecting to what, and what information AI agents can actually access.

According to Arnab Bose, chief product officer for the Okta platform, more AI tools are using technologies like the Model Context Protocol and A2A to connect their AI models to relevant data and applications within the enterprise. However, for connections to be established between agents and apps themselves (think of Google Drive or Slack as examples of applications an agent might want access to), users need to manually log in and consent to grant the agent access to each integration.
Amplified Agentic Explosion

Bose says that despite this, app-to-app connections occur without oversight, with IT and security teams having to rely on manual and inconsistent processes to gain visibility. This creates a big blind spot in enterprise security and expands an increasingly unmanaged perimeter. This challenge, he says, will be amplified with the explosion of AI agents, which are introducing new, non-deterministic access patterns, crossing system boundaries, triggering actions on their own and interacting with sensitive data.

The position at Okta is that 'today's security controls aren't equipped to handle their autonomy, scale and unpredictability' and that existing identity standards are not designed for securing an interconnected web of services and applications in the enterprise. The company says that while MCP improves transparency and communication between agents, it could still benefit from additional identity access management features.

'[While] we're actively working with the MCP and A2A communities to improve AI agents' functionality, their increased access to data and the explosion of app-to-app connections will create new identity security challenges,' said Bose. 'With Cross App Access, Okta brings oversight and control to how agents interact across the enterprise. Since protocols are only as powerful as the ecosystem that supports them, we're also committed to collaborating across the software industry to help provide agents with secure, standardized access to all apps.'

Where Agents Need Tightening

The question now, presumably, is where exactly should we tighten up identity controls for agentic AI services first? The password login box has been a bull's-eye for attackers for a long time. Why? Because it's the primary path to sensitive data. Although most people now realize that 'password123' is a bad idea, organizations will now need to gain a new and fundamental understanding of their sprawling human and machine identities.

'Now, take that existing chaos and multiply it by a million. Picture a world where millions of AI agents, autonomous pieces of code acting on behalf of both users and other machines, are interacting with your systems. Suddenly, that messy frontline looks like a wide-open battlefield. We could be in for a world of trouble,' said Shiv Ramji, president of Auth0 at Okta.

According to PwC's AI Agent Survey, nearly 80% of senior executives said their companies are already adopting AI agents. However, by moving quickly from prototypes to production without adequate governance and access controls, there is a real potential for agentic AI 'shadow IT' and the introduction of systemic risk. The bottom line for developers is all about keeping the IT stack secure, enabling new agent-to-agent interaction to happen and still keeping the existing operational lights on. But this time, it's not just identity. It extends beyond access to who has permissions to specific resources, such as databases, documents, internal sites, wiki pages, other tools/systems, and other agents.

Agentic Weakness Factors

Ramji asks us to consider the following risk factors: 'So, how do we tackle these systemic risks at scale? This isn't just about individual application hardening; it's about establishing a standardized, secure way for agents to function in an interconnected world. Open protocols, such as MCP and Google's A2A, will be key to this, enabling interoperability and preventing vendor lock-in.
While MCP focuses on an agent's interaction with tools, Google's A2A protocol addresses the equally crucial problem of how AI agents communicate and collaborate with each other. In a complex enterprise environment, you won't have just one agent; you'll have an ecosystem of specialized agents,' said Ramji. 'This is also why you need to build identity security into your AI agents from the ground up.'

The Way Forward

The safest way forward in this space appears to include several factors, such as the need to architect bespoke login flows for AI agents. This means dedicated authentication mechanisms designed for machine-to-machine interaction. Okta's Ramji concludes his commentary by saying that organizations need to use OAuth 2.0 for secure tool integrations: when AI agents integrate with external services like Gmail or Slack, there is no need to reinvent the wheel; established, secure authorization frameworks like OAuth 2.0 can be leaned on today. Organizations should also still design for human-in-the-loop approvals, especially for critical or sensitive actions, and bake in a mechanism for human oversight.

While Okta's key competitor list includes Microsoft Entra ID, Cisco (for Duo Security), ForgeRock, OneLogin, CyberArk, IBM for its Security Verify layer and all three major cloud hyperscalers from AWS to Google Cloud to Microsoft Azure, most of the vendors in this space would largely concur with the general subject matter discussed here. It's all about human management in the first instance, and that's why documentation is fundamental in any scenario like this where code annotations have to exist to prove what connects to what. Humans will still be in the loop, even when that loop is humans building an agent-to-agent loop, and that's a large part of how we keep this tier of software application development working properly.
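Ramji's suggestion to lean on OAuth 2.0 for agent tool integrations maps onto the standard client-credentials flow. The sketch below is a minimal, hypothetical illustration of that flow in Python; the issuer URL, client ID, secret and scope are placeholders rather than real Okta or Cross App Access values.

```python
# A minimal sketch of the OAuth 2.0 client-credentials flow recommended in the
# article for machine-to-machine agent integrations. The endpoint URL, client
# ID/secret and scope below are placeholders, not real values from any provider.
# Requires the third-party `requests` package.
import requests

TOKEN_URL = "https://example.okta.com/oauth2/default/v1/token"  # placeholder issuer
CLIENT_ID = "agent-client-id"          # hypothetical machine identity for the agent
CLIENT_SECRET = "agent-client-secret"  # store in a secrets manager, never in code
SCOPE = "files.read"                   # least-privilege scope the agent actually needs

def get_agent_token() -> str:
    """Exchange the agent's client credentials for a short-lived access token."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": SCOPE},
        auth=(CLIENT_ID, CLIENT_SECRET),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def call_tool_api(url: str) -> dict:
    """Call a downstream tool (e.g. a document store) with the agent's token."""
    token = get_agent_token()
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Usage (hypothetical resource server):
# docs = call_tool_api("https://tools.example.com/api/documents")
```

The design point is that the agent gets its own scoped, short-lived credential instead of borrowing a human user's session, which is what makes the access auditable and revocable.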

Google Entrusts A2A AI Framework to Linux Foundation

Arabian Post

30-06-2025

  • Business
  • Arabian Post


Google has transferred ownership of its Agent2Agent protocol, including its specification, developer SDKs and tooling, to the Linux Foundation, ushering in a new era of open, vendor-neutral collaboration on AI agent interoperability. Announced on 23 June at the Open Source Summit North America, the move positions more than 100 organisations, including AWS, Cisco, Microsoft, Salesforce, SAP and ServiceNow, to jointly steward and evolve the protocol under a neutral governance framework.

A2A, first introduced by Google in April 2025, establishes an open standard enabling autonomous AI agents to discover peers, exchange secure information and coordinate multi-step tasks across different platforms. Firms such as AWS and Cisco have already integrated or plan to integrate A2A into key components like directory services, identity, messaging and observability frameworks.

Google's motivation for migrating A2A to the Linux Foundation stems from concerns over fragmentation and vendor lock-in in enterprise AI ecosystems. A neutral, open-governance structure, the announcement explains, will accelerate adoption, encourage wider contributions and maintain long-term stewardship of the protocol.

Linux Foundation Executive Director Jim Zemlin emphasised the importance of neutrality, stating that hosting A2A ensures the long-term collaboration and unbiased governance necessary to unlock agent-to-agent productivity. Google Cloud's Rao Surapaneni further described A2A as a 'vital open standard' that enables interoperable AI frameworks across platforms.

The initiative has drawn support from major tech providers. AWS's Swami Sivasubramanian pledged contributions to the protocol and its agentic ecosystem, while Cisco's Vijoy Pandey underlined A2A's role in building an 'interoperable Internet of Agents' via integrations with open-source components. Microsoft, Salesforce, SAP and ServiceNow echoed these endorsements, with commitments to incorporate the protocol within their enterprise-grade AI offerings.

The migration also signals a broader effort within the AI community to embrace open standards. While organisations such as Anthropic, with its Model Context Protocol, focus on connecting agents to tools and data, A2A complements this by enabling agent-to-agent coordination. Mike Smith of Google noted at the summit that the protocol has been revised to allow flexible extensions and improved agent identity frameworks.

Analysts predict that establishing robust standards for AI agent interoperability could pave the way for more complex, multi-agent workflows in enterprise systems. A report from Futurum Group forecasts that agent-driven automation could generate around $6 trillion of economic value by 2028, though experts caution that governance and security frameworks must evolve in parallel.

Academic scrutiny, however, highlights lingering security and privacy concerns. A May 2025 paper on arXiv emphasised the need for enhancements such as short-lived tokens, consent-driven exchanges and tighter control mechanisms to safeguard sensitive data flows between agents. Another study from April provided a comprehensive analysis of secure implementations, recommending proactive threat modelling and structured identity governance to fortify A2A deployments.

Under the Linux Foundation, A2A will benefit from established intellectual property frameworks, transparent technical working groups and community-driven decision processes, according to the Linux Foundation's press materials.
The governance roadmap includes exploring standards around trustworthy identity, delegated authority, policy controls and reputational attributes that could underpin a mature, interoperable ecosystem. The protocol's developer toolkit, including Python and TypeScript implementations, has already been shared via GitHub to accelerate developer engagement. The open-source community is invited to contribute, with growing participation from systems integrators, enterprise vendors and independent developers.

Enterprise adoption is expected to advance steadily as major cloud and systems providers thread A2A into their AI platforms. Use cases include orchestrating task-specific agents (for example, a procurement assistant triggering financial audit agents, or compliance bots coordinating with legal review agents) without proprietary lock-in.

Nonetheless, challenges remain. Multi-stakeholder governance could slow decision cycles, and competing priorities may hamper swift feature roll-out. Yet proponents argue that the foundational benefits of open, interoperable agent ecosystems outweigh such trade-offs in the long term. The real test will come in adoption: how effectively Linux Foundation-hosted governance can shepherd A2A from ambitious standard to enterprise-grade infrastructure underpinning next-gen AI workflows.
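For developers curious what agent-to-agent discovery can look like in practice, here is a minimal sketch in Python modelled on the A2A idea of a published "agent card". The well-known path, field names and peer URLs below are illustrative assumptions; the authoritative schema lives in the A2A specification and the Python and TypeScript SDKs on GitHub.

```python
# A minimal sketch of how one agent might discover another under an A2A-style
# protocol: fetch the peer's public "agent card" and inspect what it can do.
# The well-known path and field names are illustrative, not the official schema.
# Requires the third-party `requests` package.
import requests

def fetch_agent_card(base_url: str) -> dict:
    """Retrieve a peer agent's self-description from a well-known location."""
    resp = requests.get(f"{base_url}/.well-known/agent.json", timeout=10)
    resp.raise_for_status()
    return resp.json()

def pick_agent_for(skill: str, candidates: list[str]) -> str | None:
    """Very naive 'discovery': return the first peer advertising the needed skill."""
    for base_url in candidates:
        card = fetch_agent_card(base_url)
        skills = {s.get("id") for s in card.get("skills", [])}
        if skill in skills:
            return base_url
    return None

# Usage (hypothetical peers, matching the procurement/audit example above):
# peer = pick_agent_for(
#     "financial-audit",
#     ["https://agents.example.com/procurement", "https://agents.example.com/audit"],
# )
```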

Mastercard unveils UK A2A instant payments sandbox

Yahoo

27-06-2025

  • Business
  • Yahoo


Mastercard is set to provide UK banks and financial institutions with a groundbreaking platform to innovate and test account-to-account (A2A) instant payments technology. Later this year, the company will open access to its fifth-generation A2A instant payments sandbox, designed to foster collaboration and modernise the UK's payment ecosystem.

The sandbox environment will enable experimentation with new payment flows, including retail and digital assets, across various use cases. It aims to support the development of advanced services such as the '5-leg credit transfer' with instant payment confirmation, enhancing merchant and consumer payment options.

Adhering to the international ISO 20022 standard, the sandbox will offer significant improvements in transaction data richness. This advancement is expected to boost fraud detection capabilities and pave the way for future innovations in the payment sector.

The UK government's National Payments Vision (NPV), published at the end of last year, sets ambitious goals for the payment sector's contribution to economic growth. Mastercard's A2A sandbox aligns with this vision, providing a platform for banks and fintechs to prepare for the next phase of the UK's payment evolution. An EY report from March 2025 highlighted the potential for a £9bn annual uplift to the UK's GDP through the modernisation of A2A infrastructure. The sandbox represents a critical step in unlocking this growth, serving as a testing ground for new infrastructure capabilities and products.

Mastercard's Next Generation A2A Instant Payment platform underpins the sandbox, offering a cloud-ready solution with user-friendly front-end tools and a developer portal, as well as robust back-end functionality with API access for easy integration.

Mastercard Real Time Payments executive vice president Peter Reynolds said: 'Account-to-account payments in the UK are already an enormous part of the UK's financial landscape. The Mastercard A2A instant payments Sandbox opens our innovative technology to our partners to develop and test new potential services. Alongside the UK government's National Payments Vision, we're setting out a bold vision of the future in A2A real-time payments.'

The sandbox was showcased at UK Finance's Digital Innovation Summit on 24 June, emphasising its role in accelerating retail payments. In September 2024, Mastercard updated its Consumer Fraud Risk (CFR) solution to increase the ways it helps protect consumers from real-time payment scams. The AI-powered insights give more UK banks greater visibility into potentially fraudulent transactions so they can stop scams before they take place.

"Mastercard unveils UK A2A instant payments sandbox" was originally created and published by Retail Banker International, a GlobalData owned brand.
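For readers unfamiliar with ISO 20022, the standard defines structured (normally XML) payment messages such as the pacs.008 customer credit transfer. The Python sketch below is purely illustrative: the simplified field names and the idea of submitting such a payload to a sandbox API are assumptions for illustration, not Mastercard's actual message format or developer interface.

```python
# A purely illustrative sketch of the kind of structured data an ISO 20022-style
# instant credit transfer carries. Real ISO 20022 messages are XML (e.g. pacs.008);
# the simplified fields and the sandbox submission step here are hypothetical.
from datetime import datetime, timezone
import json

payment_instruction = {
    "message_type": "pacs.008-style credit transfer (simplified)",
    "instruction_id": "INSTR-0001",
    "end_to_end_id": "E2E-0001",
    "amount": {"currency": "GBP", "value": "125.00"},
    "debtor": {"name": "Alice Ltd", "account": "GB00TEST00000000000001"},
    "creditor": {"name": "Bob plc", "account": "GB00TEST00000000000002"},
    "settlement_method": "instant",
    "created_at": datetime.now(timezone.utc).isoformat(),
}

# A sandbox client might serialise this (to XML in a real ISO 20022 flow) and
# submit it through a developer-portal API for an end-to-end test, then assert
# that an instant payment confirmation comes back.
print(json.dumps(payment_instruction, indent=2))
```

The richer, structured data carried by such messages is what the article credits with improving fraud detection: every party, purpose and identifier travels with the payment rather than being inferred afterwards.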

HCLTech reinforces its leadership in multi-agent AI innovation

Business Standard

26-06-2025

  • Business
  • Business Standard


HCLTech supports rapid adoption of enterprise-grade agentic workflows that unify task orchestration, reasoning and action execution across systems.

Expands partnership with Salesforce to accelerate adoption of agentic AI with new service

HCLTech announced the launch of its orchestration consultation and implementation services designed to accelerate enterprises' adoption of Salesforce Agentforce across various industries, including financial services, healthcare, retail and manufacturing, helping them transform into AI-augmented enterprises. With a consulting-led approach, HCLTech supports rapid adoption of enterprise-grade agentic workflows that unify task orchestration, reasoning and action execution across systems, streamlining marketing, sales, service and operations in highly regulated environments. HCLTech leverages protocols, including the Agent-to-Agent (A2A) protocol and the Model Context Protocol (MCP), to help clients streamline coordination, task tracking and visibility and drive faster and more reliable outcomes.
