
The AI Coordination Revolution You Haven't Heard About Yet
Breakthroughs don't always arrive with a bang or a flash of inspiration.
Sometimes they arrive as quietly made connections, wires carefully laid between sparks we already had.
The AI revolution has given us no shortage of sparks. So many, in fact, that it's easy to lose count. It started with large language models, unlocking the power of general-purpose text generation.
Just as we were getting comfortable with prompting, another shift appeared: AI agents, built not just to respond, but to take action on our behalf.
From customer service to code generation, tools like LangChain and AutoGPT exploded across use cases, with one company after another racing to find uses for their agents.
But we quickly ran into a wall: these agents didn't talk to each other. They acted alone without coordination, like ships passing in the night.
Given how rapidly AI has evolved, it would be a fool's errand to try to predict the future, but one thing is for sure: the next frontier in AI won't be as simple as creating ever-smarter agents.
Instead, it will require unlocking their ability to collaborate, just like humans do.
And that's where agentic orchestration comes in.
Remember how just a few years ago autocomplete felt magical?
Now, instead of guessing the next word, AI agents draft our emails, optimize our workflows, and handle customer engagements for us. This acceleration from prediction to action has been stunning, and it has happened faster than most people get promoted on the job.
Just as foundation model providers have reached the limits of available training data, we're also getting closer to the limits of siloed agents.
Most enterprises now face a growing web of tools, with the average enterprise running more than 360 SaaS applications at any given time. Although each promises to be part of the digital transformation, few work in concert to actually make clients' lives easier.
The same holds for the agentic revolution, where the sum of the parts is becoming a morass that clients struggle to navigate effectively.
Enterprises are building agents on different platforms, from LangChain and Salesforce to Anthropic and OpenAI, and the challenge isn't the quality of each; it's their lack of cohesion and ability to coordinate.
'Many of our clients come with agents,' says Matt Wood, who helps lead PwC's AI efforts.
'They are already seeing results, and next they want them organized. There's so much breadth in how clients work with these. Once we got structure in place, plans, playbooks, the ability to iterate, that's when we saw 10x improvement in how fast and well agents could be built.'
Agentic orchestration is exactly what it sounds like: creating the conditions for many agents to collaborate toward a shared goal. Instead of a single AI doing a single task, it's multiple AI systems coordinating across departments, domains, and decisions.
PwC's new agent OS is designed to do just that, and it is the first mover in a space that is likely to see many entrants in the near future.
Solutions like PwC's agent OS connect agents, regardless of their tech stack, into structured, adaptive workflows. They then let the agents talk to each other, exchanging information about tasks and goals, while humans remain on the loop, managing as necessary.
When done right, orchestration sets boundaries for the agents, allowing them to focus, even specialize, almost as if they had read Adam Smith's description of the pin factory.
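The pattern described above can be made concrete in a few lines. The sketch below is purely illustrative and not any vendor's actual API: each hypothetical agent is constrained to one narrow capability, an orchestrator sequences them toward a shared goal, and a human-on-the-loop hook can halt any step.

```python
# A minimal, illustrative sketch of agentic orchestration. All names are
# hypothetical; real agents would call LLMs or external tools here.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    goal: str
    context: dict = field(default_factory=dict)

# Each "agent" is deliberately constrained to a single capability.
def research_agent(task: Task) -> Task:
    task.context["findings"] = f"summary of sources on: {task.goal}"
    return task

def drafting_agent(task: Task) -> Task:
    task.context["draft"] = f"draft based on {task.context['findings']}"
    return task

def review_agent(task: Task) -> Task:
    task.context["approved_draft"] = task.context["draft"] + " [reviewed]"
    return task

def orchestrate(task: Task,
                pipeline: list[Callable[[Task], Task]],
                human_check: Callable[[str, Task], bool]) -> Task:
    """Run agents in order; a human supervisor may veto any step."""
    for agent in pipeline:
        if not human_check(agent.__name__, task):
            raise RuntimeError(f"Halted by supervisor at {agent.__name__}")
        task = agent(task)
    return task

result = orchestrate(
    Task(goal="Q3 customer-churn report"),
    [research_agent, drafting_agent, review_agent],
    human_check=lambda step, task: True,  # auto-approve for the demo
)
print(result.context["approved_draft"])
```

The design choice worth noticing is that coordination lives in `orchestrate`, not in the agents themselves, which is what lets agents built on different stacks be composed into one workflow.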
'We found that the more constrained the agents are, the better they perform,' says Wood.
'Agents do better when they are asked to go on deliberate expeditions with a clear goal instead of exploration. Once we shifted our mindset from building agents to orchestrating them, we built 250+ across every function. And more importantly, they worked together.'
The results aren't just theoretical.
As Paul Griggs, PwC's US Senior Partner, puts it: 'We're helping organizations scale AI agents with confidence, speed, and purpose, and it's a real game-changer for how our clients' work gets done.'
Clearly there's great power in coordination between agents, and there's space for humans to pitch in too.
Much of the hype around AI is focused on the rather natural, albeit somewhat overblown, fears of AI replacing humans.
For those ready to welcome our robotic overlords and submit to a life of involuntary leisure, Griggs offers reassurance that we humans still have a role to play: 'There's no question that the future of work is in human-AI collaboration.'
To understand what the future of work will look like, it helps to understand the evolving relationship between humans and machines.
Overall, there will be three main modes of interacting with agentic AI. The distinctions between them are more than academic, but getting the nuances across to an audience still coming to terms with AI is its own challenge.
'You see people's eyes glaze over when you use terms such as 'multi-agent orchestration,'' says IBM's Karl Haller.
'But if you show them how their Standard Operating Procedures (SOPs) can be turned into code, and the efficiency improvements that come from that, they get it pretty fast.'
Haller sees the real opportunity of orchestration in codifying business logic.
'The future of AI is where your Visio diagram becomes executable code. It's where the question, answer, and action are all linked. That's when an agent goes from being a tool to being a teammate.'
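Haller's idea of a flowchart becoming executable, with question, answer, and action all linked, can be sketched in miniature. The SOP below, its step names, and its refund policy are all invented for illustration; in a real system each action would invoke an agent or API rather than return a string.

```python
# A hedged illustration of an SOP flowchart expressed as data, then
# executed directly. Every node either asks a question (whose answer
# selects the next node) or performs an action.
SOP = {
    "start": {"question": lambda case: case["amount"] <= 100,
              "yes": "auto_refund", "no": "escalate"},
    "auto_refund": {"action": lambda case: f"refunded ${case['amount']}"},
    "escalate": {"action": lambda case: "routed to human approver"},
}

def run_sop(case: dict, step: str = "start") -> str:
    """Walk the flowchart until an action node is reached."""
    node = SOP[step]
    if "question" in node:
        branch = "yes" if node["question"](case) else "no"
        return run_sop(case, node[branch])
    return node["action"](case)

print(run_sop({"amount": 80}))   # small claims are auto-refunded
print(run_sop({"amount": 500}))  # large claims go to a human
```

Because the procedure is data rather than prose, it can be audited, versioned, and handed to an agent to execute, which is the efficiency gain Haller describes.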
It's no wonder that every major firm seems to be building some flavor of orchestration layer.
But talk to people who track the space, like analyst Usman Sheikh, Managing Director at High Output Ventures, and the full picture remains difficult to see.
'There's a flurry of announcements,' Sheikh tells us.
'But when I ask people at these firms for actual case studies or examples, they are few and far between. The marketing is often ahead of the implementation.'
This insight brings us back to a key point in the AI revolution that many companies are still grappling with: trust.
'We get a lot of questions about agents and whether they can be trusted to do the work right,' says Matt Wood.
'Orchestration with the right guardrails and the knowledge sources connected is what makes the difference. That's what scales, because users can see that it's trustworthy instead of companies just marketing it as such.'
If an AI agent books your flight, handles your taxes, or flags a compliance issue, you need to know it's acting in alignment with enterprise standards.
That's why baking governance and guardrails into any orchestration system will be part and parcel of making it work.
'We made governance a priority with PwC's agent OS,' Wood explains.
'It integrates with our risk frameworks, ensuring oversight without friction, and we've opened up the black box of AI as much as possible. Non-technical teams can use drag-and-drop interfaces, and agents can be edited, updated, and monitored without a PhD.'
A single agent is a tool.
A fleet of agents, orchestrated with precision and aligned to outcomes, that's infrastructure.
And for organizations ready to move beyond the hype and into real AI transformation, the future is already arriving.
It's agentic. It's orchestrated.
And it's ready to push us humans consistently ahead of the curve.
