
Latest news with #StephenKlein

Agentic AI Beyond the Buzzwords: By Steve Wilcockson

Finextra

30-06-2025

  • Business
  • Finextra

Agentic AI Beyond the Buzzwords: By Steve Wilcockson

'I'll be honest' is a phrase that usually signals the opposite. So let me state upfront: I work in AI for a (mostly) FinTech vendor, and yes, we're all talking about agentic AI. Vendors are selling it, top-tier consultants are excitedly selling that they can integrate it, and customers seem to be either cautiously curious or waiting for the hype to settle. My company has customers taking tentative steps with agents, while at a streaming analytics for financial services event one CDAO was vehement that his bank didn't have the skills to adopt them. Frankly, I'm still making up my mind whether I'm part of the early stage of a "game-changing" zeitgeist or merely an echo chamber for shiny suits.

Earlier this year, I mentioned agentic AI in a 2026 predictions blog. Since then, the noise has only grown louder. The analysts tend to love it. The leading analyst firm predicts that by 2028, 33% of enterprise software will include agentic AI, with half of day-to-day decisions made autonomously. Capgemini pegs the market at $47 billion by 2030. Big numbers. Big change. But also, a lot of fluff. One experienced analyst at a private dinner advised: don't confuse your already confused audiences with agentic jargon; they're not ready for it. That's even with his firm selling agentic AI research. Even the leading analyst firm cited earlier has rolled back that optimism, predicting that 40% of agentic AI projects will be scrapped by 2027. Folks like Stephen Klein vociferously argue contrarian positions against the Eduardo Ordax-like "LinkedInfluencers." Let's cut through the noise and at least baseline the nomenclature, if not the real-world implementations or the consultant and vendor echo chambers. My opinion on which it is? No one really cares, but for those that do, I believe agentic AI is part of an emerging zeitgeist.

What Is Agentic AI—Really?

Agentic AI isn't just a rebranded chatbot or a glorified workflow. It's about systems that are proactive, not reactive. These agents don't just respond—they plan, decide, and adapt. They can break down complex tasks, choose tools, recover from errors, and adjust strategies based on feedback. They're not just executing instructions—they're reasoning through them. That's a far cry from early LLMs like ChatGPT, which simply answered questions. Today's agents can retrieve information, remember context, and orchestrate multi-step processes. But they're still not infallible. Even the most advanced frameworks make mistakes. That's why human oversight remains essential—at least for now.

Workflows vs. Agents: Know the Difference

Workflows are deterministic. They follow predefined paths and deliver consistent outcomes. They're great for structured, repeatable tasks—think document processing or data extraction. They can even include LLMs, but they don't adapt or learn. Agents and agentic systems, on the other hand, are dynamic. They decide what to do, when to do it, and how. They're ideal for messy, unpredictable problems, like assessing complex customer interactions or navigating multi-system processes. They're expensive, yes, but powerful where flexibility and autonomy are needed.

Systems of Agents: The Real Deal

A single agent is useful. A system of agents is transformative, at least on paper. Imagine an orchestrator agent coordinating a team: one chats with users, another generates visualizations, another monitors alerts. They share memory, follow rules, and adapt in real time. It's not human-level intelligence, but it's a step toward intelligent systems design.
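To make the orchestrator idea concrete, here is a minimal, illustrative Python sketch. It is not any vendor's implementation: the specialist functions are hypothetical stand-ins for model- or tool-backed agents, and the keyword-based routing is a toy placeholder for the planning step an LLM would normally perform.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Agent:
    name: str
    description: str
    run: Callable[[str], str]  # takes a task description, returns a result

def chat_agent(task: str) -> str:
    # Stand-in for an LLM-backed conversational agent
    return f"[chat] drafted a user-facing reply for: {task}"

def viz_agent(task: str) -> str:
    # Stand-in for an agent that produces chart specifications
    return f"[viz] generated a chart spec for: {task}"

def alert_agent(task: str) -> str:
    # Stand-in for an agent that watches monitoring rules
    return f"[alerts] checked monitoring rules for: {task}"

@dataclass
class Orchestrator:
    agents: Dict[str, Agent]
    memory: List[str] = field(default_factory=list)  # shared memory across steps

    def route(self, task: str) -> str:
        # Toy routing rule; a real orchestrator would let a model plan this
        lowered = task.lower()
        if "chart" in lowered or "plot" in lowered:
            return "viz"
        if "alert" in lowered or "monitor" in lowered:
            return "alerts"
        return "chat"

    def handle(self, task: str) -> str:
        agent = self.agents[self.route(task)]
        result = agent.run(task)
        self.memory.append(result)  # specialists share what they have done
        return result

if __name__ == "__main__":
    team = {
        "chat": Agent("chat", "talks to users", chat_agent),
        "viz": Agent("viz", "draws visualizations", viz_agent),
        "alerts": Agent("alerts", "watches alerts", alert_agent),
    }
    orchestrator = Orchestrator(team)
    for task in ["plot intraday FX volumes", "monitor settlement alerts", "summarise today's client queries"]:
        print(orchestrator.handle(task))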
Think of such a system as managing 100 interns on a summer project. Each agent has a role, but the orchestrator keeps the big picture in view. It's not just tech—it's organizational design and management. I was at an AI For The Rest Of Us MeetUp recently in Shoreditch, where a speaker who is implementing this stuff (in security and defense more so than financial services) went beyond the interns-on-a-summer-project analogy. He described his "systems of agents" (not "agentic AI systems"; he never used that phrase) as managing and orchestrating teams of specialists.

Enter MCP Servers

To make all this work, agents need to talk to tools and to the rest of us. That's where MCP (the Model Context Protocol) comes in. Developed by Anthropic, the organization that gave us Claude, MCP gives agents a standard way to connect with tools, services, and data. It's not perfect—security and complexity concerns have been raised—but as a standard it's a leap forward from hand-coded integrations, closer to a common denominator than a center of excellence. Other, better standards that raise the bar may follow. But at the very least, it's a standard. Should they catch on, you may see MCP servers everywhere: vendors hosting them, customers orchestrating them, and agents using them to collaborate not just within but across organizations. (A minimal sketch of what a tool call over MCP looks like appears at the end of this piece.)

When Should You Use Agentic AI?

Use agents when:
  • The task is open-ended or unpredictable.
  • You need flexibility and adaptability.
  • Multiple tools must be orchestrated dynamically.
  • Human oversight is still required, but you want to scale.

Avoid agents when:
  • The task is simple, structured, and repeatable.
  • Speed and cost efficiency are priorities.
  • A traditional workflow or rule-based system will do.

Financial Services Implications & Final Thoughts

Agentic AI isn't magic. It's not AGI. But it's a meaningful evolution in how we build intelligent systems. The key is knowing when to use it—and when not to. Workflows matter! So too do people, decision-makers, and complexity. As with any emerging tech, nuance matters more than noise.

If it works, agents are a prime fit for key tasks. In quant and capital markets, imagine time-series agents, backtesting agents, pricing agents, equity research agents, VaR calculation agents, and so on. For the middle office and compliance, agents will read reports (or STRs), alert, monitor, score, build graphs, assess counterparties, write reports, and more. When bundled together and orchestrated as systems of agents, or agentic AI systems if you prefer that phrase, they'll augment, service, and do the heavy lifting for fully fledged processes, like orchestrating equities or FX trading systems, derivatives risk management frameworks, compliance and validation improvements, and software development.

But video didn't quite kill the radio star; it brought a ton more entertainment opportunities. Just look at this weekend's sparkling Glastonbury line-up and excitement! Nor did electronification destroy the financial services industry. Far from it: we can trade anything, anywhere, anytime (mostly), not just the FTSE or NYSE. The internet hasn't quite killed off daily newspapers, bringing us conspiracies and fake news as well as a gazillion and one perspectives. With agentic AI, we'll see, shiny suits or not!

With thanks to (indebted to!) my Q colleagues Alex Arotsker and Bill Gilpin, who inspired the technical summary, and the AI For The Rest Of Us crew for the fresh, real, engaging perspectives.
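As promised above, here is a minimal, hypothetical Python sketch of what an agent's tool call to an MCP server looks like on the wire. MCP transports JSON-RPC 2.0 messages, and tools exposed by a server are invoked via the protocol's tools/call method; the tool name calculate_var and its arguments are invented purely for illustration and are not part of any real server.

import json

# Hypothetical MCP tool-call request. "tools/call" is the JSON-RPC method an
# agent (the client) uses to invoke a tool exposed by an MCP server. The tool
# "calculate_var" and its arguments are invented for this illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "calculate_var",          # hypothetical tool name
        "arguments": {                    # hypothetical arguments
            "portfolio_id": "EQ-BOOK-001",
            "confidence": 0.99,
            "horizon_days": 1,
        },
    },
}

print(json.dumps(request, indent=2))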

Executives Are Pouring Money Into AI. So Why Are They Saying It's Not Paying Off?

Yahoo

25-05-2025

  • Business
  • Yahoo

Executives Are Pouring Money Into AI. So Why Are They Saying It's Not Paying Off?

A recent survey by tech giant IBM came to a conclusion that could send shockwaves through Wall Street and the tech sector writ large. The survey asked whether company leaders were seeing the expected return on investment from their AI initiatives. A shockingly small proportion of the surveyed CEOs reported that the tech was delivering on its promises: only a quarter of the 2,000 respondents answered in the affirmative, and only 16 percent had scaled AI across the entire enterprise. Just half of the CEO respondents indicated they were realizing value from generative AI investments, suggesting the tech may be falling far short of some sky-high expectations and billions of dollars spent.

Tech leaders have long rung the alarm bells about the dangers of fueling an AI bubble, investing in an unproven technology that still suffers from widespread hallucinations and a propensity to leak potentially sensitive data. As AI models become more powerful, they're also becoming more prone to hallucinating, not less, suggesting that on this front the industry is heading in the wrong direction.

However, company executives are seemingly unperturbed. A whopping 85 percent of the CEOs IBM surveyed expected their investments in AI efficiency and cost savings to return a positive ROI by 2027. The general fear of being left behind by missing the boat on AI is still rampant.

"At this point, leaders who aren't leveraging AI and their own data to move forward are making a conscious business decision not to compete," IBM vice chairman Gary Cohn wrote in the report. "As AI adoption accelerates, creating greater efficiency and productivity gains, the ultimate pay-off will only come to CEOs with the courage to embrace risk as opportunity."

But how to leverage AI meaningfully — and convey that vision to workers — is proving extremely difficult. According to a 2024 Gallup poll, only 15 percent of US employees felt that "their organization has communicated a clear AI strategy." Only 11 percent said they feel "very prepared" to work with AI, a drop of six percentage points from Gallup's 2023 survey.

Despite pouring tens of billions of dollars into AI investments and supporting infrastructure expansions, companies are still many years out from turning a profit. When, or if, they'll ever get to the point where AI pays for itself remains to be seen. "Are we using GenAI to solve real problems, or just optimizing slide decks?" CEO Stephen Klein told Forbes. In a study commissioned by Microsoft last year, researchers claimed that for every $1 invested in generative AI, companies would realize an average of $3.70 in return, claims that were never externally validated. How long investors will be willing to prop up an enormous money-burning operation is anybody's guess.

More on generative AI: The US Copyright Chief Was Fired After Raising Red Flags About AI Abuse

The $100 Billion Illusion - Why Data, Not Hype, Will Drive AI ROI

Forbes

09-05-2025

  • Business
  • Forbes

The $100 Billion Illusion - Why Data, Not Hype, Will Drive AI ROI

It's tempting to believe we've entered the golden age of artificial intelligence. Headlines tout a $100 billion generative AI market by 2026. CEOs mention 'AI' on nearly every earnings call. Consultants pitch productivity revolutions via PowerPoint. But beneath the surface, a less convenient reality is setting in: for most companies, GenAI is still an expensive experiment—not a source of revenue.

Stephen Klein, CEO, asks, 'Are we using GenAI to solve real problems, or just optimizing slide decks?' Klein is bullish on GenAI in the long run but states that 'the near-term business model isn't intelligence. It's fear and influence and a false sense of trust.' As Klein's remarks suggest, the promise of AI is being eclipsed by performative adoption. His take cuts through the hype with equal parts technical fluency and commercial realism.

Klein isn't alone. As outlined in a recent article from this column, 'AI Beyond Platforms - How Data Will Unlock New Value In 2025', one of the most overlooked truths of the current AI cycle is simple: the platform isn't the value. The data is.

The $100 Billion Mirage

Microsoft's claim of a $3.70 return for every $1 spent on GenAI, cited in a white paper it commissioned, lacked any external validation. No Fortune 500 case studies were included. Meanwhile, AI darlings like OpenAI and Anthropic are running massive deficits. CNBC reports that OpenAI lost $5 billion in 2024, with ChatGPT alone costing an estimated $700,000 per day to operate. Anthropic, according to The Information, burned through $2.7 billion in 2024. These companies aren't profitable. They're surviving on subsidies—from investors, partners, and strategic alliances.

Consultants and Cohorts: Selling the AI Dream

Klein paints a vivid picture of today's AI marketplace: GPT-powered webinars, $2,000 cohort programs, and 100-slide decks promising to "future-proof" organizations. Klein says, 'This isn't innovation. It's monetized anxiety.' Companies aren't buying transformation—they're buying the appearance of AI readiness.

Take Accenture, which reportedly earned $900 million in GenAI revenue last year, with $3 billion in bookings. But most of that came from consulting—not from clients deploying AI at scale. According to Accenture CEO Julie Sweet, most of Accenture's clients are in 'experimental mode' with generative AI. Their focus, she said, is on cloud, data, and application modernization.

Where the Money Actually Is: Human-Centered Data

So, where's the real return? It starts with data—specifically, high-quality, forward-looking, zero-party data. Unlike scraped or synthetic alternatives, zero-party data is provided voluntarily by real people. It includes behavioral intent, psychographics, motivations, and—critically—future spending plans. These inputs are foundational for models that aim to predict, not just summarize. Here are two examples:

[Chart: 3-quarter-ahead forecasts for CVS revenues]
[Chart: XTech March CPI forecast]

The takeaway? Many failed GenAI initiatives didn't collapse because of weak models. They failed because they were built on bad data. As Klein argues, AI should augment human intelligence—not just mimic or automate language.

The Overlooked Reality: A Data Gap, Not a Tech Gap

Klein's critique and the findings in AI Beyond Platforms converge on a core truth: what's holding back AI isn't processing power or better algorithms. It's irrelevant, outdated, or low-signal data. The excitement around GenAI is real. But so is the gap between experimentation and enterprise-scale impact. Demos dazzle. ROI disappoints. Unless organizations fuel their models with reliable, representative, and behaviorally rich data, they'll fall into the same trap.

From Illusion to Impact: The Path Forward

There is real money to be made with AI—but it won't come from bigger models or louder marketing. It will come from solving real problems with clean, context-rich, human-anchored data. As the GenAI boom rolls on, the biggest winners won't be those chasing the next model release. They'll be the ones starting with the right data.
