
First ever security flaw detected in an AI agent, could allow hacker to attack user via email
Here's how the attack worked:-An attacker sends a business-like email to the target. The email contains text that looks normal but hides a special prompt, designed to confuse the AI assistant.-When the user later asks a related question to Copilot, the system retrieves the earlier email using its Retrieval-Augmented Generation (RAG) engine, thinking it's relevant to the query.advertisement-At this point, the hidden prompt is activated. It silently instructs the AI to extract internal data and place it in a link or image.-When the email is displayed, the embedded link is automatically accessed by the browser – sending internal data to the attacker's server without the user realising anything has gone wrong.Some of the markdown image formats used in the attack are designed to make browsers send automatic requests, which made this data exfiltration possible.Though Microsoft uses Content Security Policies (CSP) to block requests to unknown websites, services like Microsoft Teams and SharePoint are trusted by default. This allowed attackers to bypass certain defences.A new kind of AI vulnerabilityEchoLeak is more than just a software bug – it introduces a new class of threats known as LLM Scope Violations. This term refers to flaws in how large language models handle and leak information without being directly instructed by a user. In its report, Aim Labs warned that these kinds of vulnerabilities are especially dangerous in enterprise environments, where AI agents are deeply integrated into internal systems.'This attack chain showcases a new exploitation technique... by leveraging internal model mechanics,' Aim Labs said. The team believes the same risk could exist in other RAG-based AI systems, not just Microsoft's. Because EchoLeak required no user interaction and could work in fully automated ways, Aim Labs says it highlights the kind of threats that might become more common as AI becomes more embedded in business operations.advertisementMicrosoft labelled the vulnerability as critical, assigned it CVE-2025-32711, and released a server-side fix in May. The company reassured users that no exploit had taken place and that the issue is now resolved.Even though no damage was done, researchers say the warning is clear. 'The increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defences,' the report from Aim Labs reads.

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


NDTV
29 minutes ago
- NDTV
Godfather Of AI Warns Technology Could Invent Its Own Language: 'It Gets Scary...'
Geoffrey Hinton, regarded by many as the 'godfather of artificial intelligence' (AI), has warned that the technology could get out of hand if chatbots manage to develop their language. Currently, AI does its thinking in English, allowing developers to track what the technology is thinking, but there could come a point where humans might not understand what AI is planning to do, as per Mr Hinton. "Now it gets more scary if they develop their own internal languages for talking to each other," he said on an episode of the "One Decision" podcast that aired last month. "I wouldn't be surprised if they developed their own language for thinking, and we have no idea what they're thinking." Mr Hinton added that AI has already demonstrated that it can think terrible thoughts, and it is not unthinkable that the machines could eventually think in ways that humans cannot track or interpret. Warning about AI Mr Hinton laid the foundations for machine learning that is powering today's AI-based products and applications. However, the Nobel laureate grew wary of AI's future development and cut ties with his employer, Google, in order to speak more freely on the issue. "It will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us," said Mr Hinton at the time. "I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control." Mr Hinton has been a big advocate of government regulation for the technology, especially given the unprecedented pace of development. His warning also comes in the backdrop of repeated instances of AI chatbots hallucinating thoughts. In April, OpenAI's internal tests revealed that its o3 and o4-mini AI models were hallucinating or making things up much more frequently than even the non-reasoning models, such as GPT-4o. The company said it did not have any idea why this was happening. In a technical report, OpenAI said, "more research is needed" to understand why hallucinations are getting worse as it scales up its reasoning models.


Time of India
43 minutes ago
- Time of India
Anthropic CEO throws shade at Mark Zuckerberg's billion-dollar AI talent hunt with dartboard dig: ‘You can't buy purpose with a paycheck'
Culture Over Cash You Might Also Like: Billionaire Vinod Khosla predicts AI teachers will disrupt education and careers. Here's how — BigTechPod (@BigTechPod) The AI Hiring Wars: A Battle for Brains Buying Purpose? Not Quite, Says Amodei In the escalating turf war for top AI talent, Anthropic CEO Dario Amodei has delivered a pointed, and slightly humorous, critique of Meta 's aggressive recruitment tactics. Speaking on the Big Technology Podcast, Amodei painted a vivid picture: "If Mark Zuckerberg throws a dart at a dartboard and it hits your name, that doesn't mean you should be paid ten times more than the guy next to you who's just as skilled."His remarks come amid widespread reports of Meta launching an all-out offensive to poach AI engineers from rivals like OpenAI, Apple, Google, and Anthropic itself. Yet Amodei claims his startup has remained largely untouched. 'Some [employees] wouldn't even talk to Meta,' he said, asserting that their culture and mission are more attractive than any compensation package Meta can has reportedly been dangling massive offers, with some packages surpassing $200 million for a single hire, according to Business Insider and WIRED. Amodei, however, says Anthropic refuses to match such sums, insisting on fair and consistent pay across the board."I recently posted in our company Slack that we will not compromise our compensation principles or fairness if someone gets a big offer," he shared. In his view, rewarding one employee disproportionately just because they were on Meta's radar would be unjust to their equally capable this stance, Meta has managed to lure away at least one former Anthropic engineer—Joel Pobar—but Amodei suggests their broader impact has been latest AI moonshot, the Superintelligence Lab , has ignited a fierce scramble for elite minds. OpenAI's Chief Research Officer Mark Chen likened it to a break-in after losing several staffers overnight. Meanwhile, OpenAI CEO Sam Altman accused Meta of deploying 'giant offers' to lure talent, with some signing bonuses rumored to top $100 is unapologetic about the ambition. In an internal memo seen by CNBC, he claimed, 'Developing superintelligence is coming into sight,' declaring his goal to bring personal AI to every individual, not just enterprise Meta may have the resources, Amodei questions whether mission-driven AI work can be bought. 'Zuckerberg is trying to buy something that can't be bought,' he said during the podcast, underscoring Anthropic's long-term focus on safe and ethical AI sentiment resonates with other industry leaders too. OpenAI continues to frame itself as a purpose-first organization, while Meta's flashier, big-money moves risk creating tension even within its own teams. As CNBC reported, some insiders at Meta worry that a talent-heavy, cash-fueled approach could lead to ego clashes and fractured the current AI landscape, where demand far outpaces supply, the value of a skilled AI researcher is rivaling that of a professional athlete. Yet, for companies like Anthropic and OpenAI, the real challenge isn't just retaining talent—it's maintaining a sense of purpose amid the frenzy.


Time of India
3 hours ago
- Time of India
Even OpenAI's chairman struggles to keep up with AI: Bret Taylor calls the once-in-a-lifetime boom ‘insane'
— plzaccelerate (@plzaccelerate) Living through a technological renaissance You Might Also Like: Billionaire Vinod Khosla predicts AI teachers will disrupt education and careers. Here's how A human job in an AI world Bill Gates agrees: AI is a tool, not the artist You Might Also Like: Bill Gates predicts only three jobs will survive the AI takeover. Here is why If you've been struggling to keep pace with the whirlwind that is Artificial Intelligence , you're in good company. Bret Taylor , Chairman of OpenAI , the organization at the epicenter of the AI revolution , admits he too is barely able to stay afloat amid the relentless stream of a candid conversation hosted by South Park Commons with Aditya Agarwal, Taylor said, 'I am the chairman of OpenAI. I run a fairly successful applied AI company, and I have trouble keeping up with everything going on.' His words offer a rare moment of vulnerability in a world that often presents AI experts as unflappable makes his admission particularly striking is his vantage point. Taylor is not just on the frontline — he's in the command tower. From overseeing OpenAI's advancements to observing the competition's rapid rise, his plate is full. And yet, even he finds it dizzying. 'I'm probably most well situated in the world almost to do so… So it just feels insane to me right now,' he sees this turbulent moment as historic — and oddly poetic. 'I think it's a privilege... I hope you're enjoying being in this moment because... I think our society will be very different 10 years from now,' he said, reflecting on how rare it is to consciously live through such a transformative era. 'I pinch myself every day.'Indeed, the AI domain is experiencing something akin to a gold rush — except instead of panning rivers, companies are mining data and releasing new models almost weekly. OpenAI, once the undisputed leader, is now facing heated competition. Google's Gemini, Elon Musk's Grok, and emerging Chinese open-source platforms like DeepSeek and Kimi have challenged its dominance with increasingly capable on the product side, innovation is relentless. ChatGPT has become the fifth most visited website globally, but it's far from alone. New AI tools tackling niche tasks are sprouting up daily. OpenAI reportedly even attempted to acquire Windsurf, a rising AI startup — a sign of how closely it watches the this pace, Taylor offers a reassuring message: humans aren't being pushed out of the equation just yet. Speaking to Business Insider, he argued that formal computer science education remains more relevant than ever. 'Studying computer science is a different answer than learning to code, but I would say I still think it's extremely valuable,' he emphasized that such degrees instill systems thinking — a way of understanding how components interact in complex systems, which remains vital for innovation. He pointed out how topics like Big O notation, cache misses, and randomized algorithms teach the kind of structured logic that no AI model can fully Taylor's view is none other than Microsoft co-founder Bill Gates. In conversations on The Tonight Show, Gates predicted that programming will "remain a human job for at least a century.' His reason? Writing software isn't about typing code; it's about pattern recognition, judgment, and making creative like GitHub Copilot and ChatGPT may streamline debugging and accelerate development, but Gates insists, 'They are power chisels, not replacement carpenters.' AI may help you shape the material, but the blueprint still comes from the human mind.