logo
#

Latest news with #EchoLeak

The silent threat in your AI stack: Why EchoLeak is a wake-up call for CXOs
The silent threat in your AI stack: Why EchoLeak is a wake-up call for CXOs

Time of India

time14-06-2025

  • Business
  • Time of India

The silent threat in your AI stack: Why EchoLeak is a wake-up call for CXOs

Imagine your AI assistant, diligently sorting emails, scheduling meetings, and managing internal documents—all without a hitch. Now picture that same trusted assistant quietly leaking sensitive company data to attackers. No phishing, no malware, no alerts—just quiet, invisible data isn't theoretical—it recently happened with Microsoft 365 Copilot. Researchers at Aim Security identified a vulnerability nicknamed "EchoLeak," the first zero-click exploit targeting enterprise AI agents. For CXOs, it's a loud wake-up call that AI threats have entered an entirely new Exactly Happened?Attackers used what's called "prompt injection," essentially tricking the AI with innocent-looking emails. Copilot, thinking it was merely being helpful, unknowingly accessed sensitive internal files and emails, sharing this confidential information through hidden links—all without a single click from any Microsoft quickly patched the issue, the implications are far-reaching: AI security risks can't be handled by traditional defenses alone. This incident, though contained, reveals a troubling blind Should This Matter to CXOs?AI agents like Copilot aren't just peripheral tools anymore—they're integrated deeply into critical workflows: email, document management, customer service, even strategic decision-making. The EchoLeak flaw highlights how easily trusted AI systems can be exploited, entirely bypassing conventional security measures. As Aim Security CTO Adir Gruss told Fortune: "EchoLeak isn't an isolated event; it signals a new wave of AI-native vulnerabilities. We need to rethink how enterprise trust boundaries are defined." Four Steps Every CXO Must Take Now: Audit AI Visibility: Understand exactly what data your AI agents can access. If they see it, attackers potentially can AI Autonomy: Be cautious about which tasks you automate. Sensitive actions—sending emails, sharing files—should always involve human Your Vendors Rigorously: Explicitly ask providers how they're protecting against prompt injection attacks. Clear, confident answers are AI Security a Priority: Bring your cybersecurity and risk teams into AI conversations early—not after deployment. Redefining AI Trust for CXOs: The EchoLeak incident is a powerful reminder that CXOs can't afford complacency in AI security. As AI moves deeper into critical operations, the security lens must shift from reactive patching to proactive, strategic oversight. AI tools hold immense promise—but without rethinking security from the ground up, that promise could become your organization's next big liability. Social Media Copy: AI is moving fast, but new threats are emerging faster. CXOs, EchoLeak is your wake-up call to rethink AI security—before it's too late.

Security researchers found a zero-click vulnerability in Microsoft 365 Copilot.
Security researchers found a zero-click vulnerability in Microsoft 365 Copilot.

The Verge

time13-06-2025

  • The Verge

Security researchers found a zero-click vulnerability in Microsoft 365 Copilot.

The vulnerability, called 'EchoLeak,' lets attackers 'automatically exfiltrate sensitive and proprietary information' from Microsoft 365 Copilot without knowledge of the user, according to findings from Aim Labs. An attacker only needs to send their victim a malicious prompt injection disguised as a normal email, which covertly instructs Copilot to pull sensitive information from a user's account. Microsoft has since fixed the critical flaw and given it the identifier CVE-2025-32711. It also hasn't been exploited in the wild.

Researchers find 'dangerous' AI data leak flaw in Microsoft 365 Copilot: What the company has to say
Researchers find 'dangerous' AI data leak flaw in Microsoft 365 Copilot: What the company has to say

Time of India

time13-06-2025

  • Time of India

Researchers find 'dangerous' AI data leak flaw in Microsoft 365 Copilot: What the company has to say

A critical artificial intelligence (AI) vulnerability has been discovered in Microsoft 365 Copilot, raising new concerns about data security in AI-integrated enterprise environments. The flaw, dubbed 'EchoLeak', which enabled attackers to exfiltrate sensitive user data with zero-click interaction, has been devised by Aim Labs researchers in January 2025. According to a report by Bleeping Computer, Aim Labs promptly reported their findings to Microsoft, which rated it as critical. Microsoft swiftly addressed the issue, implementing a server-side fix in May 2025. This means that no user action is required to patch the vulnerability. Microsoft has also stated there is no evidence of any real-world exploitation, essentially confirming that no customers were impacted by this flaw. What is EchoLeak attack and how it worked by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like Trade Bitcoin & Ethereum – No Wallet Needed! IC Markets Start Now Undo The EchoLeak attack commenced with a malicious email sent to the target. This email contained text seemingly unrelated to Copilot, designed to resemble a typical business document. It embedded a hidden prompt injection crafted to instruct Copilot's underlying LLM to extract sensitive internal data. Because this hidden prompt was phrased like a normal message, it cleverly bypassed Microsoft's existing XPIA (cross-prompt injection attack) classifier protections. Microsoft 365 Copilot, an AI assistant integrated into Office applications like Word, Excel, Outlook, and Teams, leverages OpenAI's GPT models and Microsoft Graph to help users generate content, analyse data and answer questions based on their organisation's internal files, emails, and chats. When the user prompted Copilot with a related business question, Microsoft's Retrieval-Augmented Generation (RAG) engine retrieved the malicious email into the LLM's prompt context due to its apparent relevance and formatting. Once inside the LLM's active context, the malicious injection "tricked" the AI into pulling sensitive internal data and embedding it into a specially crafted link or image. This led to unintentional leaks of internal data without explicit user intent or interaction.

AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit
AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit

Hans India

time12-06-2025

  • Hans India

AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit

In a major first for the AI security landscape, researchers have identified a critical vulnerability in Microsoft 365 Copilot that could have allowed hackers to steal sensitive user data—without the user ever clicking a link or opening an attachment. Known as EchoLeak, this zero-click flaw revealed how deeply embedded AI assistants can be exploited through subtle prompts hidden in regular-looking emails. The vulnerability was discovered by Aim Labs in January 2025 and promptly reported to Microsoft. It was fixed server-side in May, meaning users didn't need to take any action themselves. Microsoft emphasized that no customers were affected, and there's no evidence that the flaw was exploited in real-world scenarios. Still, the discovery marks a historic moment, as EchoLeak is believed to be the first-ever zero-click vulnerability targeting a large language model (LLM)-based assistant. How EchoLeak Worked Microsoft 365 Copilot integrates across Office applications like Word, Excel, Outlook, and Teams. It utilizes AI, powered by OpenAI's models and Microsoft Graph, to help users by analyzing data and generating content based on internal emails, documents, and chats. EchoLeak took advantage of this feature. Here's a breakdown of the exploit process: A malicious email is crafted to look legitimate but contains a hidden prompt embedded in the message. When a user later asks Copilot a related question, the AI, using Retrieval-Augmented Generation (RAG), pulls in the malicious email thinking it's relevant. The concealed prompt is then activated, instructing Copilot to leak internal data through a link or image. As the email is displayed, the link is automatically accessed by the browser, silently transferring internal data to the attacker's server. Researchers noted that certain markdown image formats used in the email could trigger browsers to send automatic requests, enabling the leak. While Microsoft's Content Security Policies (CSP) block most unknown web requests, services like Teams and SharePoint are considered trusted by default—offering a way in for attackers. The Bigger Concern: LLM Scope Violations The vulnerability isn't just a technical bug—it signals the emergence of a new category of threats called LLM Scope Violations. These occur when language models unintentionally expose data through their internal processing mechanisms, even without direct user commands. 'This attack chain showcases a new exploitation technique... by leveraging internal model mechanics,' Aim Labs stated in their report. They also cautioned that similar risks could be present in other RAG-based AI systems, not just Microsoft Copilot. Microsoft assigned the flaw the ID CVE-2025-32711 and categorized it as critical. The company reassured users that the issue has been resolved and that there were no known incidents involving the vulnerability. Despite the fix, the warning from researchers is clear: "The increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defences,' their report concludes. As AI agents become more integrated into enterprise systems, EchoLeak is a stark reminder that security in the age of intelligent software needs to evolve just as fast as the technology itself.

Researchers discover zero-click vulnerability in Microsoft Copilot
Researchers discover zero-click vulnerability in Microsoft Copilot

The Hindu

time12-06-2025

  • The Hindu

Researchers discover zero-click vulnerability in Microsoft Copilot

Researchers have said that Microsoft Copilot had a critical zero-click AI vulnerability that was fixed before hackers stole sensitive data. Called 'EchoLeak,' the attack was mounted by Aim Labs researchers in January this year and then reported to Microsoft later. In a blog posted by the research team, they said that EchoLeak was the first zero-click attack on an AI agent and could hack remotely via an email. The vulnerability was given the identifier CVE-2025-32711 and rated critical and fixed eventually in May. The researchers have categorised EchoLeak under a new class of vulnerabilities called 'LLM Scope Violation,' which can lead a large language model to leak internal data without any interaction with the hacker. Although Microsoft acknowledged the security flow, it confirmed that there had been no instance of exploitation which had impacted users. Users receive an email that's been designed to look like a business document embedded with a hidden prompt injection that instructs the LLM to extract and exfiltrate sensitive data. When the user asks Copilot a query the email is retrieved into the LLM prompt by Retrieval-Augmented Generation or RAG.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store