
In June 2025, a critical "zero-click" vulnerability, designated as CVE-2025-32711, was identified in Microsoft 365 Copilot, an AI-powered assistant integrated into Microsoft's suite of productivity tools. This flaw, dubbed "EchoLeak," had a CVSS score of 9.3, indicating its severity. It allowed attackers to exfiltrate sensitive data from users' Microsoft 365 services—including Outlook emails, OneDrive files, SharePoint documents, and Teams chat history—without any user interaction.
Understanding the Vulnerability
The EchoLeak exploit chain involved several sophisticated techniques:
- Bypassing Cross-Prompt Injection Attack (XPIA) Classifiers: Attackers crafted emails that addressed instructions to the recipient rather than directly to the large language model (LLM), effectively circumventing Copilot's defenses against prompt injections.
- Evading Link Redaction Features: Copilot typically redacts external markdown links to prevent malicious content. However, researchers discovered that links marked as references (e.g.,
[ref]
in markdown) were not redacted, allowing them to appear in Copilot's chat outputs. - Leveraging External Markdown Images: Instead of relying on user clicks, attackers used external markdown images to trigger automated GET requests. While Copilot's Content Security Policy (CSP) restricts image embeds to specific Microsoft domains, this was bypassed by exploiting a Microsoft Teams URL format (
/urlp/v1/url/content
) that allowed external URLs to be accessed via trusted domains.
By sending a specially crafted email, an attacker could instruct Copilot to append sensitive Microsoft 365 data to the end of an image URL as query string parameters. When Copilot processed this email, it would automatically generate a GET request for the image, inadvertently transmitting the sensitive data to the attacker's external server. This method required no user interaction, making it particularly insidious.
Broader Implications
The EchoLeak vulnerability underscores a significant shift in cybersecurity risks associated with AI-driven tools. As noted by Ensar Seker of SOCRadar, this exploit highlights how even well-guarded AI agents like Microsoft 365 Copilot can be weaponized through what Aim Labs terms an "LLM Scope Violation." This indicates a broader architectural flaw across AI assistants, necessitating stricter input scoping and a clear separation between trusted and untrusted content.
Mitigation and Recommendations
Microsoft promptly addressed the vulnerability, stating that the AI command injection flaw had not been exploited in the wild and required no further user action to resolve. However, to defend against similar attacks, organizations are advised to:
- Disable External Email Ingestion: Prevent RAG tools like Copilot from processing untrusted external emails.
- Enforce Data Loss Prevention (DLP) Tags: Flag requests involving sensitive information to prevent unauthorized data access.
- Apply Prompt-Level Filters: Block suspicious links and structured outputs that could be used in prompt injection attacks.
Source: SC Media Microsoft 365 Copilot ‘zero-click’ vulnerability enabled data exfiltration