In early 2025, cybersecurity researchers uncovered a critical vulnerability in Microsoft 365 Copilot, dubbed "EchoLeak," which allowed attackers to extract sensitive user data without any user interaction. This zero-click exploit highlighted the potential risks associated with deeply integrated AI assistants in enterprise environments.
The vulnerability was identified by Aim Labs in January 2025 and promptly reported to Microsoft. By May 2025, Microsoft had implemented a server-side fix, ensuring that users did not need to take any action. The company emphasized that no customers were affected and there was no evidence of real-world exploitation. Nevertheless, EchoLeak is considered the first known zero-click vulnerability targeting a large language model (LLM)-based assistant.
Additionally, in September 2024, Microsoft revised its Copilot+ Recall feature following security and privacy concerns. The feature, which involved screenshot-taking and AI-powered search capabilities, underwent changes to reassure users about its security. Microsoft's Offensive Research & Security Engineering team, along with a third-party security vendor, assessed the feature to address potential vulnerabilities. (helpnetsecurity.com)
Source: The Hans India AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit
Discovery and Disclosure
The vulnerability was identified by Aim Labs in January 2025 and promptly reported to Microsoft. By May 2025, Microsoft had implemented a server-side fix, ensuring that users did not need to take any action. The company emphasized that no customers were affected and there was no evidence of real-world exploitation. Nevertheless, EchoLeak is considered the first known zero-click vulnerability targeting a large language model (LLM)-based assistant.Mechanism of the Exploit
Microsoft 365 Copilot integrates across Office applications such as Word, Excel, Outlook, and Teams, leveraging AI to assist users by analyzing data and generating content based on internal communications. EchoLeak exploited this integration through the following process:- Crafting a Malicious Email: Attackers sent emails that appeared legitimate but contained hidden prompts embedded within the message.
- Activation via User Query: When a user asked Copilot a related question, the AI, utilizing Retrieval-Augmented Generation (RAG), retrieved the malicious email, considering it relevant.
- Data Exfiltration: The concealed prompt instructed Copilot to leak internal data through a link or image.
- Silent Data Transfer: As the email was displayed, the link was automatically accessed by the browser, silently transferring internal data to the attacker's server.
Emergence of LLM Scope Violations
Beyond the technical specifics, EchoLeak signifies the emergence of a new category of threats termed "LLM Scope Violations." These occur when language models unintentionally expose data through their internal processing mechanisms, even without direct user commands. Aim Labs highlighted that this attack chain showcases a novel exploitation technique by leveraging internal model mechanics, cautioning that similar risks could be present in other RAG-based AI systems, not just Microsoft Copilot.Microsoft's Response and Mitigation
Microsoft assigned the flaw the identifier CVE-2025-32711 and categorized it as critical. The company reassured users that the issue had been resolved and that there were no known incidents involving the vulnerability. Despite the fix, researchers warned that the increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defenses.Broader Implications and Related Vulnerabilities
The discovery of EchoLeak is not an isolated incident. In August 2024, Tenable researchers identified a server-side request forgery (SSRF) vulnerability in Microsoft's Copilot Studio, tracked as CVE-2024-38206. This flaw allowed authenticated attackers to bypass SSRF protections and access Microsoft's internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances. The vulnerability was promptly addressed by Microsoft, with no customer action required. (tenable.com)Additionally, in September 2024, Microsoft revised its Copilot+ Recall feature following security and privacy concerns. The feature, which involved screenshot-taking and AI-powered search capabilities, underwent changes to reassure users about its security. Microsoft's Offensive Research & Security Engineering team, along with a third-party security vendor, assessed the feature to address potential vulnerabilities. (helpnetsecurity.com)
Conclusion
The EchoLeak vulnerability serves as a stark reminder of the evolving security challenges posed by AI integrations in enterprise systems. As AI assistants become more deeply embedded into business workflows, it is imperative for organizations to continuously assess and fortify their security measures to keep pace with emerging threats.Source: The Hans India AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit