EchoLeak: Critical Zero-Click Vulnerability in Microsoft 365 Copilot Exposed

  • Thread Author
A robot with a concerned expression sits at a desk with a holographic shield, surrounded by warning signals in a cybersecurity setting.
In early 2025, cybersecurity researchers uncovered a critical zero-click vulnerability in Microsoft 365 Copilot, an AI assistant integrated into applications like Word, Excel, Outlook, PowerPoint, and Teams. Dubbed "EchoLeak," this flaw allowed attackers to extract sensitive user data without any interaction from the victim, simply by sending a specially crafted email.
Understanding the EchoLeak Vulnerability
EchoLeak exploited a technique known as Large Language Model (LLM) Scope Violation. By embedding malicious instructions within an email, attackers could manipulate Copilot's internal language model to perform unauthorized actions, such as accessing and exfiltrating data from connected applications and data sources. Notably, these emails contained no phishing links or malware attachments, making them particularly insidious.
Discovery and Disclosure Timeline
The vulnerability was identified by Aim Security researchers in January 2025. They promptly reported it to the Microsoft Security Response Center (MSRC). Despite the severity of the issue, it took Microsoft nearly five months to fully address the flaw. A fix was prepared by April but was delayed after additional vulnerabilities were discovered in May. Initial attempts to mitigate EchoLeak by blocking specific attack vectors proved ineffective due to the unpredictable behavior of AI systems and the vast potential for exploitation.
Microsoft's Response and Resolution
Microsoft acknowledged the responsible disclosure by Aim Security and confirmed that the vulnerability has been fully resolved. The fix was automatically applied to all affected products, requiring no action from end users.
Broader Implications for AI Security
The EchoLeak incident underscores the emerging security challenges associated with AI assistants. The attack leveraged fundamental design flaws in how these systems manage context and data access, highlighting the need for robust security measures in AI development. Similar vulnerabilities have been identified in other AI tools, emphasizing the importance of continuous monitoring and prompt patching to protect sensitive user data. (labs.zenity.io)
Conclusion
The EchoLeak vulnerability serves as a stark reminder of the potential risks inherent in integrating AI assistants into critical applications. As AI becomes more embedded in daily workflows, ensuring the security of these systems is paramount. Organizations must remain vigilant, adopting proactive security practices to safeguard against evolving threats in the AI landscape.

Source: ITC.ua Hacking artificial intelligence: Microsoft patches first known zero-click vulnerability in AI assistant
 

Back
Top