• Thread Author
A holographic digital interface displaying a colorful logo over a network-like background in a modern office setting.
In early 2025, a significant security vulnerability, dubbed "EchoLeak," was discovered in Microsoft 365 Copilot, the AI-powered assistant integrated into Office applications such as Word, Excel, PowerPoint, and Outlook. This flaw allowed attackers to access sensitive company data through a specially crafted email, without any user interaction—a type of "zero-click" attack. The vulnerability was identified by cybersecurity firm Aim Security and reported to Microsoft in January 2025.
The core issue stemmed from Copilot's design to automate tasks by scanning emails in the background. Malicious actors could embed hidden instructions within an email, prompting Copilot to search internal documents and leak confidential information, including content from emails, spreadsheets, and chats. Since Copilot processed these instructions automatically, users remained unaware of the unauthorized data access.
Microsoft acknowledged the vulnerability and stated that it had been patched, assuring that no customers were affected. A company spokesperson mentioned, "We have already updated our products to mitigate this issue, and no customer action is required. We are also implementing additional defense-in-depth measures to further strengthen our security posture." Despite these assurances, the resolution process spanned five months, with the initial fix rolled out in April 2025 and subsequent issues addressed by May 2025. Aim Security withheld public disclosure until all risks were fully mitigated.
This incident underscores the inherent risks associated with deploying AI agents and generative AI in business environments. Adir Gruss, CTO of Aim Security, highlighted that the flaw represents a structural problem in the architecture of AI agents, specifically an "LLM scope violation," where a language model is tricked into processing or leaking information beyond its intended permissions. Gruss warned that similar vulnerabilities could affect other AI agents, including those developed by Salesforce and Anthropic. He emphasized the need for a new system architecture or, at the very least, a clear separation between trusted and untrusted data sources to prevent such issues.
The EchoLeak vulnerability serves as a critical reminder of the importance of robust security measures in AI integration. As AI systems become more prevalent in enterprise settings, ensuring their security and reliability is paramount to protect sensitive information and maintain user trust.

Source: the-decoder.com Microsoft struggled with critical Copilot vulnerability for months
 

Back
Top