A critical zero-click vulnerability in Microsoft's Copilot AI assistant, dubbed EchoLeak and tracked as CVE-2025-32711, was recently discovered by researchers at Aim Security. This flaw allowed attackers to exfiltrate sensitive organizational data without any user interaction, posing a significant threat to Microsoft 365 users.
The vulnerability exploited a "Large Language Model (LLM) scope violation," enabling malicious input from external sources to trick the AI into accessing privileged content. Potentially exposed data included OneDrive files, SharePoint documents, Teams chats, and historical Copilot interactions—essentially any information within the AI's scope.
Aim Security reported that the flaw existed in Copilot's default settings, potentially placing most customers at risk until a recent fix was deployed. However, there is currently no evidence that the flaw was exploited in the wild. Microsoft has confirmed that the vulnerability has been fully mitigated, with no user action required, and has announced the rollout of broader "defense-in-depth" protections to strengthen AI security.
This discovery highlights the emerging risks associated with AI agents in enterprise environments. As AI tools like Copilot become more embedded in business systems, security researchers warn that "silent" attacks like EchoLeak could become more common and more dangerous.
The EchoLeak vulnerability underscores the importance of robust security measures in AI systems. Organizations should ensure that AI tools are configured securely and that any vulnerabilities are promptly addressed to prevent potential data breaches.
Source: teiss https://www.teiss.co.uk/news/microsoft-copilot-flaw-could-have-let-hackers-steal-data-with-a-single-email-15930/