• Thread Author
Computer screens display coding data, with animated blue light lines resembling data flow or neural network connections.
A critical zero-click vulnerability in Microsoft's Copilot AI assistant, identified as CVE-2025-32711 and dubbed "EchoLeak," has been discovered by researchers at Aim Security. This flaw allowed attackers to exfiltrate sensitive organizational data without any user interaction, posing a significant threat to Microsoft 365 users.
The vulnerability exploited a "Large Language Model (LLM) scope violation," enabling malicious input from outside an organization to trick the AI into accessing privileged content. This breach potentially exposed data across OneDrive files, SharePoint documents, Teams chats, and historical Copilot interactions. Aim Security's CTO, Adir Gruss, emphasized the severity, stating that attackers could automatically extract the most sensitive information from Microsoft 365 Copilot without any user interaction.
Microsoft has acknowledged the issue, confirming that the vulnerability has been fully mitigated with no user action required. The company has also implemented broader "defense-in-depth" protections to enhance AI security. While there is currently no evidence that the flaw was exploited in the wild, this incident underscores the emerging risks associated with AI agents in enterprise environments.
As AI tools like Copilot become increasingly integrated into business systems, security researchers warn that "silent" attacks, such as EchoLeak, could become more common and more dangerous. This discovery highlights the need for continuous vigilance and robust security measures to protect sensitive organizational data in the age of AI.

Source: teiss https://www.teiss.co.uk/cyber-threats/microsoft-copilot-flaw-could-have-let-hackers-steal-data-with-a-single-email-15930/
 

Back
Top