A chilling new wave of cyber threats has emerged at the intersection of artificial intelligence and enterprise productivity suites, exposing deep-rooted vulnerabilities in widely adopted platforms such as Microsoft 365 Copilot. Among the most unsettling of these discoveries is a “zero-click” AI vulnerability, code-named EchoLeak, that enables attackers to siphon sensitive corporate data with no user interaction—a scenario long feared by cybersecurity experts as “nightmare fuel” for digital espionage and insider threat campaigns.
EchoLeak, officially catalogued as CVE-2025-32711 with a formidable CVSS score of 9.3, embodies the essence of next-generation security risks spawned by the rapid integration of large language models (LLMs) into cloud-powered productivity environments. According to Aim Security—the Israeli research firm that first exposed the vulnerability and coordinated disclosure with Microsoft—EchoLeak leverages an LLM Scope Violation: a subtle but devastating form of indirect prompt injection.
Unlike simple prompt injection attacks, which require a targeted user to manually enter or accept malicious prompts, indirect injections operate at system scale, exploiting trust relationships and invisible context blending. This evolution of the attack landscape is forcing security professionals to recalibrate their approaches, as malicious actors increasingly look to weaponize the composite contexts inherent to enterprise-grade AI.
However, the incident lays bare the ongoing tension between rapid feature deployment and robust security vetting in AI-powered environments. Even with a patch in place, the design patterns that made this exploit possible—chiefly, the expansive context windows and aggregation of disparate data sources—remain essential to Copilot’s core value proposition. This means that even well-patched systems remain exposed to classes of attacks that may not yet have been discovered or disclosed.
Security researcher Simcha Kosman cautioned that “as LLM agents become more capable and autonomous, their interaction with external tools through protocols like MCP will define how safely and reliably they operate,” highlighting that tool poisoning attacks “expose critical blind spots in current implementations.” Advanced versions (ATPA) blur the lines further—tools designed with intentionally deceptive but syntactically valid behavior can mislead agents, driving them to leak internal data or run unauthorized code.
GitHub’s Jaroslav Lobacevski explained that “if the resolved IP address of the web page host changes, the browser doesn’t take it into account and treats the webpage as if its origin didn’t change. This can be abused by attackers.” For enterprises running MCP servers on localhost for agent orchestration, the danger is clear: SSE’s long-lived connections can be hijacked by adversaries to pivot from external phishing domains deep into the heart of internal, trusted data flows.
Recognizing the gravity of this risk, the industry has started responding proactively. SSE was deprecated in late 2024 in favor of Streamable HTTP, with clear vendor guidance to enforce authentication on MCP servers and validate the
Security experts broadly agree that fixes at the application layer, while important, cannot alone address the “fundamental architectural issues” created by exposing agentic AI to untrusted domains. Risk mitigation will depend on a combination of:
Notable Strengths:
Patching the latest CVE or following today’s best practices will not be enough. Organizations must internalize a new security mindset, where every interaction between human, AI, and infrastructure is scrutinized for hidden channels of exploitation. The sheer scale, speed, and subtlety of zero-click AI vulnerabilities demand nothing less than a wholesale reevaluation of both technical and organizational safeguards.
With threats growing ever more sophisticated, the good news is that awareness is building—and with it, a clearer path toward resilience in the face of evolving, AI-driven cyber adversaries.
Source: The Hacker News Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction
Understanding EchoLeak: Anatomy of a Zero-Click AI Exploit
EchoLeak, officially catalogued as CVE-2025-32711 with a formidable CVSS score of 9.3, embodies the essence of next-generation security risks spawned by the rapid integration of large language models (LLMs) into cloud-powered productivity environments. According to Aim Security—the Israeli research firm that first exposed the vulnerability and coordinated disclosure with Microsoft—EchoLeak leverages an LLM Scope Violation: a subtle but devastating form of indirect prompt injection.How the Attack Works
- Phase 1: Injection
The attacker dispatches an innocuous-looking email to an employee’s Outlook inbox. The message contains crafted content designed to exploit how Microsoft 365 Copilot handles untrusted inputs. - Phase 2: Legitimate User Query
An employee interacts with Copilot in a seemingly safe way, for instance by asking it to summarize or analyze a business document. - Phase 3: Scope Violation via RAG Engine
Copilot, through its Retrieval-Augmented Generation (RAG) engine, inadvertently blends the maliciously crafted email text with confidential business data as part of a single LLM context window. - Phase 4: Data Exfiltration
Sensitive internal data are then exfiltrated through outbound Microsoft Teams messages or SharePoint URLs, without the user ever realizing their data has been compromised.
Scope Violations, Indirect Prompt Injection, and RAG Risks
To contextualize EchoLeak and its broader implications, it’s important to grasp the nature of LLM Scope Violations. These occur when attacker instructions, embedded in seemingly benign but untrusted content—like external email messages—manage to “trick” an AI system into accessing and manipulating privileged, internal data. In the Copilot context, the use of Retrieval-Augmented Generation (RAG) exacerbates this risk. RAG dynamically feeds content from disparate sources—including emails, calendar entries, reports—into the model’s context window. If any portion of that content contains a cleverly disguised prompt or command, it may manipulate the model’s behavior beyond what developers intended, potentially causing it to spill sensitive information automatically.Unlike simple prompt injection attacks, which require a targeted user to manually enter or accept malicious prompts, indirect injections operate at system scale, exploiting trust relationships and invisible context blending. This evolution of the attack landscape is forcing security professionals to recalibrate their approaches, as malicious actors increasingly look to weaponize the composite contexts inherent to enterprise-grade AI.
Microsoft’s Response: Patch Delivered, But Is the Risk Gone?
Microsoft responded with agility after Aim Security’s disclosure, issuing a patch as part of its June 2025 Patch Tuesday updates and categorizing the vulnerability as “critical.” The software giant stated, “AI command injection in M365 Copilot allows an unauthorized attacker to disclose information over a network.” According to available advisories and statements, there is no evidence that EchoLeak has been exploited in the wild.However, the incident lays bare the ongoing tension between rapid feature deployment and robust security vetting in AI-powered environments. Even with a patch in place, the design patterns that made this exploit possible—chiefly, the expansive context windows and aggregation of disparate data sources—remain essential to Copilot’s core value proposition. This means that even well-patched systems remain exposed to classes of attacks that may not yet have been discovered or disclosed.
Tool Poisoning and MCP: The Next Evolution in AI Attack Surfaces
EchoLeak’s disclosure coincides with intensive research on related attack vectors targeting the Model Context Protocol (MCP) and its supporting ecosystem. CyberArk, a respected security research firm, recently detailed a new category of “tool poisoning” attacks, wherein adversaries target not just the description field in tool schemas—but extend malicious payloads to every part of a tool’s definition.Full-Schema Poisoning (FSP) and Advanced Tool Poisoning Attacks (ATPA)
Historically, MCP tools have relied on optimistic trust models, presuming that syntactic correctness in schemas equates to semantic safety when, in reality, every field is a potential attack vector. In one scenario, a tool may appear benign until it handles a “fake error message” designed to trigger the LLM agent into accessing sensitive content, such as SSH keys, under the guise of diagnostic assistance.Security researcher Simcha Kosman cautioned that “as LLM agents become more capable and autonomous, their interaction with external tools through protocols like MCP will define how safely and reliably they operate,” highlighting that tool poisoning attacks “expose critical blind spots in current implementations.” Advanced versions (ATPA) blur the lines further—tools designed with intentionally deceptive but syntactically valid behavior can mislead agents, driving them to leak internal data or run unauthorized code.
MCP Vulnerabilities in the Wild: The GitHub Toxic Agent Flow
The real-world impact of these flaws was recently demonstrated with a critical vulnerability in GitHub’s popular MCP integration. Researchers from Invariant Labs found that an attacker could hijack an agent by crafting a malicious GitHub issue. When prompted to “take a look at the issues,” a Copilot or other agent could unwittingly execute a payload embedded within the public repository’s issue tracker—a technique researchers have dubbed “toxic agent flow.” This cross-system exploitation suggests that vulnerabilities in agent-driven environments cannot be mitigated solely by patching individual services. Instead, they demand foundational changes in permissioning and rigorous monitoring between all agents, tools, and data sources.Enterprise Risk: DNS Rebinding and SSE Abuse
As the Model Context Protocol becomes the connective tissue for enterprise automation and agentic applications, new attack surfaces emerge—oftentimes in indirect or overlooked avenues. One particularly insidious example is the rising threat of DNS rebinding attacks that exploit the Server-Sent Events (SSE) protocol.What Is a DNS Rebinding Attack?
DNS rebinding manipulates how a browser interprets cross-origin requests, tricking it into treating an attacker-controlled external domain as part of the victim’s internal network. This is achieved when a user visits a malicious website that causes the browser to resolve an external domain to an internal IP address after the initial handshake. Once rebinding occurs, the attacker can force the victim’s browser to interact with internal resources—as if operating from a trusted local context.GitHub’s Jaroslav Lobacevski explained that “if the resolved IP address of the web page host changes, the browser doesn’t take it into account and treats the webpage as if its origin didn’t change. This can be abused by attackers.” For enterprises running MCP servers on localhost for agent orchestration, the danger is clear: SSE’s long-lived connections can be hijacked by adversaries to pivot from external phishing domains deep into the heart of internal, trusted data flows.
Recognizing the gravity of this risk, the industry has started responding proactively. SSE was deprecated in late 2024 in favor of Streamable HTTP, with clear vendor guidance to enforce authentication on MCP servers and validate the
"Origin"
header on all incoming requests.Security Best Practices in the Age of AI Agents
The revelations around EchoLeak, tool poisoning, and DNS rebinding share a unifying lesson: the incredible power and flexibility of AI agents in enterprise settings drastically increase the stakes of application design flaws and protocol oversights. Here are the most important takeaways for any organization leveraging Microsoft 365 Copilot, MCP, or similar technologies:1. Treat All Untrusted Content as a Potential Attack Vector
Assume that any content from outside the organization—including emails, calendar invites, and externally shared documents—can contain hidden instructions or exploits. Filtering, sanitizing, and, whenever possible, isolating this untrusted input is paramount.2. Practice Defensive Context Management
Reevaluate how your AI platforms aggregate and blend context from multiple sources. Avoid combining external data with privileged or sensitive internal content within a single LLM query, unless it’s strictly necessary and all sources are vetted.3. Implement Least-Privilege Permissions and Audit Trails
AI-powered agents, especially those interwoven with third-party tools or protocols like MCP, must be strictly permissioned to access only the data and repositories necessary for their job. Continuous auditing, logging, and periodic review of agent interactions are essential.4. Validate Source and Authentication on All Networked AI Interactions
Particularly for MCP servers and streaming protocols, enforce authentication and validate the origin of all incoming requests. Never assume that internal connections are inherently safe.5. Patch Quickly, but Prepare for Unknown Unknowns
While Microsoft’s rapid patching of EchoLeak is commendable, the underlying architectural issues flagged by both Aim Security and CyberArk suggest that novel variants are all but inevitable. Develop layered defense-in-depth strategies that assume future classes of attacks may not be detectable by today’s countermeasures.Balancing AI Utility with Security: Unresolved Risks
Despite the best efforts of vendors and researchers, significant unresolved risks loom over the enterprise use of AI-powered assistants. Many of these vulnerabilities are emergent properties of how language models process, synthesize, and respond to composite contexts. As LLMs grow more sophisticated and are entrusted with increasingly sensitive workflows—legal review, financial analysis, IT administration—the potential fodder for attackers only multiplies.Security experts broadly agree that fixes at the application layer, while important, cannot alone address the “fundamental architectural issues” created by exposing agentic AI to untrusted domains. Risk mitigation will depend on a combination of:
- Segregating sensitive workflows from externally facing inputs;
- Developing AI safety models that can detect and defuse context injection attempts in real-time;
- Building cross-industry standards for MCP schema validation and usage;
- Continuous red-teaming, simulation, and proactive vulnerability research, including engagement with the external security community.
Critical Analysis: Where Are We Most Vulnerable?
The EchoLeak incident should serve as a wake-up call for organizations and vendors alike. Despite Copilot’s restricted interface—being accessible only to organization employees—the very nature of its context blending makes it susceptible to indirect, invisible domain-crossing attacks.Notable Strengths:
- The rapidity with which Microsoft responded highlights a strong, ongoing focus on AI security, reinforced by robust vulnerability disclosure programs.
- Disclosure of both EchoLeak and MCP-based threats by independent security researchers demonstrates healthy engagement and transparency in the cybersecurity ecosystem.
- Zero-click AI vulnerabilities break existing security paradigms by removing the human from both the threat chain and the opportunity to intervene.
- Current industry standards for MCP and tool schemas are not sufficiently mature to prevent Full-Schema or Advanced Tool Poisoning attacks.
- DNS rebinding and SSE-based exploits demonstrate that protocol-level assumptions can undermine even the most well-defended applications.
- There remains a distinct lack of real-time, context-sensitive defenses capable of detecting and neutralizing scope violations within an LLM context.
Conclusion: The Road Ahead for Secure Enterprise AI
The exposure and remediation of vulnerabilities like EchoLeak represent more than isolated incidents—they are harbingers of a new era in cybersecurity, where the unique blend of artificial intelligence, automation, and cross-domain data flows creates both unparalleled opportunity and risk. For every AI-powered productivity boost delivered by Microsoft 365 Copilot or agent frameworks enabled by MCP, there is an attendant need for architectural vigilance, continuous monitoring, and collaboration between vendors, enterprises, and the research community.Patching the latest CVE or following today’s best practices will not be enough. Organizations must internalize a new security mindset, where every interaction between human, AI, and infrastructure is scrutinized for hidden channels of exploitation. The sheer scale, speed, and subtlety of zero-click AI vulnerabilities demand nothing less than a wholesale reevaluation of both technical and organizational safeguards.
With threats growing ever more sophisticated, the good news is that awareness is building—and with it, a clearer path toward resilience in the face of evolving, AI-driven cyber adversaries.
Source: The Hacker News Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction