Cursor’s Mermaid-based diagram renderer in certain Cursor releases can be induced to fetch attacker-controlled images, creating a low‑noise exfiltration channel when combined with prompt injection — a vulnerability tracked as CVE-2025-54132 that has been fixed in Cursor 1.3 (with later follow-ups addressing bypasses).
Mermaid is a popular text-to-diagram renderer that lets users embed images and external resources inside diagrams. Cursor — an AI-focused code editor — used Mermaid to render inline diagrams inside its chat/assistant UI. When a rendered diagram contains an external image URL, the client or rendering service will fetch that image. CVE-2025-54132 leverages that normal behavior: if an attacker can cause the assistant to produce (or an application to ingest) Mermaid markup that references attacker-controlled images, the client will perform network requests to those image URLs and, in some configurations, include or leak sensitive data in those requests. Multiple CVE databases and vendor advisories describe the issue as a server‑side request‑forgery / image‑fetch exfiltration vector that requires prompt injection or other model/context manipulation to trigger.
This is not a classical remote code execution bug in a renderer; it is an information‑exfiltration vector that depends on the ability to influence what the assistant or renderer outputs (prompt injection) or to induce the client to render untrusted Mermaid markup. That constraint is important: the attacker must first control or poison content the assistant consumes, or otherwise cause the assistant to emit diagram code containing attacker URLs.
Source: MSRC Security Update Guide - Microsoft Security Response Center
Background / Overview
Mermaid is a popular text-to-diagram renderer that lets users embed images and external resources inside diagrams. Cursor — an AI-focused code editor — used Mermaid to render inline diagrams inside its chat/assistant UI. When a rendered diagram contains an external image URL, the client or rendering service will fetch that image. CVE-2025-54132 leverages that normal behavior: if an attacker can cause the assistant to produce (or an application to ingest) Mermaid markup that references attacker-controlled images, the client will perform network requests to those image URLs and, in some configurations, include or leak sensitive data in those requests. Multiple CVE databases and vendor advisories describe the issue as a server‑side request‑forgery / image‑fetch exfiltration vector that requires prompt injection or other model/context manipulation to trigger. This is not a classical remote code execution bug in a renderer; it is an information‑exfiltration vector that depends on the ability to influence what the assistant or renderer outputs (prompt injection) or to induce the client to render untrusted Mermaid markup. That constraint is important: the attacker must first control or poison content the assistant consumes, or otherwise cause the assistant to emit diagram code containing attacker URLs.
What the advisory and databases say
- The canonical CVE summary (NVD / MITRE mirrors) describes the defect as permitting exfiltration via image fetches when Mermaid is used to render diagrams inside Cursor chat. It explicitly states the exploit requires prompt injection or similarly malicious input and that Cursor releases prior to 1.3 are affected; Cursor 1.3 contains the initial fix.
- OpenCVE / CVEFeed and other aggregators list the weakness under CWE-918 (SSRF) and note a range of CVSS assessments (some vendors rate it medium; some initial aggregators showed higher scores depending on assumptions). They also document later follow-up records that describe bypasses and incremental fixes (leading to later fixed releases beyond 1.3).
- Independent national CERTs (for example, INCIBE) classify the weakness in the SSRF family and publish guidance that aligns with the vendor advisory: apply the Cursor update, sanitize diagram inputs, and restrict automatic external fetches during assistant rendering.
How the attack works (technical breakdown)
1. The primitives: Mermaid + image URLs + assistant output
Mermaid diagrams can include embedded images via standard Markdown/image syntax or Mermaid’s image directives. When the assistant (or the application rendering the assistant’s output) converts that markup into rendered HTML, the client or a server-side renderer issues HTTP(S) requests to fetch those images.2. The trigger: prompt injection or poisoned content
To exploit the fetch, an attacker must get the AI assistant to output Mermaid markup that contains attacker-controlled image URLs. That can be done in several ways:- Injecting hidden instructions into content the assistant ingests (prompt injection).
- Supplying a malicious file, code snippet, or web page that the assistant parses and summarizes.
- Manipulating model hallucination/backdoor behavior so the model itself emits the malicious diagram markup.
3. The exfiltration channel: image fetches
Once the client renders the diagram, it fetches images from the referenced URLs. An attacker-controlled server can observe the timing, query parameters, headers, referer, or even the request path. Those observable elements can encode secrets: session identifiers, short tokens, or encoded characters requested in sequence. Because image fetches are legitimate browser activity, this channel can be surprisingly stealthy if platform controls allow external image fetches in assistant-rendered chat UI.4. Amplification and automation
A malicious model or a prompt-injected assistant can programmatically emit many small image URLs (1×1 pixel URLs, unique per-character), converting each fetch into a symbol in a covert channel. This pattern is similar to previously documented exfiltration techniques that used image proxies or signed CDN URLs to hide the external endpoint behind trusted infrastructure — a technique that has been abused in other assistant contexts.Exploit preconditions and real-world feasibility
This vulnerability is not directly exploitable at will from the network. A successful attack depends on conditions beyond a remote attacker’s immediate control:- The attacker must be able to influence assistant input or model context (prompt injection), or cause a user to open content that the assistant will ingest and summarize.
- The target environment must allow the assistant or client to fetch external images without restrictive CSP or egress filtering.
- In many deployments, the assistant executes with the privileges or access of the calling user; that means the attack can only exfiltrate what the assistant can read or what the user’s environment exposes.
Cross‑verification and public evidence
Two independent sources that describe the load‑bearing facts:- The NVD / MITRE–mirrored CVE entry documents the Cursor/Mermaid image fetch vector and the remediation version 1.3.
- OpenCVE / CVEFeed and vendor advisories (GitHub security advisory GHSA-43wj-mwcc-x93p referenced in CVE mirrors) provide the same high‑level description and list the CWE as SSRF / Server-Side Request Forgery. They also capture updates where further bypasses were found and fixed in subsequent Cursor/Mermaid updates.
Why this matters for WindowsForum readers and enterprises
- Many development teams, support staff, and cloud workflows use AI assistants and integrated editors such as Cursor as part of their daily workflows. Those tools often have access to private repos, environment variables, or secrets through connected connectors or workspace context.
- Image-based exfiltration is low-bandwidth and low-noise — it is well suited to stealing short high‑value artifacts (API keys, tokens, service account secrets) that attackers reuse.
- In enterprise settings where assistants are allowed to read private repositories, internal docs, or CI logs, a single successful injection can leak credentials that cascade into broader compromise.
Practical mitigation and remediation — prioritized checklist
Apply the following defense-in-depth playbook immediately and during the next maintenance window.- Patch first (primary remediation)
- Upgrade Cursor to at least the release that contains the fix (initially 1.3). Confirm with vendor release notes whether additional follow-up patches are required (some trackers documented additional bypass fixes in later releases such as 1.7). Validate the exact version mapping for your deployment before rolling out.
- Reduce automatic external fetches in assistant UI
- Configure the assistant or the rendering client to not auto-fetch external images or to treat all external resources as untrusted by default.
- Where image rendering is required, enforce a strict Content Security Policy (CSP) that restricts allowed origins or forces usage of a vetted image-proxy service.
- Harden the image proxy and egress
- If you use a proxy (enterprise caching/proxying) for third-party images, ensure the proxy strips identifying query parameters, binds signatures to origin, and logs requests for audit.
- Enforce network egress controls to prevent untrusted endpoints from being reached by assistant-rendered content, or require explicit allowlisting of known good domains.
- Sanitize and validate Mermaid inputs
- Do not allow untrusted or user-submitted Mermaid markup to be rendered in high‑privilege contexts without sanitization.
- Implement server-side sanitizers that remove image directives or convert them into safe placeholders for later manual review.
- Limit assistant access and privilege scope
- Apply least privilege to assistant connectors: only grant access to repos, secrets, and systems that are strictly necessary.
- For high-value teams (devops, cloud admins), temporarily disable assistant features that auto-render content from untrusted external sources until mitigations are in place.
- Rotate and assume compromise for exposed secrets
- If there’s any possibility that secrets were visible to the assistant during the exposure window, rotate those credentials and audit access logs.
- Treat short-lived tokens as potentially compromised if they were present in any workspace the assistant could access.
- Monitor and detect
- Add detections for unusual patterns of outbound image requests from assistant UI renderers: many short-lived 1×1 pixel requests, repetitive unique‑ID paths, or sequences of small image fetches originating from user clients or rendering processes.
- Correlate rendering logs, assistant prompt context, and network egress telemetry to detect suspicious exfiltration attempts.
- Apply content and prompt vetting
- Pre-scan any content the assistant ingests (PRs, files, web pages) for hidden content, invisible comments, or anomalous image URL patterns. Many real-world assistant attacks hide instructions in non-visible parts of content; scanning for these patterns reduces prompt-injection risk.
Detection and hunting guidance
- Network egress telemetry: look for outbound HTTP(S) GET requests to unfamiliar domains that occur immediately after assistant responses containing diagrams. Unique query strings or sequential request patterns are strong indicators.
- Client-side logs: the rendering client should log every external resource fetch initiated during assistant rendering; retain those logs for at least 30 days for forensic reconstruction.
- Assistant audit trail: log full prompt/context and model outputs bound to user sessions. Correlate suspicious outputs that contain long lists of image URLs or many small image embeds with outbound requests.
- Anomaly detection: flag sessions where the assistant emits many images that are all 1×1 or return image sizes unusual for normal content. These are commonly used to encode per-character exfiltration.
- Egress blocking hits: monitor for blocked egress attempts to unknown hosts; repeated blocked attempts after a rendered output can indicate an attempted exfiltration that was prevented by network controls.
Threat model, limitations, and residual risk
- Attack complexity: moderate. The hardest part is inserting or controlling content the assistant ingests without detection. Skilled attackers targeting high-value environments can do this via poisoned PRs, malicious uploads, or compromised model supply chains.
- Likelihood of mass exploitation: low-to-moderate. Because prompt injection needs some foothold, widespread scanning and exploitation are less likely than targeted attacks. But that does not reduce risk for critical teams or automation pipelines where the assistant can access secrets.
- Residual risk after patching: Patches mitigate known vectors, but the fundamental design interplay — assistant output that can trigger client fetches — remains a vector class. New bypasses or creative encodings may surface; follow-up patches (and vendor advisories) have already documented additional bypasses and fixes, so continuous monitoring of vendor advisories is required.
Lessons learned and broader implications
- Presentation features are attack surface. Rich rendering features (images, iframes, proxies) are not just UI conveniences — they can be abused as covert channels when combined with agentive systems.
- Trusted proxies are attractive to attackers. When platforms proxy external content (to apply CSPs or caching), that trusted infrastructure can be repurposed into a covert egress channel unless the proxy enforces strong binding and request verification.
- The assistant + DOM model is a compound risk. Many incidents in 2024–2025 show that assistant integrations inadvertently elevate the impact of web-style vulnerabilities because the assistant can ingest and re-render content programmatically, bridging otherwise independent trust boundaries. Similar exfiltration patterns have been documented with Copilot and image-proxy abuse.
Recommended long‑term controls for platform teams
- Treat any user-constructed markup as untrusted: automatically sandbox or refuse to auto-render external resources for assistant output unless the resource origin is allowlisted and the request is constrained.
- Harden image proxies: require per-request, short-lived signatures bound to the original context and deny requests missing the expected referer or context tokens.
- Design assistant UIs to separate generated content from trusted UI chrome: make provenance explicit so users and downstream automation can distinguish between system-owned content and model-generated suggestions.
- Apply strict least privilege to assistant connectors and log every data access with an immutable audit trail.
- Offer enterprise settings to disable remote resource fetching for assistant-rendered content and to pre-approve domains for visual assets.
Conclusion
CVE-2025-54132 is a practical example of how seemingly mundane features — image embeds in diagrams — can be weaponized when combined with agentive systems and prompt‑injection techniques. The core fix (patch Cursor / Meridian rendering components) is necessary but not sufficient: defenders must also adopt network controls, rendering hardening, prompt sanitization, and monitoring to fully manage the risk. Treat this incident as a reminder that modern application security must consider presentation-layer exfiltration alongside classical memory and code-execution flaws, and that layered defenses — patching, configuration, egress controls, and logging — are essential to reduce both likelihood and impact.Source: MSRC Security Update Guide - Microsoft Security Response Center