A newly disclosed vulnerability in Node.js — tracked as CVE-2024-22025 — allows an attacker who controls a URL passed into the built-in fetch() implementation to cause a Denial of Service (DoS) by driving the process into resource exhaustion through Brotli decompression. In practical terms, untrusted or attacker-controlled URLs can be weaponized as decompression bombs: responses encoded with Brotli cause Node.js’s fetch() to automatically decode the compressed payload into memory, potentially ballooning RAM usage until the process is terminated or becomes unresponsive. This article explains how the issue works, why it matters, how to detect exploitation, and proven mitigation strategies for developers and operators running Node.js workloads.
Background
Node.js has for several releases included a standardized fetch() API similar to the browser Fetch API. The function is convenient for performing HTTP(S) requests in server-side JavaScript and is widely used in microservices, server-side rendering, API proxies, and third-party integration code.
Brotli is a widely used compression algorithm that often produces smaller compressed payloads than gzip for text-based content. Because Brotli yields good compression ratios, servers and CDNs commonly serve resources with Brotli encoding and clients advertise support for it via the Accept-Encoding header.
CVE-2024-22025 arises from the interaction of three facts:
- Node.js’s fetch() implementation accepts and automatically decodes Brotli-encoded responses.
- The decompression step can expand a small compressed payload into a very large uncompressed blob.
- When fetch() decodes entirely into memory without size limits or streaming safeguards, an attacker controlling the URL can trigger uncontrolled memory growth.
Overview: How the vulnerability works
Automatic Brotli decoding
Node.js’s fetch() advertises the capability to accept compressed responses; when the response arrives with a Content-Encoding: br header (Brotli), fetch() performs decompression before returning the response body to the application. That behavior mirrors typical browser semantics, where automatic decoding is convenient and expected.
Decompression amplification
Compression algorithms can be highly asymmetric: a compressed blob of a few kilobytes can expand into megabytes or gigabytes of uncompressed data if it was crafted to do so. Attackers can intentionally return small Brotli-encoded payloads that decompress into arbitrarily large in-memory structures — a decompression bomb. If the client code blindly asks fetch() to retrieve content from an attacker-controlled URL and the fetch implementation decodes into memory, the client can be forced to allocate enormous amounts of RAM.
Untrusted input and server-side context
On server-side platforms, fetch() is often used to fetch URLs that originate from user input (webhooks, callback URLs, remote resource links, image fetchers, SSR templates, etc.). If an application accepts a URL parameter from an external user and passes it directly into fetch(), that creates a direct attack surface: the attacker supplies a URL that points to a server under their control, and that server responds with a small Brotli-compressed payload that decompresses to an enormous size, causing the Node.js process to run out of memory.
Technical analysis
Root cause
The root cause is not Brotli or fetch() alone, but the combination of:
- Automatic, complete decompression into memory without an enforced maximum size.
- Lack of a validation or quota mechanism within the fetch() call to cap or stream large decompressed payloads.
- Common application patterns where untrusted URLs are passed into fetch() without explicit verification.
Behavior under the hood
When fetch() receives a response with Content-Encoding: br, the implementation typically sets up a decompressor and reads the entire response body, returning it as a buffer or text to the caller. If the decompressed size exceeds available memory, standard Node.js behavior will depend on system configuration: the V8 heap may grow until process limits trigger an out-of-memory kill (OOM), or heavy paging may degrade performance until the process is terminated by the host or container runtime.
Why streaming doesn't always help
Even when fetch() returns a ReadableStream, decompressors that expand data as they read can still create large internal buffers unless code consumes and discards data quickly or applies quotas. Server code that uses convenience APIs like response.text() or response.arrayBuffer() is particularly at risk because those APIs require the entire body to be materialized.
Exploit prerequisites and scope
An attacker needs the ability to control the URL passed to fetch(). This can happen in numerous legitimate-looking contexts:
- A webhook that accepts remote image/page URLs.
- A microservice that proxies arbitrary URLs.
- Server-side rendering that fetches external CSS/JS based on user-submitted references.
- A URL-shortening service or crawler that fetches user-supplied targets.
With such a vector available, the attack itself is simple:
- Host a response with a small Brotli-encoded block that decompresses to a very large payload.
- Use content that is synthetically crafted to maximize decompression amplification.
- Reuse the technique to sustain the attack and keep the target process starved of resources.
Exploitation scenarios and examples
Scenario 1 — Public API proxy
A public API provides an endpoint /fetch?url={target} that proxies the content of the target URL. The server directly calls fetch(target) and returns the content. An attacker supplies a URL that returns a small Brotli-compressed payload that expands to many gigabytes. The proxy allocates huge memory, OOMs, and the service becomes unavailable.
Scenario 2 — Image retrieval pipeline
An image-processing microservice fetches images from arbitrary URLs supplied by users, using fetch() followed by image decoding libraries. If the response is Brotli-encoded (or mislabelled to trigger decompression) and expands dramatically, the fetch call may allocate excessive buffers before image libraries are invoked, crashing the service.
Scenario 3 — Server-side rendering (SSR)
An SSR system pulls remote templates, partials, or user-provided content into rendered pages using fetch(). If an attacker controls the URL for a remote partial, they can use Brotli-compressed content to exhaust the renderer’s memory and disrupt page rendering for other users.
These scenarios highlight an important point: the vulnerability becomes critical in any flow where untrusted actors can influence which URLs the application fetches.
Impact and severity
- Availability: The primary impact is a complete loss of availability for the affected Node.js process or service. Memory exhaustion can result in process termination or heavy swapping that renders the instance unusable.
- Scope: The vulnerability affects server-side Node.js deployments that use fetch() to pull content from untrusted sources without size or decompression limits.
- Persistence: Depending on how the application and infrastructure are configured, the denial can be sustained (as long as the attacker continues to supply malicious URLs) or persistent if the process crashes and restarts slowly or fails to restart correctly.
- Exploitability: Exploitation is straightforward whenever an attacker can control the URL parameter. No authentication or complex preconditions are required beyond that control.
Detection and monitoring
Detecting attempts to exploit decompression-based DoS requires both runtime metrics and request-level logging.
Key telemetry to monitor
- Memory utilization: sudden spikes in resident set size (RSS) or heap usage correlated with incoming requests.
- Process restarts: increasing frequency of OOM kills or unexpected process crashes.
- Request patterns: multiple requests for fetch-like operations originating from the same client or containing suspicious URL parameters.
- Latency spikes: rapid increases in request latency or increased CPU due to decompression work.
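A minimal in-process probe can feed the first two signals above. This is a sketch only; the threshold value is illustrative, and production systems would export these numbers to a metrics backend (Prometheus, StatsD, etc.) rather than sample them ad hoc:

```javascript
// Illustrative in-process memory probe.
const RSS_ALERT_BYTES = 512 * 1024 * 1024; // illustrative alert threshold

function sampleMemory() {
  const { rss, heapUsed } = process.memoryUsage();
  return {
    rss,                          // resident set size, in bytes
    heapUsed,                     // V8 heap actually in use
    alert: rss > RSS_ALERT_BYTES, // flag for the alerting pipeline
  };
}

// Poll periodically; an RSS spike correlated with fetch-heavy requests
// is the signal described above.
// setInterval(() => console.log(sampleMemory()), 10_000);
```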
Logging recommendations
- Log the origin of URLs passed into fetch() and the request that triggered the remote fetch (taking care to sanitize logs so malicious payload content is not stored).
- Correlate log lines with memory spikes and process restarts to identify likely attempts.
- Implement alerts when memory usage exceeds a tight threshold and when process restarts exceed normal baselines.
Forensics after an incident
- Capture the actual URL(s) that caused the spike and, if possible, obtain the compressed payload for offline analysis.
- Assess whether other instances or services were impacted, and whether the attacker reused the same URLs.
Immediate mitigations (short-term)
If you cannot immediately apply a vendor patch or Node.js update, apply one or more of these operational and code mitigations to reduce risk.
- Never pass untrusted URLs directly into fetch(): validate and sanitize any external URL inputs. Only allow domains or hostnames from an allowlist where possible.
- Use an allowlist / blocklist: restrict fetch targets to known-good origins. Reject or sandbox requests to arbitrary domains.
- Apply request timeouts and AbortController: ensure every fetch() call uses an AbortController with a reasonable timeout to avoid requests lingering and gradually consuming memory.
- Enforce maximum content size: do not call response.text() or response.arrayBuffer() without first enforcing a size limit. If using streams, consume incrementally and abort if the cumulative size exceeds a quota.
- Inspect Content-Encoding: treat responses with Content-Encoding: br (Brotli) as potentially dangerous if the source is untrusted; either refuse them or handle them with explicit streaming decompression and quotas.
- Use infrastructure limits: in containerized deployments, enforce memory limits at the container or process manager level so a single request cannot exhaust host memory. Prefer conservative limits and fast restarts.
- Run fetch inside a worker or subprocess: isolate external fetches in a separate process or worker with strict memory limits to contain OOMs.
- Log and throttle repeated requests from the same source: use rate limiting on endpoints that lead to fetch() to prevent sustained exploitation.
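Of the items above, URL validation is the easiest to get wrong by hand. One hedged sketch of an allowlist check (the hostnames are illustrative) leans on the WHATWG URL parser, which lowercases and punycode-normalizes hostnames and so defeats simple case/encoding bypasses:

```javascript
// Illustrative allowlist; a real deployment would load this from config.
const ALLOWED_HOSTS = new Set(['api.example.com', 'cdn.example.com']);

function isAllowedUrl(raw) {
  let url;
  try {
    url = new URL(raw);            // parsing normalizes case and punycode
  } catch {
    return false;                  // unparseable input is rejected outright
  }
  if (url.protocol !== 'https:') return false; // no http:, file:, data:, etc.
  return ALLOWED_HOSTS.has(url.hostname);
}
```

Comparing `url.hostname` against a set of exact, pre-normalized names avoids the substring and regex matching mistakes that allowlist bypasses usually exploit.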
Long-term mitigations (code and architectural)
To permanently reduce exposure, adopt these robust design and coding practices.
Prefer pull-from-trusted patterns
Architect systems so that critical server-side operations do not depend on fetching arbitrary external content. When external data is required, use scheduled retrieval by a dedicated, hardened service that validates and sanitizes content.
Use streaming decompression with quotas
If you must accept compressed responses, use streaming decompression APIs that allow you to:
- Decompress incrementally rather than materializing the entire body.
- Abort decompression when the uncompressed size exceeds a safe threshold.
- Write decompressed content to disk or a bounded buffer rather than memory when necessary.
Sanitize and normalize responses
Before decompression, inspect response headers and metadata. Consider refusing to process responses with suspicious or unexpected content encodings from untrusted origins.
Harden libraries and frameworks
If you maintain libraries or frameworks that expose fetch-like convenience methods to application users, ensure those methods:
- Default to safe behaviors (timeouts, size limits, disallowing Brotli from untrusted sources).
- Provide clear configuration knobs for administrators to set size/time limits.
Practical code-level guidance
Below are safe design patterns to reduce risk. These are conceptual; adapt them to your Node.js runtime and library versions.
1. AbortController + timeout
Always attach an AbortController to fetch() calls with a reasonable timeout to avoid long-running requests:
- Create an AbortController for each fetch call.
- Start a timer that calls controller.abort() after the maximum allowed duration.
- Catch the abort error and log the event.
2. Streaming with size cap
Consume response.body as a stream and enforce a maximum uncompressed size:
- Pipe the response through a streaming decompressor (that supports incremental output).
- Track cumulative bytes of decompressed data.
- If the cumulative size exceeds your quota (e.g., 10 MB), abort the stream and log an error.
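Patterns 1 and 2 can be combined in a single helper. The sketch below assumes Node.js 18+ (global fetch and AbortSignal.timeout); the function name and limits are illustrative. Because Node's fetch() decodes Content-Encoding (including Brotli) before exposing the body stream, counting bytes here counts decompressed bytes, which is exactly the quantity to cap:

```javascript
// Sketch: fetch with a hard timeout and a cap on the *decoded* body size.
async function fetchWithLimits(url, { timeoutMs = 5000, maxBytes = 10 * 1024 * 1024 } = {}) {
  const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
  const reader = res.body.getReader();
  const chunks = [];
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.byteLength;
    if (total > maxBytes) {
      await reader.cancel(); // stop reading and release the connection
      throw new Error(`response exceeded ${maxBytes}-byte cap`);
    }
    chunks.push(value);
  }
  return Buffer.concat(chunks);
}
```

Callers that need text can decode the returned buffer themselves; the key property is that no code path materializes more than maxBytes of decoded data.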
3. Domain allowlist
Reject any user-provided URL that does not match a configured allowlist of hostnames or IP ranges. Use robust parsing and normalization to prevent bypasses (e.g., punycode, URL-encoded hostnames).
4. Isolated fetch worker
Offload untrusted fetches to a separate process with strict memory limits. If a worker OOMs, it is restarted without affecting the main process.
Hardening at the platform and operations level
- Container memory limits: ensure containers have strict memory limits so an individual container’s OOM does not consume host memory. Use cgroups or container runtime settings to cap memory usage.
- Process supervisors: use lightweight process supervisors or orchestrators that can restart the service promptly when required while avoiding cascading failures.
- Autoscaling and redundancy: deploy horizontally so a single compromised instance does not bring down the entire service. Replace affected instances rather than relying on slow failover.
- Network-level protections: when possible, restrict egress from backend services to known hosts, or route external fetch traffic through a controlled proxy that enforces content and size policies.
- Application-layer WAF/filters: employ filtering proxies that can block suspicious content-encoding headers or enforce maximum response sizes upstream.
Testing and validation
- Fuzz test fetch flows: create tests that simulate malicious compressed payloads and verify that fetch() calls respect timeouts and size limits.
- Penetration testing: include scenarios where testers can control URL parameters and attempt decompression bombs to validate real-world defenses.
- Chaos and resilience testing: intentionally trigger fetch-related errors and process restarts to ensure automated recovery and minimal disruption.
Communication and incident handling
If you detect exploitation:
- Isolate the affected instance to stop further resource exhaustion.
- Collect forensic evidence: logs capturing the URL(s), request metadata, and memory/CPU usage metrics.
- Rotate credentials and secrets if external or internal calls used credentials that could be leaked or abused during the incident.
- Notify stakeholders: inform platform and security teams and consider notifying customers if availability was impacted.
- Patch and harden: apply vendor patches or Node.js updates as they become available and implement the mitigations listed above.
Why this problem is not limited to Node.js
While CVE-2024-22025 specifically calls out Node.js’s fetch() behavior, the underlying pattern — automatic, unrestricted decompression of content from untrusted sources — is a general class of vulnerability affecting many languages and platforms. Any environment that:
- Accepts URLs from untrusted users,
- Automatically decompresses responses, and
- Materializes full response bodies into memory
is exposed to the same class of decompression-based DoS and should apply the same safeguards.
Practical checklist for teams (actionable steps)
- Inventory: Identify all places in your codebase where fetch() or equivalent HTTP client calls take URLs originating from untrusted sources.
- Update policy: If possible, update Node.js to vendor-released patched versions as soon as they are available.
- Immediate code fixes:
- Add AbortController timeouts for every fetch().
- Replace response.text() / response.arrayBuffer() calls with streaming readers that enforce a size cap.
- Reject responses with Content-Encoding: br for untrusted sources unless explicitly needed.
- Deploy platform controls:
- Apply container memory limits.
- Route external fetches through a hardened proxy that enforces size/time thresholds.
- Monitoring and alerting:
- Add alerts for unusual memory usage and increased process crashes.
- Log every external URL fetched and correlate with memory events.
- Test:
- Run fuzz and pen-tests simulating decompression bombs.
- Validate that the system tolerates malicious inputs without affecting availability.
Final analysis — strengths, trade-offs, and residual risks
CVE-2024-22025 exposes a serious and practical risk in common server-side patterns. Its strengths as an exploit are simplicity and low cost to attackers: control the URL, return a small compressed payload, and the target process may be made unavailable. The vulnerability emphasizes a recurring security principle: convenience features that mirror browser behavior can be dangerous on the server if used with untrusted inputs.
The recommended mitigations are effective but require trade-offs:
- Allowlisting reduces flexibility but dramatically lowers risk.
- Streaming and quotas introduce complexity into application code and may require library updates.
- Isolating fetches by running them in separate processes adds operational overhead but provides robust containment.
The most pragmatic, defensible approach combines short-term operational safeguards (timeouts, container limits, allowlists) with longer-term architectural changes (streaming processing, dedicated fetch workers, policy-driven proxies). Rapid patching when vendor updates are released should remain a priority.
Conclusion
CVE-2024-22025 is a potent reminder that seemingly small conveniences — automatic Brotli decoding inside a friendly fetch() API — can become a server-side Achilles’ heel when exposed to untrusted inputs. Organizations running Node.js should immediately audit their usage of fetch(), apply conservative runtime limits, implement strong input validation and allowlisting, and isolate untrusted fetch operations behind dedicated, memory-limited workers or proxies.
Short-term actions (timeout enforcement, streaming with size caps, container limits) can reduce risk quickly. Medium-term work (architectural separation of untrusted fetches and policy-driven proxies) will eliminate much of the risk without forfeiting legitimate functionality. Ultimately, fast patching and disciplined operational controls are the best defense against this class of decompression-based DoS.
Act now: identify every code path that accepts remote URLs, put tight quotas and timeouts in place, and prepare to deploy vendor patches as they are released. The cost of inaction is straightforward and immediate — full loss of availability for components that handle external URLs.
Source: MSRC Security Update Guide - Microsoft Security Response Center