On October 24, Microsoft Azure’s automated DDoS protection neutralized an unprecedented, multi‑vector flood that reached a peak of 15.72 terabits per second (Tbps) and nearly 3.64 billion packets per second (pps) against a single public IP in Australia — an event Azure says it mitigated without any customer downtime.
Background
The attack was attributed to the Aisuru family of Mirai‑variant IoT botnets, a rapidly evolving threat that has been responsible for multiple record‑setting volumetric and packet‑rate incidents in 2025. Aisuru’s operations primarily rely on compromised consumer devices — home routers, CCTV/DVRs and similar CPE — often hosted inside residential ISP address spaces. Industry telemetry shows this botnet and related “TurboMirai” variants have produced attacks in the 20–30 Tbps class and pushed packet rates into the billions per second. This Azure incident joins a string of hyper‑volumetric events observed across the industry in 2025, including a later publicized attack that Cloudflare reported at 22.2 Tbps and 10.6 billion pps, underscoring a fast‑moving escalation in both bandwidth and per‑packet aggression.
Why this matters: a short primer on Tbps vs pps
Volumetric magnitude (Tbps) and packet rate (pps) stress different parts of the internet stack. High‑Tbps floods saturate bandwidth and force bandwidth‑level filtering and rerouting; extremely high‑pps attacks hammer router and server CPUs, line cards, and stateful network appliances, because every single packet requires processing even if that processing is minimal.
- Bandwidth (Tbps) attacks can be mitigated by capacity and volumetric scrubbing, but they can still overwhelm last‑mile or carrier interconnects if attackers coordinate large, widely distributed sources.
- Packet‑per‑second (pps) attacks are often the harder problem: they can break routers and firewalls long before raw bit capacity is exhausted, because the per‑packet work (interrupts, lookups, counters) is expensive.
Azure’s October 24 target saw both extremes — massive bandwidth and immense pps — making it a textbook “hyper‑volumetric” test; a quick back‑of‑the‑envelope calculation below shows how the two headline figures relate.
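To see how the two peaks connect, divide reported bits per second by reported packets per second: the October 24 figures imply an average packet size of roughly 540 bytes. A minimal sketch using only the publicly reported numbers (the 64‑byte comparison is an illustrative assumption, not an observed value):

```python
# Back-of-the-envelope: relate the reported bandwidth peak to the packet-rate peak.
peak_bps = 15.72e12        # 15.72 Tbps, as publicly reported
peak_pps = 3.64e9          # ~3.64 billion packets per second, as publicly reported

avg_packet_bytes = peak_bps / peak_pps / 8
print(f"Implied average packet size: ~{avg_packet_bytes:.0f} bytes")   # ~540 bytes

# The same bandwidth carried in minimum-size packets would mean far more per-packet work.
worst_case_pps = peak_bps / (64 * 8)
print(f"Same bandwidth at 64-byte packets: ~{worst_case_pps / 1e9:.1f} billion pps")
```

The second figure is why packet rate, not just raw bandwidth, is the scarcer resource: the same bit rate in small packets would demand nearly an order of magnitude more per‑packet processing.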
The attack: anatomy and observed characteristics
What happened on October 24
Azure’s published account describes a multi‑hour, automated campaign aimed at a single public IP in Australia that peaked at 15.72 Tbps and ~3.64 billion pps. The traffic was predominantly UDP floods using randomized source ports and limited source IP spoofing, which suggests the botnet relied on large numbers of legitimately routable infected hosts rather than spoofed/reflective amplification. Azure says its automated DDoS defenses detected the anomaly and applied adaptive filtering and scrubbing in real time, allowing customer workloads to continue serving legitimate users.
Who was behind it
Microsoft and multiple industry analysts point to the Aisuru botnet — a TurboMirai‑style Mirai derivative — as the origin. Aisuru’s distinguishing characteristics are scale (hundreds of thousands of infected CPEs), use of residential ISP address space, and a preference for high‑rate UDP/TCP/GRE floods that are tuned for both bandwidth and pps impact. Netscout, KrebsOnSecurity and other investigative sources have documented Aisuru’s recent history of record attacks, including brief, experimental floods that reached even higher instantaneous peaks.
Attack techniques to note
- High‑speed UDP floods targeting random ports — maximizes wasted work on the target stack.
- Minimal spoofing — most attack traffic came from unique, routable source IPs, enabling high pps and, given the sheer number of distinct sources, still complicating simple traceback, while also allowing ISPs to correlate and remediate infected subscribers (a prefix‑grouping sketch follows this list).
- Use of residential ISP hosts en masse — amplifies impact across provider network backbones and produces “outbound” congestion that harms innocent third parties.
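Because the sources are largely real, routable subscriber addresses, a defender or ISP can rank which networks host the most infected devices simply by grouping attacking IPs by prefix. A minimal sketch of that grouping, assuming a plain‑text export of attacking source addresses (the file name and the /24 prefix length are illustrative choices, not details from this incident):

```python
# Sketch: group attacking source IPs by /24 prefix to prioritize ISP remediation.
from collections import Counter
from ipaddress import ip_address, ip_network

def top_prefixes(path: str, prefix_len: int = 24, top_n: int = 20):
    counts = Counter()
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            net = ip_network(f"{ip_address(line)}/{prefix_len}", strict=False)
            counts[str(net)] += 1
    return counts.most_common(top_n)

if __name__ == "__main__":
    # "attack_sources.txt" is an assumed export: one attacking source IP per line.
    for prefix, hits in top_prefixes("attack_sources.txt"):
        print(f"{prefix}\t{hits} attacking hosts")
```

The resulting per‑prefix counts map naturally onto the subscriber aggregation points an ISP would use for notification or quarantine.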
How Azure defended (and what it proves)
Microsoft credits its globally distributed scrubbing fabric, continuous telemetry and automated mitigation rules with absorbing and filtering the flood without service loss. Technical elements called out in Azure’s account include:
- Global scrubbing centers that ingest, analyze, and drop malicious flows close to origination.
- Automated detection based on baselining and anomaly detection to kick mitigation into gear without human intervention (a simplified sketch of this idea follows the list).
- Adaptive filtering that discriminates between legitimate and malicious flows in real time, preserving good traffic while rejecting the rest.
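Azure has not published its detection algorithms, but the "baselining plus anomaly detection" idea can be illustrated with a simple exponentially weighted moving average over an observed pps time series; the smoothing factor, multiplier and sample values below are arbitrary placeholders:

```python
# Illustrative baseline/anomaly detector for a pps time series.
# A simplified sketch, not Azure's production detection logic.
class PpsAnomalyDetector:
    def __init__(self, alpha: float = 0.1, multiplier: float = 10.0):
        self.alpha = alpha            # EWMA smoothing factor
        self.multiplier = multiplier  # how far above baseline counts as anomalous
        self.baseline = None          # learned "normal" pps

    def observe(self, pps_sample: float) -> bool:
        """Return True if the sample looks anomalous against the baseline."""
        if self.baseline is None:
            self.baseline = pps_sample
            return False
        anomalous = pps_sample > self.baseline * self.multiplier
        # Only fold normal samples into the baseline, so an ongoing flood
        # does not become the new "normal".
        if not anomalous:
            self.baseline = (1 - self.alpha) * self.baseline + self.alpha * pps_sample
        return anomalous

detector = PpsAnomalyDetector()
for sample in [9e5, 1.1e6, 9.5e5, 3.64e9]:   # last value mimics a flood spike
    if detector.observe(sample):
        print(f"ALERT: {sample:.2e} pps exceeds baseline {detector.baseline:.2e}")
```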
Industry research highlights broader architectural dependencies and capabilities required to counter such events: large threat intelligence feeds, per‑flow telemetry at scale, and cooperation with ISPs to remediate infected CPE devices. Netscout’s post‑incident analysis emphasizes that botnets like Aisuru can generate multi‑Tbps floods and multi‑gpps packet rates that will overwhelm poorly instrumented networks and break chassis line cards in carrier gear if left unchecked.
Wider context: the new normal in DDoS
2025 has shown a shocking pace of escalation. Several public incidents pushed the upper bound of what was previously assumed practical:
- Cloudflare reported an attack peaking at 22.2 Tbps and 10.6 billion pps, which it mitigated automatically, establishing a new public benchmark for scale.
- Investigations of Aisuru and TurboMirai families show repeated, sometimes experimental bursts above 20 Tbps and isolated spikes approaching 30 Tbps, often short in duration but devastating if unmitigated.
Strengths of the defensive model demonstrated
- Automation at scale — Azure’s mitigation without downtime underscores the need for programmatic, telemetry‑driven defenses. Manual intervention is too slow at multi‑Tbps velocities.
- Distributed scrubbing — filtering traffic close to the source reduces collateral damage to intermediary networks and prevents saturated transit links. Azure’s global fabric was central to the outcome.
- Intelligence sharing and research — public vendor analyses (Netscout, Cloudflare and independent reporters) help map attacker toolchains and device populations used in botnets, enabling ISPs and vendors to prioritize firmware patches and device hardening.
Weaknesses and risks the incident exposes
While the mitigation was successful, the event also highlighted systemic and residual risks:
- Reliance on provider scrubbing — many organizations lack independent mitigations and must rely on cloud or anti‑DDoS vendors. This concentration of defensive capability is effective, but it creates single points of dependency and potential vendor lock‑in.
- Upstream and ISP impact — when botnets use residential devices en masse, upstream ISP aggregation links can experience outbound congestion and degradation. In some earlier Aisuru incidents, carriers reported line‑card stress and service disruptions for innocent residential customers; that kind of collateral harm is visible in the telemetry Netscout and ISPs published.
- Device insecurity and patch gaps — Aisuru propagates through poorly secured consumer CPEs and IoT devices, many of which lack update mechanisms or are managed by unaware consumers. Long‑term remediation requires ISPs, manufacturers and regulators to invest in firmware hygiene and secure default configurations.
- Attribution and legal enforcement limits — even when mitigators can trace attack sources to subscriber IPs, the path from technical attribution to enforcement or remediation is long and complicated by transnational jurisdictional issues. Public statements by vendors often omit firm attribution beyond the botnet family.
Practical guidance for Windows admins, site owners and cloud customers
This incident is a wake‑up call for the upcoming holiday shopping season and for any organization exposed to public internet traffic. The following actions are prioritized, practical and achievable.
Immediate (0–7 days)
- Confirm your public endpoints are protected by a DDoS provider or have a plan for traffic scrubbing — if you rely on a cloud provider, validate mitigations and failover playbooks.
- Verify monitoring and alerting thresholds — define normal baselines and ensure automated alarms for sudden Tbps/pps anomalies.
- Enable layered protection: ensure you have both network‑layer DDoS protection and an application WAF for Layer‑7 threats.
Short term (1–3 months)
- Deploy or test automated failover and DNS/TLS routing plans to redirect traffic to scrubbing networks.
- Run tabletop exercises and DDoS playbooks with incident response (IR) teams to exercise the process of coordinating with providers and ISPs.
- Harden public‑facing authentication paths and APIs; remove exposed non‑essential services from public networks.
Mid to long term (3–12 months)
- Work with your ISP to identify outbound suppression and subscriber remediation processes — demand hardening of CPE and opt‑in firmware update programs.
- Implement per‑flow and per‑tenant rate limiting and graduated throttling to suppress malicious pps without cutting legitimate traffic (a token‑bucket sketch follows this list).
- Consider multi‑region or multi‑provider architectures for critical services that must remain online under any provider stress.
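A common building block for the per‑flow and per‑tenant rate limiting item above is a token bucket, which tolerates short bursts but caps sustained packet rates. A minimal sketch; the rates, burst sizes and tenant keys are placeholders, not recommendations:

```python
# Sketch: per-tenant token-bucket rate limiting for graduated throttling.
import time

class TokenBucket:
    def __init__(self, rate_pps: float, burst: float):
        self.rate = rate_pps          # sustained packets per second allowed
        self.capacity = burst         # maximum burst size, in packets
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, packets: int = 1) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packets:
            self.tokens -= packets
            return True
        return False                  # over budget: drop, delay, or divert

# One bucket per tenant or flow key; the key and limits are hypothetical.
buckets = {"tenant-a": TokenBucket(rate_pps=50_000, burst=100_000)}
if not buckets["tenant-a"].allow():
    pass  # throttle: drop, queue, or escalate to deeper inspection
```

Graduated throttling can then be built by tightening a key's rate each time it repeatedly exhausts its bucket, rather than cutting it off outright.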
Technical deep dive: what defenders must measure and build
Scrubbing capacity and distribution
Effectively mitigating multi‑Tbps attacks requires scrubbing capacity that is:
- Sufficiently large in aggregate and distributed to avoid concentrating traffic at a few PoPs.
- Able to perform fast, per‑packet classification at extremely high pps.
- Programmatically integrated with global routing (BGP) to steer traffic into scrubbing planes without manual intervention. Microsoft’s own description emphasizes these elements.
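Microsoft has not described its internal routing automation, but the general idea can be illustrated with a common open‑source pattern: a helper process emits announce/withdraw commands that a BGP speaker such as ExaBGP relays to upstream routers, pulling the victim prefix through the scrubbing plane. The prefixes, next hop and hold time below are illustrative assumptions:

```python
# Sketch: divert a victim prefix toward a scrubbing next-hop, ExaBGP-process style.
# Assumes a BGP speaker configured to read announce/withdraw lines from this
# process's stdout; addresses are documentation examples, not real infrastructure.
import sys
import time

VICTIM_PREFIX = "203.0.113.0/24"      # prefix under attack (example)
SCRUBBING_NEXT_HOP = "198.51.100.1"   # scrubbing-center ingress (example)

def divert():
    sys.stdout.write(f"announce route {VICTIM_PREFIX} next-hop {SCRUBBING_NEXT_HOP}\n")
    sys.stdout.flush()

def restore():
    sys.stdout.write(f"withdraw route {VICTIM_PREFIX} next-hop {SCRUBBING_NEXT_HOP}\n")
    sys.stdout.flush()

if __name__ == "__main__":
    divert()            # steer traffic into the scrubbing plane
    time.sleep(3600)    # hold the diversion while mitigation runs (placeholder)
    restore()           # return to normal routing afterwards
```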
Per‑packet processing optimizations
To survive billions of pps, mitigation systems must minimize work per packet:
- Use hardware‑accelerated filtering and kernel bypass techniques to avoid CPU interrupts per packet.
- Prioritize stateless filters for initial cuts and escalate to stateful inspection only for suspected traffic that needs deeper analysis (a toy sketch of this ordering follows the list).
- Implement early‑drop heuristics to prevent resources from being spent on obviously malicious flows.
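As a toy illustration of that stateless‑first, early‑drop ordering (production systems implement this in hardware, XDP/eBPF or DPDK data paths, not Python), the first pass applies only cheap per‑packet checks and escalates survivors to deeper inspection. The port sets and packet representation are assumptions made for the example:

```python
# Toy illustration of stateless-first packet triage.
BLOCKED_UDP_PORTS = set()             # populated from threat intel (assumed feed)
PROTECTED_SERVICE_PORTS = {53, 443}   # ports the victim actually serves (example)

def stateless_verdict(pkt: dict) -> str:
    """Cheap per-packet checks: no connection state, no payload parsing."""
    if pkt["proto"] == "udp" and pkt["dst_port"] not in PROTECTED_SERVICE_PORTS:
        return "drop"                  # early-drop: victim does not serve this port
    if pkt["dst_port"] in BLOCKED_UDP_PORTS:
        return "drop"                  # early-drop: known-bad destination port
    if pkt["len"] < 28:                # shorter than minimal IP + UDP headers
        return "drop"                  # early-drop: malformed
    return "escalate"                  # hand surviving packets to stateful inspection

packet = {"proto": "udp", "dst_port": 50001, "len": 512}
print(stateless_verdict(packet))       # -> "drop"
```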
Outbound mitigation (ISP responsibilities)
When botnets originate inside ISP address space, mitigation must include outbound suppression:
- ISPs need telemetry to detect sudden, large outbound flows per subscriber aggregation point, and the ability to apply per‑subscriber rate limits or quarantines (a simplified detection sketch follows this list).
- Coordination with subscriber owners (customers) to remediate infected devices is essential, but it must be accompanied by fast technical containment options to avoid network collateral damage. Netscout’s guidance makes the case for treating outbound suppression as equally important as inbound scrubbing.
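A simplified sketch of the outbound‑detection side, assuming the ISP already collects flow records keyed by subscriber; the record fields, window and threshold are illustrative, not vendor‑specific:

```python
# Sketch: flag subscribers whose outbound packet rate suddenly exceeds a ceiling.
from collections import defaultdict

OUTBOUND_PPS_THRESHOLD = 100_000      # per-subscriber ceiling (placeholder value)

def flag_suspect_subscribers(flow_records, window_seconds: int = 60):
    """flow_records: iterable of dicts with 'subscriber_id' and 'packets'."""
    packets_per_subscriber = defaultdict(int)
    for rec in flow_records:
        packets_per_subscriber[rec["subscriber_id"]] += rec["packets"]
    return [
        sub for sub, pkts in packets_per_subscriber.items()
        if pkts / window_seconds > OUTBOUND_PPS_THRESHOLD
    ]

records = [
    {"subscriber_id": "cpe-1001", "packets": 9_000_000},   # ~150k pps over 60 s
    {"subscriber_id": "cpe-1002", "packets": 120_000},     # ~2k pps over 60 s
]
print(flag_suspect_subscribers(records))  # -> ['cpe-1001']
```

Flagged subscribers then feed the notification, rate‑limit or quarantine workflows described above.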
Policy and vendor responsibilities
The recurrence and scale of these botnets demand a policy and supply‑chain response:
- Manufacturers must ship devices with secure defaults, remove hardcoded credentials, and provide reliable firmware update paths.
- ISPs should implement stronger egress filtering, subscriber notification and quarantine flows.
- Regulators might consider minimum security standards for consumer networking gear and mandatory disclosure requirements for large‑scale botnet infections.
- Cloud and DDoS vendors must continue to invest in automated detection, open telemetry and inter‑vendor sharing to avoid duplication and speed mitigation.
What we still don’t know — and where caution is needed
Public reporting provides firm numbers for this event because Microsoft published a mitigation summary; however, several items remain uncertain or are intentionally withheld by vendors for operational security:
- The precise duration and full timeline of the attack, from first packet to last scrubbed flow, are only partially summarized by vendors; full packet captures and raw telemetry have not been publicly released.
- The exact infection vectors and exploits used to grow Aisuru’s army are still being pieced together by researchers; attribution of the operators behind Aisuru remains tentative in public reporting and should be treated with caution until law‑enforcement or multi‑party attribution is published.
The business impact: cost, trust and the holiday calendar
For any customer who sells directly online or depends on low‑latency services, these events pose real business risk:
- Reputational risk — outages during peak shopping days are damaging even if short; customers notice dropped checkouts and failed sessions.
- Operational cost — emergency mitigation, traffic reroutes and legal consultations carry direct costs, while longer‑term mitigation investments (scrubbing subscriptions, multi‑region replication) are an added budget line.
- Insurance and contractual exposure — service level agreements and cyber insurance policies may be impacted by frequency and magnitude of these attacks; legal teams should re‑examine force majeure clauses and readiness for DDoS losses.
Final analysis and takeaway
The October 24 mitigation demonstrates that with enough scale, automation, and distributed scrubbing capacity, hyperscalers can absorb hyper‑volumetric attacks without customer impact. Azure’s account and corroborating industry analyses make clear that the threat landscape has shifted: attackers now routinely target both bandwidth and packet processing limits using botnets made from insecure consumer gear. That achievement does not mean the problem is solved. The attack underlines several urgent needs:
- Continued investment in automated, edge‑distributed mitigation.
- A stronger focus on ISPs and device vendors to reduce the pool of exploitable hosts.
- Regulatory and industry cooperation to improve baseline device security and incident response.
- Operational preparedness by enterprises: layered protection, tested runbooks, and contractual clarity with providers.
Checklist: immediate actions for IT teams
- Confirm DDoS and WAF coverage for every public IP and CDN endpoint.
- Test provider mitigation procedures in a non‑production drill.
- Instrument per‑flow telemetry and set pps and Tbps alerts.
- Coordinate with ISPs about outbound suppression and subscriber remediation.
- Harden and update CPE and IoT device inventories; require secure defaults on procurement.
Microsoft’s October 24 mitigation stands as both a technical victory and an urgent reminder: cloud defenders can blunt even the largest current assaults when scrubbing capacity, automation and telemetry are correctly combined, but the broader ecosystem — device makers, ISPs, enterprises and regulators — must move faster to reduce the raw attack surface that enables this new era of hyper‑volumetric DDoS.
Source: Red Hot Cyber Microsoft Azure blocks a 15.72 terabit per second DDoS attack