Microsoft’s Azure platform absorbed and neutralized an unprecedented distributed denial‑of‑service (DDoS) blitz that peaked at 15.72 terabits per second (Tbps) and roughly 3.64 billion packets per second (pps) on October 24, 2025, a volumetric assault Microsoft attributes to the rapidly growing Aisuru IoT botnet.
Azure’s engineering teams say the attack targeted a single public IP address in Australia and was automatically detected and mitigated by Azure DDoS Protection without customer‑visible downtime. Microsoft describes the traffic as multi‑vector, dominated by high‑rate UDP floods launched from more than 500,000 source IP addresses, and notes the campaign involved minimal source spoofing and random source ports, features that aided traceback and provider enforcement. This article summarizes the verified technical facts, cross‑references independent reporting, analyzes the implications for cloud customers and network operators, and offers practical hardening and operational recommendations for organizations preparing for high‑volume DDoS threats ahead of seasonal peaks.
Background / Overview
This incident sits amid a rash of escalating, hyper‑volumetric DDoS events in 2025. Security vendors and mitigation providers have recorded multiple record‑setting attacks in recent months, including an event Cloudflare says peaked at 22.2 Tbps and 10.6 billion pps in September, illustrating that volumetric DDoS sizes are climbing at internet scale.
The attack, verified: what we know and how we confirmed it
Key verified facts
- Peak throughput: 15.72 Tbps. This figure comes directly from Microsoft’s Azure Infrastructure blog post and is corroborated by independent reporting from industry outlets that covered Microsoft’s disclosure.
- Peak packet rate: ~3.64 billion packets per second (pps). Microsoft provided the packets‑per‑second metric as part of its technical summary.
- Origin: Aisuru botnet, a Turbo‑Mirai–class IoT botnet built from compromised routers, cameras and DVR/NVR devices. Microsoft traced the attack to Aisuru in its blog post.
- Scope of sources: >500,000 source IPs participated in the attack, spanning multiple regions. Microsoft’s telemetry is the authoritative source for this count.
Cross‑verification and independent confirmation
To avoid single‑source dependence, the above figures were cross‑checked against contemporaneous reporting and independent security research:
- Independent technology press reporting reproduced Microsoft’s metrics and added operational context about the target (a single Australian IP) and the mitigation outcome (no customer outages).
- Coverage of the broader DDoS landscape — including Cloudflare’s widely reported 22.2 Tbps mitigation in September — confirms a clear upward trend in attack volume and packet rates during 2025, lending context to Microsoft’s characterization that attackers are “scaling with the internet itself.”
Technical anatomy: how the attack was constructed
Attack vector and protocol profile
Microsoft’s post and supplemental reporting characterise the campaign as a series of extremely high‑rate UDP floods aimed at exhausting bandwidth and packet‑processing capacity at the targeted ingress. UDP floods saturate links and force routers and firewalls into heavy packet‑processing loads because UDP is stateless and cheap for attackers to generate at scale. The combination of very high bps and very high pps makes such floods especially difficult to absorb without purpose‑built mitigation capacity. Key technical attributes Microsoft called out:
- Multi‑vector campaign dominated by UDP bursts.
- Minimal source spoofing and random source ports, which—counterintuitively—helped defenders by simplifying attribution and enabling enforcement actions by upstream providers. Microsoft explicitly notes these features.
- Extremely high pps (packets per second) load, which stresses routers, stateful firewalls and packet inspection engines more than raw bps alone. Large pps values can overwhelm network ASICs and software paths even when bits per second look manageable.
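The two headline figures also imply an average packet size of roughly 540 bytes at peak. The quick back‑of‑envelope check below is a sketch using only the publicly reported numbers; it simply illustrates why packet rate matters independently of raw bandwidth.

```python
# Back-of-envelope check using Microsoft's publicly reported peak figures.
PEAK_BPS = 15.72e12   # 15.72 Tbps, in bits per second
PEAK_PPS = 3.64e9     # ~3.64 billion packets per second

avg_packet_bytes = PEAK_BPS / 8 / PEAK_PPS
print(f"Average packet size at peak: {avg_packet_bytes:.0f} bytes")  # ~540 bytes

# The same bit rate carried in minimum-size (64-byte) packets would require
# far more packets per second, which is why pps, not just bps, is often the
# harder constraint for routers, firewalls and inspection engines.
pps_at_64_bytes = PEAK_BPS / 8 / 64
print(f"Equivalent rate at 64-byte packets: {pps_at_64_bytes / 1e9:.1f} billion pps")
```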
The botnet’s mechanics: Aisuru’s toolbox
Research from security vendors and specialized trackers has profiled Aisuru as a Turbo‑Mirai–derived botnet that infects consumer‑grade routers, IP cameras, and NVR/DVR devices, then orchestrates synchronized floods. XLab (Qi’anxin’s research arm) and other analysts documented a rapid expansion of Aisuru after operators compromised a firmware update distribution mechanism used by a low‑cost router vendor, which multiplied the botnet’s effective scale. These findings explain how Aisuru rapidly reached hundreds of thousands of nodes. Aisuru capabilities reported by researchers include:
- High‑throughput UDP/TCP flood generation.
- Use of GRE tunnelling and chained C2 relays in some operations to distribute load and complicate simple takedowns. XLab detailed GRE tunnels in prior Aisuru orchestrations.
- Some variants, and ancillary services offered by the botmasters, apparently support credential stuffing, residential proxying, AI‑driven scraping, and spam/phishing operations, meaning the infrastructure can be repurposed beyond pure DDoS‑for‑hire. Public reporting and vendor analyses note this commercialization of access.
On the provenance of infected devices and node counts (why numbers vary)
Estimates of how many devices Aisuru controls have ranged in press reporting from roughly 100,000 following the Totolink incident to ~300,000 or higher as XLab and other telemetry teams tracked subsequent infections. Microsoft’s mitigation telemetry for the October 24 event indicates more than 500,000 source IPs were used in that particular campaign; note that this counts source IPs observed in the attack, not a direct, auditable inventory of unique infected devices. Source pools can include dynamic consumer IPs, NAT aggregations, reused addresses, or transient spoofing artifacts, so node‑count estimates will naturally vary between researchers. Treat raw “bot count” figures with caution unless the methodology is disclosed.
Why this attack matters: implications for cloud, ISPs and enterprises
The new baseline for volumetric attacks is rising
The combination of faster residential broadband (fiber to the home), more powerful consumer routers and increasingly capable IoT devices means the internet’s available offensive bandwidth is increasing. Microsoft and other providers warn that baseline DDoS volumes have shifted upward — large, short‑duration surges that previously would have been once‑in‑a‑year anomalies are now regular events. The Cloudflare mitigations in 2025 (including 11.5 Tbps and later 22.2 Tbps incidents) show a sustained trend. That trend raises three structural concerns:
- Mitigation capacity arms race: Providers must continually expand scrubbing capacity, peering, and global distribution to absorb spikes without collateral damage.
- Network equipment limits: Routers, firewalls and middleboxes have finite pps‑handling limits; even if bps is absorbed, high pps can still blow out control and software planes (see the back‑of‑envelope sketch after this list).
- Collateral impact: Attack routing and scrubbing techniques that filter at scale can disrupt legitimate traffic unless mitigations are nuanced; cloud operators must balance blunt filtering with precision.
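To put that packet‑rate concern in perspective, the rough calculation below estimates the per‑packet processing budget available at the reported peak rate. The clock speed and core count are illustrative assumptions for the sketch, not measurements of any particular router or firewall.

```python
# Illustrative per-packet CPU budget at the reported peak packet rate.
# The hardware figures below are assumptions for this sketch, not vendor specs.
PEAK_PPS = 3.64e9     # ~3.64 billion packets per second (reported peak)
CLOCK_HZ = 3.0e9      # assume 3 GHz cores
CORES = 64            # assume a 64-core packet-processing appliance

cycles_per_packet = (CLOCK_HZ * CORES) / PEAK_PPS
print(f"Cycles available per packet, whole box: {cycles_per_packet:.0f}")  # ~53

# A stateful lookup or deep-inspection step typically costs far more than ~53
# cycles, which is why high-pps floods overwhelm software and control planes
# even when the raw bandwidth is absorbed elsewhere.
```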
Cloud providers’ role and liability
Large cloud vendors operate the scale and global edge footprint needed to defend critical infrastructures, but that centralisation concentrates responsibility and risk. Microsoft’s ability to mitigate the 15.72 Tbps event without customer downtime demonstrates capability — yet it also underscores that only a handful of providers currently possess the global capacity to absorb hyper‑volumetric assaults. This amplifies scrutiny from regulators and customers over SLAs, transparency, and operational practices. Recent platform incidents (both attack mitigation and misconfiguration outages) have drawn attention to how much modern business continuity depends on a small set of control planes.
ISPs and last‑mile networks are the weak link
Aisuru’s infection base is concentrated in residential ISPs and consumer devices. That places significant onus on service providers and CPE vendors to ship secure firmware, accelerate patching, and block abusive traffic at source. Microsoft’s blog noted that the attack’s properties (minimal spoofing, random source ports) helped make traceback and enforcement easier for providers — but blunting the botnet at its origin remains the most durable defence.
How Azure mitigated the attack (what worked)
Microsoft’s public statement outlines several defensive factors that prevented customer impact:
- Global, distributed scrubbing across Azure’s DDoS Protection fabric detected the multi‑vector campaign and initiated mitigation automatically. The scale and distribution of Azure’s edge allowed malicious traffic to be filtered and redirected before it reached customer workloads.
- Adaptive detection and continuous monitoring enabled the system to distinguish sudden UDP bursts from legitimate traffic patterns. Microsoft emphasised that real‑time automation was the key to neutralizing the volume without human intervention.
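Microsoft has not published the internals of that detection logic. Purely as a conceptual illustration of baseline‑driven anomaly detection, the toy sketch below flags per‑second packet counts that spike far above a learned baseline; the smoothing factor and threshold are assumptions, and real cloud DDoS systems are far more sophisticated (per‑destination profiling, multi‑vector correlation, automated policy rollout).

```python
# Toy sketch of baseline-driven volumetric anomaly detection (illustrative only).
ALPHA = 0.1        # EWMA smoothing factor (assumed)
THRESHOLD = 8.0    # flag samples above 8x the learned baseline (assumed)

def detect(samples_pps):
    """Yield (pps, is_anomaly) for a stream of per-second packet counts."""
    baseline = None
    for pps in samples_pps:
        if baseline is None:
            baseline = pps
        anomaly = pps > THRESHOLD * baseline
        if not anomaly:
            # Only fold "normal" samples into the baseline so a sustained
            # flood cannot teach the detector that the flood is normal.
            baseline = ALPHA * pps + (1 - ALPHA) * baseline
        yield pps, anomaly

# Example: steady ~1M pps background traffic, then a sudden UDP burst.
for pps, anomaly in detect([1e6, 1.1e6, 0.9e6, 1.0e6, 3.64e9, 3.6e9, 1.0e6]):
    print(f"{pps:>14,.0f} pps  anomaly={anomaly}")
```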
Notable strengths and limitations of Microsoft’s response (critical analysis)
Strengths
- Scale and automation: Azure’s demonstrated scrubbing capacity and automated mitigation prevented visible customer impact despite a rare, record‑level throughput. That is a substantive engineering achievement.
- Traceability: The attack’s limited spoofing and randomized source ports meant that network operators could perform effective traceback and enforcement, improving the chances of takedown and remediation at upstream ISPs. Microsoft emphasized this in its write‑up.
- Public transparency: Microsoft published a technical account promptly, aiding the community’s situational awareness and encouraging collaborative mitigation.
Limitations and open questions
- Root‑cause remediation: Mitigating an active attack protects availability, but the systemic problem — massive pools of vulnerable IoT/CPE devices — persists. The industry still lacks scalable, enforceable mechanisms to prevent botnet build‑up at firmware distribution points or to compel CPE vendors toward secure update channels. Independent researchers have linked Aisuru growth to compromises in firmware update flows for small router vendors; these structural weaknesses are not solved by cloud scrubbing alone.
- Dependence on a few scrubbing providers: The internet’s defensive posture relies on a handful of hyperscalers and scrubbing networks with sufficient capacity. If those providers themselves were simultaneously targeted or faced other outages, the second‑order effects could be severe. The ongoing concentration of mitigation capacity invites systemic risk.
- Attribution and takedown hurdles: While Microsoft identified Aisuru as the origin, precise attribution between botnets, malware variants and operator groups is complex and sometimes disputed between vendors. Some claims about specific numbers of compromised devices or named actors remain based on partial telemetry and should be treated cautiously.
Practical recommendations — what organizations should do now
The approach below is tactical and prioritized for enterprises, cloud customers, and ISPs preparing for elevated DDoS risk during peak traffic seasons.
- Confirm and exercise incident response: run a tabletop that simulates a volumetric UDP flood and verify that escalation paths and contact details for cloud vendors and upstream ISPs are current.
- Subscribe to cloud DDoS protection: enable Azure DDoS Protection Standard (or equivalent) for internet‑facing workloads and ensure your protection profile is tuned for the services’ traffic patterns.
- Implement multi‑path ingress: architect critical services across multiple network providers and CDNs to avoid single‑point ingress failures.
- Harden origins and rate‑limit: where possible, apply rate limits (sharing per‑flow state across paired origins where the architecture allows) and use WAF rules to reject obvious flood patterns; a minimal per‑source rate‑limiting sketch follows this list.
- Inventory and patch CPE: for managed networks, mandate secure update practices for routers, cameras and NVRs; for consumer users, run awareness and remediation campaigns to replace or update vulnerable equipment.
- Test failover and telemetry: ensure your monitoring can handle extremely high packet rates and that alert thresholds are aligned with automated mitigation actions.
- Pre‑negotiate emergency peering and scrubbing: enterprise security and network teams should have pre‑arranged contacts with scrubbing services and ISPs to activate upstream filters rapidly.
- Legal and compliance preparation: gather evidence‑collection procedures in advance; traceable logs (NetFlow, packet captures, IDS/IPS logs) are essential if pursuing law‑enforcement action or ISP enforcement.
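The rate‑limiting sketch referenced above is shown here: a per‑source token bucket in Python. It is a conceptual example only; the rate, burst size and deployment point are assumptions, and userspace code like this cannot absorb multi‑terabit floods, which must be filtered at the provider edge or in kernel and hardware fast paths.

```python
import time
from collections import defaultdict

# Per-source-IP token bucket (illustrative parameters, not tuned values).
RATE = 1000.0    # tokens (packets) refilled per second, per source
BURST = 2000.0   # maximum bucket size, per source

class TokenBuckets:
    def __init__(self, rate=RATE, burst=BURST):
        self.rate, self.burst = rate, burst
        # ip -> (tokens_remaining, last_refill_timestamp)
        self.state = defaultdict(lambda: (burst, time.monotonic()))

    def allow(self, src_ip: str) -> bool:
        tokens, last = self.state[src_ip]
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.state[src_ip] = (tokens - 1.0, now)
            return True
        self.state[src_ip] = (tokens, now)
        return False

# Usage: drop, log, or steer to scrubbing anything over its per-source budget.
buckets = TokenBuckets()
if not buckets.allow("203.0.113.10"):
    pass  # over budget: drop or deprioritize this packet
```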
Sectoral responsibilities: vendors, ISPs and regulators
- CPE vendors must adopt secure firmware distribution patterns (signed updates, content‑integrity checks, reproducible update servers) and provide timely security patches; a minimal signature‑verification sketch follows this list. The Aisuru history shows how a single compromised update mechanism can multiply infection rates rapidly.
- ISPs and access providers need improved telemetry and automated sinkholing / blackholing capabilities to cut off botnet traffic close to the source without harming legitimate customers. Collaborative filtering with upstream scrubbing centres should be standard.
- Regulators and standards bodies should incentivize or mandate minimum security baselines for consumer routers and IoT vendors—particularly in jurisdictions where low‑cost hardware is sold en masse without firmware security assurances.
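As a concrete illustration of the signed‑update requirement above, the sketch below verifies a detached Ed25519 signature over a firmware image before it is flashed, using the widely available cryptography package. The file names, key distribution and surrounding update flow are assumptions for illustration, not any specific vendor’s mechanism.

```python
# Illustrative firmware-signature check (Ed25519 via the 'cryptography' package).
# A real CPE update pipeline also needs rollback protection, key rotation and
# an authenticated download channel; this sketch covers only the verify step.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def firmware_is_authentic(image_path: str, sig_path: str, pubkey_bytes: bytes) -> bool:
    """Return True only if the detached signature matches the firmware image."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)  # 32-byte raw key
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False

# The updater should refuse to flash anything that fails this check.
```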
What remains uncertain — cautionary flags
- Discrepancies in node counts: different research teams report different botnet sizes (100k, 300k, 500k+). Those differences come from differing telemetry sources and counting methodologies. Microsoft’s >500k figure refers specifically to source IPs observed in the October 24 event; it is not an automatic confirmation of the global botnet population at a given moment. Treat raw “bot count” numbers as estimates unless the methodology is published.
- Operator attribution and intent: vendor reports suggest Aisuru operators monetize access for DDoS‑for‑hire and other abuse, but actor identification beyond aliasing remains fraught; naming alleged individuals or mapping intent precisely often exceeds what public telemetry can prove. Use caution before treating operator names or motivations as settled fact.
- Long‑term mitigation impact: mitigation today prevents immediate outages, but persistent botnet growth means similar or larger events are likely to recur unless structural fixes (CPE security, firmware signing, coordinated ISP action) are implemented at scale. The trendline is clear; the timetable for structural remediation is not.
The strategic outlook — predicting the near future
A realistic view is that DDoS attacks will remain an arms race:
- Attackers will continue to exploit insecure CPE and IoT ecosystems to increase available offensive bandwidth.
- Mitigation providers and hyperscalers will invest to expand global scrubbing capacity, but the economic and operational costs will grow with attack sizes.
- Short, hyper‑volumetric “pulse” surges that last only seconds but spike to terabits per second and billions of packets per second will be favored by adversaries, because they can probe defenses and inflict brief but damaging congestion. Recent record events support this pattern.
Conclusion
The October 24, 2025 assault on Azure was one of the largest cloud‑observed DDoS events to date at 15.72 Tbps and ~3.64 billion pps, and Microsoft’s automated defenses neutralized it without customer impact. That success is both reassuring and sobering: reassuring because modern cloud DDoS platforms can and did absorb extraordinary volumes; sobering because the incident confirms that the baseline for DDoS power is rising thanks to proliferating vulnerable IoT and CPE devices and faster residential connectivity. Companies should use this incident as a prompt to validate mitigation plans, enable cloud DDoS protections, test operational readiness before seasonal traffic surges, and pressure vendors and ISPs to fix the upstream weaknesses that let botnets like Aisuru grow. Without those structural fixes, mitigating each new giant attack will remain an expensive and reactive cycle, one that will continue until the industry hardens the internet’s weakest links.

Source: TechWorm Microsoft Confirms 15.7 Tbps DDoS Attack On Azure