Microsoft says Azure's DDoS protection automatically detected and absorbed an unprecedented cloud-scale flood on October 24 that peaked at 15.72 terabits per second (Tbps) and nearly 3.64 billion packets per second (pps) — an event the company describes as the largest DDoS attack ever observed in the cloud and one that originated from the Aisuru IoT botnet.
Background
Aisuru is a Mirai-derived, Turbo‑Mirai–class Internet‑of‑Things (IoT) botnet that first surfaced in mid‑2024 and has since escalated into a recurring generator of hyper‑volumetric attacks. Over 2025 the botnet’s operators repeatedly demonstrated growing capacity, with multiple firms and independent investigators reporting attacks that range into double‑digit terabits per second and multiple billions of packets per second. The October 24 incident targeted a single public IP hosted in Australia and — according to Microsoft — used more than 500,000 unique source IP addresses to deliver high‑rate UDP floods with randomized source ports and minimal source spoofing. Azure’s automated mitigation systems intercepted and filtered the traffic without any reported interruption to protected customer workloads.
This episode arrived amid a broader surge in hyper‑volumetric DDoS activity: industry telemetry shows a sustained rise in extremely large L3/L4 floods and an expanding ecosystem of DDoS‑for‑hire offerings that monetize botnet capacity. Cloud operators, DDoS defenders, and network providers are confronting an arms race where attacker capacity grows as access speeds and device power increase.
What happened: anatomy of the Azure event
Attack profile (what Microsoft reported)
- Peak throughput: 15.72 Tbps.
- Peak packet rate: ~3.64 billion pps.
- Date and target: a single public IP address in Australia on October 24.
- Source population: >500,000 unique source IPs from multiple regions.
- Primary vector: UDP floods (high‑speed, randomized ephemeral source ports).
- Spoofing: Microsoft observed little source spoofing, which aligns with IoT‑based botnets that send traffic from real infected devices rather than spoofed addresses.
- Mitigation: Azure DDoS Protection automatically detected and mitigated the traffic at the edge of Azure’s network; Microsoft reports no customer workloads experienced service interruptions.
Why the numbers matter
Packet rate (pps) and bandwidth (bps) tell different stories. High bps attacks (Tbps) can saturate links; high pps attacks (billions of pps) stress packet processing (line cards and control planes). An attacker that balances both dimensions — high packet rates and high throughput — can overwhelm both transit links and router/switch capacity. The October 24 flood combined enormous aggregate bandwidth with astronomical packet rates, forcing mitigation to address link exhaustion and device processing limits simultaneously.
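As a sanity check, the two reported peaks can be combined to infer an average packet size. The short calculation below uses only the publicly reported figures; the derived per‑packet and per‑source numbers are rough inferences, not values Microsoft published.

```python
# Back-of-envelope: relate the reported throughput (bps) and packet rate (pps)
# to an implied average packet size. The peaks are the publicly reported
# figures; the derived numbers are inferences, not Microsoft-published values.

peak_bps = 15.72e12          # 15.72 Tbps
peak_pps = 3.64e9            # ~3.64 billion packets per second

implied_bytes_per_packet = peak_bps / peak_pps / 8
print(f"Implied average packet size: {implied_bytes_per_packet:.0f} bytes")
# -> roughly 540 bytes, consistent with the medium-sized UDP payloads
#    commonly reported for Aisuru-style floods.

# Rough per-source contribution, assuming the >500,000 reported sources
# contributed evenly (a simplification; real botnets are highly uneven).
sources = 500_000
print(f"Average per-source rate: {peak_bps / sources / 1e6:.1f} Mbps, "
      f"{peak_pps / sources:,.0f} pps")
```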
Attack techniques and signatures
Aisuru‑style floods tend to (a toy classification sketch follows this list):
- Use direct‑path UDP and TCP floods, often with medium‑sized packets (commonly observed in the 540–750 byte range) to optimize both bps and pps.
- Randomize source ports and destination ports to complicate simple port‑based filtering.
- Avoid spoofing when leveraging compromised consumer CPE (customer premises equipment), which actually helps defenders trace traffic to originating networks — but does nothing to reduce immediate volume.
- In some variants, provide residential proxy capabilities that relay HTTPS traffic through hijacked endpoints.
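To make those traits concrete, the toy sketch below classifies NetFlow‑style records that match the pattern of a large, non‑spoofed UDP flood. The field names and thresholds are assumptions chosen for illustration; this is not any vendor's actual detection logic.

```python
# Toy illustration: flag destinations whose aggregated UDP flow records match
# the Aisuru-style traits described above (huge source population, scattered
# ephemeral source ports, medium packet sizes, very high packet rates).
# Field names and thresholds are assumptions for the sketch only.
from collections import defaultdict

def summarize(flows):
    """flows: iterable of dicts with keys proto, src_ip, dst_ip, src_port, bytes, packets."""
    per_dst = defaultdict(lambda: {"srcs": set(), "sports": set(),
                                   "bytes": 0, "packets": 0})
    for f in flows:
        if f["proto"] != "udp":
            continue
        d = per_dst[f["dst_ip"]]
        d["srcs"].add(f["src_ip"])
        d["sports"].add(f["src_port"])
        d["bytes"] += f["bytes"]
        d["packets"] += f["packets"]
    return per_dst

def looks_like_volumetric_udp(stats, window_s=60):
    """Heuristic, assumed thresholds: many sources, scattered source ports,
    medium average packet size, and a packet rate far above normal."""
    if stats["packets"] == 0:
        return False
    avg_pkt = stats["bytes"] / stats["packets"]
    pps = stats["packets"] / window_s
    return (len(stats["srcs"]) > 10_000
            and len(stats["sports"]) > 5_000
            and 400 <= avg_pkt <= 900
            and pps > 1e6)
```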
Who corroborated the facts — and what to watch for
Microsoft published the operational account and specific metrics for the October 24 event; independent DDoS researchers and network security vendors have documented Aisuru‑related attacks throughout 2025 that show a clear trend of escalating scale and sophistication. Observers reported earlier spikes — including multi‑Tbps floods against high‑profile targets in mid‑2025 — and security vendors have logged separate incidents and measurement spikes that exceed 20 Tbps and multiple billions of pps in short bursts. Those prior reports provide independent context that Aisuru has continually evolved, growing both its infected device population and its attack techniques.
Caveat: very large DDoS measurement figures can vary by methodology and telemetry point. Short, high‑intensity test bursts targeted at measurement collections can inflate peak numbers briefly; different observers will report different aggregates depending on vantage point, peering, and whether they measure ingress at the target or egress at intermediate networks. Where possible, multiple independent measurements should be combined to establish a reliable picture.
Aisuru in context: the botnet that keeps breaking records
Origins and capabilities
Aisuru began attracting attention in 2024. Built from compromised home routers, DVRs, IP cameras and other poorly secured IoT devices, the botnet draws power from the sheer scale of consumer CPE still deployed with default credentials, unpatched firmware, or exposed management interfaces. Over 2025, researchers observed:
- Rapid population growth measured in the hundreds of thousands of devices.
- Use of new exploits and infection vectors to harvest fresh nodes.
- Diversification of services: beyond DDoS it has been repurposed in places as a residential proxy network and potentially for other criminal services.
- Self‑imposed operational policies: the operators have reportedly claimed to avoid governmental and military targets — but such restrictions are not guarantees and should be viewed skeptically.
Notable precedents in 2025
- May 2025: a multi‑terabit attack reached roughly 6.3 Tbps against a high‑profile site, notable for very high pps.
- Late 2025: public reporting and vendor telemetry showed attacks exceeding 20 Tbps in multiple spikes, including very short demonstration blasts that tested mitigation limits.
How Azure mitigated the flood, at scale
Microsoft attributes the successful outcome to the Azure DDoS Protection infrastructure: an automated, globally distributed mitigation fabric that detects anomalies, absorbs large‑scale traffic, and applies scrubbing and filtering rules in real time, keeping legitimate traffic flowing to protected workloads.
Key defensive measures employed in cloud mitigation pipelines typically include:
- Global anycast front doors and massive backbone capacity to absorb volumetric traffic.
- Real‑time traffic classification and automated scrubbing to drop clearly malicious flows.
- Rate‑based thresholds and adaptive filtering per endpoint to defend against pps attacks (a conceptual sketch of this idea follows the list).
- Redirection of attack traffic into scrubbing centers that perform deep packet inspection at scale.
- Coordination with transit providers and upstream peers to manage or push mitigation closer to the source when feasible.
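The rate‑based thresholds mentioned in the list are easiest to picture as a per‑endpoint token bucket. The snippet below is a conceptual model of that idea only, assuming a single protected endpoint; it is not a description of Azure's actual mitigation pipeline.

```python
# Conceptual model of a per-endpoint, rate-based packet limiter (a token
# bucket). Illustrates the idea behind adaptive pps thresholds; it is not
# how any specific cloud scrubbing platform is implemented.
import time

class TokenBucket:
    def __init__(self, rate_pps, burst):
        self.rate = rate_pps       # sustained packets/second allowed
        self.capacity = burst      # short burst allowance
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, packets=1):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packets:
            self.tokens -= packets
            return True            # forward the packet(s)
        return False               # drop, or divert to deeper scrubbing

# Example: protect one endpoint at 100k pps sustained with a 500k-packet burst.
bucket = TokenBucket(rate_pps=100_000, burst=500_000)
```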
The broader implications for cloud security
1) Cloud scale does not make cloud services invulnerable — it simply raises the bar
Hyperscalers maintain vast networks and mitigation capacity, but the same scale attracts proportional attacker focus. As defenders scale up, attackers scale with the internet — faster home links, beefier CPE, and botnet growth push the baseline higher. These dynamics create an arms race where mitigation economics and provider coordination become strategic imperatives.
2) Packet‑rate attacks threaten router and switch hardware, not just link capacity
Even when aggregate bandwidth can be absorbed, very high pps can overwhelm forwarding planes and control planes of core routers and peering devices. Defenders must harden both bandwidth and packet‑processing capacity, which means investing in high‑performance scrubbing, line‑card protection, and intelligent upstream policing.
3) Consumer IoT and ISP networks are the persistent weak link
Large botnets like Aisuru prosper because tens of millions of consumer devices remain poorly secured. When botnet nodes sit on subscriber links, the resulting outbound traffic can degrade the ISP’s own customers and infrastructure. That creates pressure on ISPs to implement better egress filtering, automated detection, and device remediation programs.
4) Measurement and attribution remain messy
Huge peak numbers make headlines, but technical and legal responses require accurate measurement and solid evidence. Short burst tests, measurement vantage differences, and incomplete telemetry complicate both operational response and law enforcement investigations.
Responsibilities and actions: who needs to do what
Cloud providers
- Continue investing in automated detection, global scrubbing, and mitigation automation.
- Share anonymized telemetry and technical indicators to help the wider ecosystem (ISPs, CERTs, vendors) respond quickly.
- Offer customers turn‑key protections (managed DDoS protection tiers, WAF, CDN fronting) and make them easier to enable.
ISPs and access providers
- Enforce egress filtering (BCP38/BCP84 where applicable) and rate‑limiting at access points (a minimal source‑validation sketch follows this list).
- Monitor subscriber CPE for signs of compromise and build automated quarantine/remediation workflows.
- Collaborate with cloud providers and downstream services to apply upstream suppression when subscribers create outbound floods.
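For readers unfamiliar with BCP38, the sketch below shows the core idea of source‑address validation at the access edge: forward outbound traffic only when its source address belongs to a prefix assigned to that subscriber. The prefix table and port names are invented for the example; real deployments implement this with uRPF or hardware ACLs rather than software.

```python
# Minimal sketch of BCP38-style source address validation at an access edge:
# only forward outbound packets whose source address belongs to a prefix
# assigned to that subscriber port. The prefix assignments below are made up
# for illustration; production networks do this in hardware (uRPF/ACLs).
import ipaddress

ASSIGNED_PREFIXES = {
    "port-17": [ipaddress.ip_network("203.0.113.0/29")],
    "port-42": [ipaddress.ip_network("198.51.100.64/30")],
}

def egress_allowed(subscriber_port, src_ip):
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in ASSIGNED_PREFIXES.get(subscriber_port, []))

print(egress_allowed("port-17", "203.0.113.5"))   # True  - legitimate source
print(egress_allowed("port-17", "192.0.2.9"))     # False - spoofed, drop it
```

Note that because Aisuru traffic is largely unspoofed, source validation alone will not stop it; the value of BCP38/BCP84 is in suppressing the spoofed-traffic and reflection vectors that other botnets rely on.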
IoT device manufacturers
- Ship devices with secure defaults (no default credentials, remote management disabled unless the owner explicitly opts in).
- Provide timely firmware update mechanisms and sign firmware images.
- Improve device lifecycle management and transparency about supported update windows.
Enterprises and cloud tenants (practical checklist)
- Enable provider‑offered DDoS protection (for Azure customers that means Azure DDoS Protection Standard or equivalent) on internet‑facing endpoints.
- Use fronting services (CDN, WAF, Application Gateway) to reduce direct public exposure of origin IPs.
- Implement network micro‑segmentation and strong access controls for management interfaces.
- Monitor egress and ingress telemetry; set alerts for unusual pps/bps patterns and for spikes in DNS or ancillary traffic (a simple alerting sketch follows this checklist).
- Test mitigation plans with realistic tabletop exercises and scaled synthetic attacks where feasible.
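One lightweight way to alert on unusual pps or bps patterns is to compare each sample against an exponentially weighted moving average of recent traffic. The sketch below illustrates that approach with synthetic numbers; the thresholds and the telemetry source are assumptions to be tuned per environment.

```python
# Simple anomaly alert on a pps time series using an exponentially weighted
# moving average (EWMA). Thresholds and the telemetry feed are assumptions;
# in practice the samples would come from provider metrics or flow exporters.

def ewma_alerts(samples, alpha=0.2, factor=5.0, floor=10_000):
    """Yield (index, value, baseline) for samples far above the running baseline."""
    baseline = None
    for i, value in enumerate(samples):
        if baseline is None:
            baseline = value
            continue
        if value > max(floor, factor * baseline):
            yield i, value, baseline
        baseline = alpha * value + (1 - alpha) * baseline

pps_series = [8_000, 9_500, 8_700, 9_100, 2_500_000, 3_100_000]  # synthetic data
for i, value, baseline in ewma_alerts(pps_series):
    print(f"sample {i}: {value:,} pps vs baseline ~{baseline:,.0f} pps")
```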
Rapid response playbook for admins (immediate actions)
- Confirm that DDoS protection services are enabled and in the correct tier for business‑critical endpoints.
- If not using a provider scrubbing service, prepare emergency traffic rerouting plans and contacts for transit providers.
- Harden DNS and public exposure: use cloud‑managed DNS with rate‑limiting features and consider hiding origin IPs behind CDN/fronting layers.
- Audit IoT and remote management devices on corporate networks; remove or isolate any consumer‑grade CPE used for lab/test purposes.
- Conduct a tabletop scenario to validate escalation, communications, and telemetry handoffs in a DDoS event.
The collateral problem: ranking manipulation and DNS abuse
Aisuru operators demonstrated creative misuse beyond pure disruption: the botnet’s control domains and query patterns distorted domain‑popularity rankings derived from public DNS resolver analytics. Flooding or querying a public resolver in huge volumes can skew popularity lists and create brand confusion, prompting vendors to redact entries or change their ranking algorithms.
This is a reminder that botnet abuse can be multi‑faceted: attackers can weaponize telemetry aggregation and marketplace features (DNS rankings, domain popularity lists) to sow confusion, gain leverage, or camouflage malicious infrastructure.
Legal, investigative and policy angles
- DDoS‑for‑hire marketplaces complicate attribution and enforcement. Operators move quickly, use transient infrastructure, and sometimes run “rules” to avoid sensitive targets.
- Cross‑border law enforcement coordination is essential but slow relative to attack lifecycles.
- Policy levers exist: better regulatory incentives for secure IoT, minimum security standards for devices offered to consumers, and stronger obligations on ISPs for egress filtering and device remediation.
- Public‑private collaboration (cloud providers, ISPs, CERTs, vendors) is the fastest way to identify and remediate large botnet outbreaks.
Why this will keep happening — and what would meaningfully slow it down
Attackers are scaling with the internet itself. Two structural trends guarantee more such incidents unless addressed:
- Faster last‑mile access (fiber and high‑speed broadband) increases the per‑node capacity available to botnets.
- The enormous baseline of insecure IoT devices provides a renewable supply of bots.
Meaningfully slowing the trend requires progress on several fronts at once:
- Technical measures (egress filtering, network telemetry, scrubbing capacity).
- Market changes (secure‑by‑default IoT, firmware delivery obligations).
- Policy incentives (certification, liability frameworks, consumer awareness).
- Operational cooperation (rapid takedown, indicator sharing, and upstream suppression).
Notable strengths and risks from the October 24 event
Strengths demonstrated
- Automated detection and response at cloud scale: The mitigation executed without manual intervention, preventing visible customer impact.
- Global scrubbing networks and investment: Hyperscaler capacity and anycast fronting remain powerful defenses against volumetric floods.
- Traceability where devices do not spoof: Because the attack used real infected devices (not spoofed IPs), network operators can more readily identify contributing networks and remediate.
Risks exposed
- The escalating baseline: As fiber and faster consumer links proliferate, attackers gain more per‑device throughput.
- Collateral damage to ISPs and other networks: Outbound floods from subscriber devices can degrade network infrastructure for the provider’s legitimate customers.
- Evolving attacker business models: Botnets becoming residential proxies or multi‑service platforms means attackers can monetize compromise beyond just DDoS, reinforcing persistence and growth.
- Measurement ambiguity: Different measurement methodologies can produce wildly different peak numbers, which complicates operational response and public understanding.
Practical recommendations for WindowsForum readers and IT teams
- Enable and test cloud provider DDoS protection options now — do not wait for an incident.
- Use fronting (CDN, application delivery) and avoid exposing raw origin IPs where possible.
- Harden DNS — use resolvers that support rate limiting, and monitor for unusual query patterns (a toy log‑scanning sketch follows this list).
- Patch and segregate IoT/CPE devices: keep management networks separate from production networks and remove any consumer‑grade gear from enterprise environments.
- For small orgs or hobbyists: change default credentials, disable remote management, and keep firmware updated on routers, cameras, and DVRs.
- For admin teams: exercise incident response plans that include communications with cloud providers and upstream ISPs; maintain escalation contacts and understand how to request upstream blocking or scrubbing during an event.
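As a starting point for spotting unusual query patterns, the toy scan below counts queries and unique query names per client from resolver logs. The log format and thresholds are assumptions; adapt them to whatever your resolver actually records.

```python
# Toy scan of resolver query logs to surface clients with unusual behaviour
# (very high query counts or an unusually large set of unique names, which
# can indicate an infected device). Log layout and thresholds are assumptions.
from collections import defaultdict

def noisy_clients(log_lines, max_queries=10_000, max_unique_names=2_000):
    stats = defaultdict(lambda: {"count": 0, "names": set()})
    for line in log_lines:
        # assumed line format: "<timestamp> <client_ip> <qname> <qtype>"
        parts = line.split()
        if len(parts) < 3:
            continue
        client, qname = parts[1], parts[2]
        stats[client]["count"] += 1
        stats[client]["names"].add(qname)
    return [c for c, s in stats.items()
            if s["count"] > max_queries or len(s["names"]) > max_unique_names]
```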
Conclusion
The October 24 Azure event is a watershed moment in cloud‑era DDoS: a single, automated mitigation operation absorbed a flood measured in tens of terabits and billions of packets per second, protecting customer workloads from visible impact. That is a technical triumph for cloud engineering and automated defense. But it is also a stark signal: attacker capacity continues to climb alongside improvements in consumer broadband and the persistent insecurity of IoT devices.
Defenders — cloud providers, ISPs, enterprises, manufacturers and policymakers — must treat the Aisuru phenomenon as a systems problem, not only a network problem. That requires coordination, secure default design, upstream filtering, robust automation, and ongoing readiness testing. Until those structural gaps are closed, the game will remain one of scaling defenses to meet scaling attacks — and the record books will likely be rewritten again.
Source: theregister.com 'Largest-ever' cloud DDoS attack pummels Azure