EU Considers DMA Rules for AWS, Azure and Google Cloud After Major Outages

The European Commission is preparing a formal look into whether Amazon Web Services, Microsoft Azure and Google Cloud should be brought within the scope of the Digital Markets Act after a run of high‑impact outages exposed both the systemic importance of hyperscale clouds and the practical limits of voluntary market discipline.

(Image: AWS and Azure clouds connected for multi-cloud interoperability against vendor lock-in.)

Background / Overview

In October 2025 two separate, large control‑plane incidents at the biggest public cloud providers produced cascading outages that touched consumer apps, enterprise services and public‑sector systems around the world. On October 20 an AWS failure in the US‑EAST‑1 region—centred on DNS and DynamoDB endpoint resolution—knocked down hundreds of services and produced a long recovery tail. On October 29 a configuration error in Microsoft’s Azure Front Door edge fabric disrupted Microsoft‑hosted services and many third‑party sites, forcing the Scottish Parliament to suspend voting and breaking parts of some airlines’ online check‑in flows. Earlier in the year, a June 2025 Google Cloud incident took down major platforms including Spotify and Discord.

Why this matters: these outages made plain that the modern internet relies on a relatively small set of cloud primitives—DNS, global ingress fabrics, identity and managed databases—provided by a handful of hyperscalers. When those primitives fail, dozens or hundreds of downstream services can be impacted almost immediately. Independent market trackers place the top three providers at roughly two‑thirds of global public cloud spending, a concentration that regulators and policymakers have watched closely for several years.

What the Commission is said to be investigating

The reported scope

According to industry reporting relayed in the European tech press, EU officials are preparing an inquiry into whether the cloud units of Amazon, Microsoft and Google meet the criteria for obligations under the Digital Markets Act (DMA) or otherwise require tailored remedies. The focus is expected to examine whether these providers:
  • exert gatekeeper‑style market power in core cloud services,
  • use bundling or preferential treatment to favour first‑party products,
  • impose switching friction through high egress costs and proprietary interfaces, and
  • restrict technical interoperability and data portability for customers.
Those are precisely the sorts of conduct the DMA was designed to curb where a service meets the gatekeeper thresholds, and regulators are now weighing whether cloud computing—already central to AI and digital services—should face the Act’s constraints. Independent UK work and public submissions have argued for applying DMA‑style obligations to cloud and AI as part of wider EU policy discussions.

Verifiability note

Several outlets report that the Commission’s thinking was first signalled by Bloomberg sources. The original Bloomberg piece could not be located in public wire archives at the time of writing; the claim is plausible and has been repeated in secondary reporting, but it has not been independently verified. Where reporting relies on anonymous sources or single briefings, that uncertainty should be flagged: regulatory examinations can move quickly, but formal Commission statements or public case openings are the authoritative records. Treat the Bloomberg‑attributed line as reported but not yet confirmed by a Commission notice.

The outage chronology and real‑world impacts

AWS — October 20 (US‑EAST‑1): DNS + DynamoDB

The AWS incident began in the early hours of October 20, when DNS resolution for the DynamoDB API endpoints in US‑EAST‑1 failed, producing widespread errors across managed services that rely on that control‑plane primitive. The immediate effect was that orchestration, launch and authentication workflows experienced high error rates; as automated recovery and health checks ran, inconsistencies in internal state prolonged the outage and created a long tail of degraded service for many customers. High‑profile consumer platforms and brands reported prolonged outages or degraded functionality—reports indicated some services experienced interruptions lasting many hours—and public trackers and vendor status pages showed large spikes in incident reports.
Key takeaways from AWS’s incident (a client‑side mitigation sketch follows the list):
  • The proximate trigger was a DNS/control‑plane failure; the cascade came from tight coupling between control primitives and downstream workloads.
  • Even after core DNS answers returned, backlog processing and state reconciliation produced residual customer impact that stretched far beyond the initial mitigation window.
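That second takeaway is why many post‑incident write‑ups recommend client‑side defences against control‑plane dependencies such as DNS. As a purely illustrative sketch (not an AWS recommendation; the endpoint name, cache policy and staleness window are assumptions), the “serve‑stale” pattern below reuses the last known‑good DNS answer when live resolution fails, so the dependency degrades rather than hard‑failing:

```python
# Minimal "serve-stale" DNS fallback sketch: remember the last good answer and
# reuse it when live resolution fails, so a DNS/control-plane outage degrades
# the service instead of breaking it outright. Names and TTL are illustrative.
import socket
import time

_cache = {}          # hostname -> (ip_address, resolved_at)
STALE_TTL = 6 * 3600  # accept answers up to six hours old


def resolve_with_stale_fallback(hostname: str, port: int = 443) -> str:
    """Resolve hostname, falling back to the last known-good address on failure."""
    try:
        infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
        ip = infos[0][4][0]
        _cache[hostname] = (ip, time.time())
        return ip
    except socket.gaierror:
        cached = _cache.get(hostname)
        if cached and time.time() - cached[1] < STALE_TTL:
            return cached[0]   # stale but usable answer
        raise                  # no fallback available; surface the failure


if __name__ == "__main__":
    # Hypothetical endpoint used only to demonstrate the call pattern.
    print(resolve_with_stale_fallback("dynamodb.us-east-1.amazonaws.com"))
```

A pattern like this does not fix an outage, but it can keep already‑established workloads talking to endpoints whose addresses have not actually changed while the provider’s DNS recovers.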

Azure — October 29: Azure Front Door configuration error

Less than ten days later a global configuration change to Azure Front Door—Microsoft’s global edge and application‑delivery fabric—introduced an invalid state that prevented many edge nodes from loading correctly. Because AFD fronts identity issuance, TLS termination and routing for many services, the misconfiguration manifested as authentication failures, blank admin consoles and 502/504 errors for numerous tenants. The Scottish Parliament suspended voting after the chamber’s electronic voting system failed; Alaska Airlines and other carriers reported customer‑facing check‑in and website problems that forced manual fallbacks at airports. Microsoft halted the rollout, rolled back the change and manually recovered affected nodes while steering traffic to healthy capacity. Recovery took several hours, with intermittent residual effects as caches and global routing converged.
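Microsoft’s internal tooling is not public, but the generic defence against this failure mode is well understood: validate configuration before it ships, roll it out to a small canary slice first, and halt automatically when health checks regress. A minimal, vendor‑neutral sketch (the Node class, health check and config schema are hypothetical stand‑ins, not Azure APIs):

```python
# Vendor-neutral sketch of validate-then-canary configuration rollout: reject
# malformed changes up front, push to a small slice of nodes first, and halt
# (and roll back) if health checks regress. All names are illustrative.
import json


class Node:
    """Stand-in for an edge node; a real system would call its management API."""
    def __init__(self, name: str):
        self.name, self.config = name, None

    def apply(self, config: dict) -> None:
        self.config = config

    def rollback(self) -> None:
        self.config = None


def validate_config(raw: str) -> dict:
    """Fail fast on malformed configuration before any node sees it."""
    config = json.loads(raw)                     # raises on invalid JSON
    if not config.get("routes"):
        raise ValueError("config must define at least one route")
    return config


def rollout(config: dict, nodes: list, health_check, canary_fraction: float = 0.05) -> None:
    """Apply config to a canary slice first, halting the rollout on failed checks."""
    canary_count = max(1, int(len(nodes) * canary_fraction))
    canary, remainder = nodes[:canary_count], nodes[canary_count:]

    for node in canary:
        node.apply(config)
    if not all(health_check(node) for node in canary):
        for node in canary:
            node.rollback()                      # blast radius stays at the canary
        raise RuntimeError("canary health checks failed; rollout halted")

    for node in remainder:                       # only reached if the canary stayed healthy
        node.apply(config)


if __name__ == "__main__":
    fleet = [Node(f"edge-{i}") for i in range(40)]
    cfg = validate_config('{"routes": ["/*"]}')
    rollout(cfg, fleet, health_check=lambda n: n.config is not None)
    print("configured nodes:", sum(n.config is not None for n in fleet))
```

The point of the sketch is the ordering, not the specifics: an invalid state caught at validation or at the canary stage never reaches the global fleet.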

Google Cloud — June 12: quota/Service Control automation

A June 2025 Google Cloud incident originated from an invalid automated quota update in Google’s Service Control system, which caused external API requests to be rejected globally with 503 errors. The outage affected Google Workspace components and major third‑party workloads such as Spotify and Discord, with Downdetector spikes and widely visible user impact. Google restored service and subsequently published an incident report and apology. The technical pattern is the same as in October: automated controls, globally scoped systems and a single invalid change produced far‑reaching effects.
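For clients on the receiving end of such an event, the standard guidance is to retry with capped exponential backoff and jitter rather than hammering an already‑degraded API. A minimal, generic sketch (the request callable and the 503 exception type are placeholders, not Google client APIs):

```python
# Generic retry-with-backoff sketch: retry a failing call with capped
# exponential delays plus "full jitter", so a fleet of clients spreads its
# retries out instead of piling onto an already-degraded endpoint.
import random
import time


class ServiceUnavailable(Exception):
    """Placeholder for an HTTP 503 response from an upstream API."""


def call_with_backoff(request, max_attempts: int = 6,
                      base_delay: float = 0.5, max_delay: float = 30.0):
    """Invoke request(), retrying on ServiceUnavailable with jittered backoff."""
    for attempt in range(max_attempts):
        try:
            return request()
        except ServiceUnavailable:
            if attempt == max_attempts - 1:
                raise                                # out of attempts; surface the error
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))     # full jitter spreads retries out
```

Most hyperscaler SDKs implement something similar by default; a sketch like this matters mainly for hand‑rolled integrations and internal services that call provider APIs directly.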

Why regulators are circling: market structure, sovereignty and criticality

Three converging policy concerns explain regulatory attention to the cloud troika.
  • Market concentration and lock‑in. Industry trackers and competition authorities repeatedly note that AWS, Microsoft Azure and Google Cloud capture the lion’s share of public cloud spending. Synergy and other analysts place their combined share in the range of roughly 60–70% of global IaaS/PaaS spend—numbers that make switching expensive and politically salient. Regulators worry that proprietary primitives, egress charges and complex licensing create durable switching costs.
  • Operational criticality. Cloud platforms now host banking back‑ends, airline operations, health systems and parliamentary voting systems. When a control‑plane primitive fails, the consequences reach beyond consumer inconvenience into public‑sector functioning and economic loss. That real‑world criticality invites treatment more like telecoms or utilities in the eyes of some policymakers.
  • Digital sovereignty concerns. European policymakers have long worried about dependence on non‑EU providers and the legal framework around extraterritorial data access (for example, the U.S. CLOUD Act). Recent outages have amplified those anxieties and fed calls for either stronger regulatory constraints or the development of a European sovereign cloud stack. The Commission’s AI and Cloud agenda and member‑state consultations reflect that shift.

What the DMA can — and cannot — do for cloud

The DMA creates a toolbox of conduct obligations for so‑called “gatekeepers” (large platforms that meet thresholds for turnover, user reach and business user counts). If cloud services are designated as gatekeeper core platform services or if specific cloud firms are designated, possible Commission actions include:
  • forcing greater interoperability or standard APIs for critical services,
  • requiring data portability guarantees with audited tools,
  • banning certain tying/bundling practices that favour first‑party services,
  • mandating non‑discriminatory access to aggregated or non‑aggregated data generated in the course of cloud use.
These are potent remedies for reducing lock‑in, but they have limits. The DMA’s thresholds were designed for consumer‑facing platforms and use metrics like monthly active end users—measures that do not map cleanly onto enterprise cloud contracts and infrastructure. That is one reason cloud services have, until now, been hard to bring within the DMA’s usual gatekeeper frame. The Commission’s current thinking appears to be about whether and how to adapt DMA tools—through an investigation or a review—to cloud’s enterprise realities.

Balanced assessment: strengths, blind spots and unintended consequences

Strengths of the hyperscalers (why customers choose them)

  • Economies of scale and feature breadth. Hyperscalers deliver global reach, specialist managed services and continuous innovation (especially in AI and data services) that few regional players can match.
  • Operational expertise and certifications. Compliance frameworks, security operations centres and resilient networking are expensive to build; many customers rely on hyperscalers precisely because they provide these as managed capabilities.
  • Ecosystem effects. Rich partner networks, marketplaces and developer tooling accelerate product development and lower time‑to‑market for enterprises and startups.
These benefits explain widespread adoption and are the economic counterweight to regulatory arguments for intervention.

Structural and operational risks (what the outages exposed)

  • Control‑plane fragility. DNS and global edge fabrics are single points of failure when so many services rely on them; misconfigurations or automation defects can cascade rapidly.
  • Lock‑in via proprietary primitives and egress economics. Proprietary managed services (e.g., DynamoDB, specialized networking features) and data‑transfer pricing create real switching costs that can be used strategically or simply make migration prohibitively expensive.
  • Opacity and limited customer telemetry. Customers often lack visibility into routing, DNS and control‑plane state that would allow faster failovers or informed procurement decisions during incidents.

Regulatory trade‑offs and risks

  • Overreach vs. underreach. Heavy obligations could force hyperscalers to unwind practical efficiencies, slow innovation or create compliance costs that ultimately raise prices or push investment elsewhere. Conversely, too little action risks locking in dependence and increasing systemic fragility.
  • Fragmentation risk. Mandating data‑localization or awkward technical interfaces in the name of sovereignty can fragment operational security models and harm resilience that relies on global threat intelligence sharing.
  • Implementation complexity. The DMA was not originally calibrated for enterprise cloud. Translating consumer‑oriented gatekeeper rules into meaningful, enforceable obligations for cloud primitives is challenging and will invite technical pushback and long legal fights.

What regulators might realistically do next

  • Launch a formal inquiry to determine whether specific cloud services meet DMA designation criteria in practice (a process that gathers evidence on market share, switching costs and impacts on business users).
  • If designation is possible, consider a tailored obligations package that emphasizes:
      • technical interoperability and open standards for control‑plane APIs,
      • audited data portability tooling with measurable performance indicators,
      • constraints on bundling that create untenable switching costs.
  • Alternatively or in parallel, use competition law and sector‑specific instruments (procurement rules, critical‑infrastructure designation, incident reporting mandates) to drive transparency and resilience without full DMA gatekeeper obligations.
UK precedent is informative here: the Competition and Markets Authority’s provisional findings recommended further investigation of AWS and Microsoft under UK digital markets legislation; similar arguments and remedies are under debate at EU level. Any Commission action will weigh the costs of intervention against the political imperative to reduce systemic dependency.

Practical steps for enterprises and public bodies (what to do now)

  • Map dependencies. Identify which application flows rely on single control‑plane primitives (DNS entries, global ingress, single managed databases).
  • Harden DNS and authentication resilience. Add independent DNS health checks, diversify edge providers where feasible, and decouple identity dependencies from a single global fabric.
  • Practice multi‑region and multi‑cloud failovers. Test runbooks for cross‑region failover, and catalogue the business costs that result from different outage classes to inform procurement and SLA negotiations.
  • Demand post‑incident transparency. When renewing contracts, require quantified post‑incident reports, measurable remediation commitments and contractual remedies tied to resilience.
  • Engage policymakers through trade associations. Collective responses can shape sensible regulation that improves contestability without destroying global-scale operational efficiencies.
Operational advice that many SRE teams already embrace—graceful degradation, circuit breakers, idempotent retries and rigorous traffic‑shaping—remains the best immediate protection against upstream failures.
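As a concrete illustration of one of those patterns, here is a minimal circuit‑breaker sketch (illustrative only; production systems typically rely on a hardened library or service‑mesh feature rather than hand‑rolled code): after a run of consecutive failures it “opens” and fails fast, retrying the upstream only after a cool‑down.

```python
# Minimal circuit-breaker sketch: after `threshold` consecutive failures the
# breaker opens and calls fail fast for `cooldown` seconds, protecting callers
# (and the struggling upstream) from pile-ups during a provider outage.
import time


class CircuitOpenError(Exception):
    """Raised when the breaker is open and the call is skipped."""


class CircuitBreaker:
    def __init__(self, threshold: int = 5, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.cooldown:
                raise CircuitOpenError("upstream marked unhealthy; failing fast")
            self.opened_at = None              # cool-down elapsed: allow a trial call
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.time()   # open the breaker
            raise
        self.failures = 0                      # any success resets the failure count
        return result
```

Wrapping calls to a single cloud dependency this way keeps a regional or control‑plane outage from consuming every request thread while the incident is mitigated upstream.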

What this could mean for the cloud market

  • Short term: higher regulatory friction, louder political pressure for sovereign cloud options, and accelerated product changes by hyperscalers (e.g., lower egress costs or new portability tooling) to blunt enforcement risk.
  • Medium term: possible formal remedies (interoperability rules, portability obligations) that reduce some switching friction but also create compliance overheads and potential innovation delays as vendors architect for regulatory constraints.
  • Long term: either the emergence of a stronger European sovereign cloud ecosystem (if public funding and procurement shift decisively) or a continued hyperscaler‑driven market shaped by targeted remedies and technical standards that enable easier multi‑cloud operations.
Analysts and industry groups are split on the desirability and efficacy of compulsory structural or behavioral interventions. The right policy balance must preserve innovation incentives while reducing systemic risk and vendor lock‑in.

Critical appraisal — strengths and blind spots of the Commission’s likely approach

The case for regulatory action is strong on factual grounds: market concentration, demonstrable switching costs and multiple high‑visibility incidents make the policy problem tangible. Regulators have legitimate tools in the DMA and competition law to address anti‑competitive practices that create lock‑in. At the same time, the DMA’s original design targets consumer platforms and its metrics do not neatly translate to enterprise cloud contracts—this makes any adaptation legally and technically delicate.
Potential blind spots:
  • Demands for technical precision. Effective cloud remedies require deep technical understanding of control‑plane architectures and realistic migration costs; policymakers risk mis‑specifying obligations in ways that create perverse incentives.
  • Underestimating geopolitical supply chains. Forcing hasty fragmentation or localization may degrade cross‑border threat intelligence and operational resilience.
  • Legal and economic friction. Large enforcement actions will draw prolonged litigation that may slow the arrival of practical remedies.
A prudent Commission approach would combine targeted inquiries with stakeholder consultation, technical standards development and calibrated remedies focused on interoperability, portability and incident transparency rather than blunt structural separation. That avoids the trap of either doing nothing or imposing injudicious wholesale remedies.

Final assessment and next steps

The October and June outages shifted the cloud debate from theoretical concentration concerns to immediate policy risk. Regulators in the UK and EU are already moving: the UK’s CMA has signalled further inquiry actions, and the EU’s policy agenda now explicitly includes cloud and AI considerations. Whether the European Commission will apply the DMA directly to the major cloud providers or pursue tailored measures that borrow the DMA’s spirit—enforcing interoperability, portability and anti‑tying rules—remains to be seen.
For enterprises, the short‑term imperative is operational: map dependencies, test failover plans, and demand stronger contractual transparency from providers. For policymakers, the challenge is harder: craft enforceable, technical remedies that reduce lock‑in and increase resilience without degrading the global efficiencies that hyperscalers enable.
Regulatory filings and public Commission notices will be the authoritative guide to actions that follow. Reporting suggests an inquiry is planned; the underlying primary reporting attributed to Bloomberg is plausible and consistent with public policy trajectories, but the exact Bloomberg dispatch referenced in some summaries could not be located for independent verification at the time of publication. The situation is evolving rapidly and will likely produce formal filings and public consultations in the weeks ahead.

Short checklist for IT leaders and procurement teams

  • Inventory cloud control‑plane dependencies and document recovery costs for key business flows.
  • Require post‑incident root‑cause reporting and measurable remediation commitments in renewals.
  • Architect for graceful degradation (cache‑first design, offline modes, reduced‑function fallbacks).
  • Test multi‑region and multi‑cloud failovers regularly; simulate DNS and identity failures (a test‑style sketch follows this checklist).
  • Track regulatory developments and update procurement clauses to reflect potential portability and interoperability requirements.
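One low‑cost way to rehearse the “simulate DNS and identity failures” item is a game‑day style unit test that forces name resolution to fail and asserts the application degrades gracefully. The sketch below is illustrative: fetch_prices, the hostname and the cached fallback are hypothetical stand‑ins for real application code.

```python
# Game-day style test sketch: monkeypatch name resolution to fail and check
# that the application falls back to a degraded mode instead of crashing.
# `fetch_prices`, the hostname and the cache fallback are all hypothetical.
import socket
from unittest import mock


def fetch_prices() -> dict:
    """Illustrative app function: prefer the live API, fall back to a local cache."""
    try:
        socket.getaddrinfo("pricing.example.internal", 443)
        return {"source": "live"}      # real code would call the API here
    except socket.gaierror:
        return {"source": "cache"}     # degraded but functional


def test_survives_dns_outage():
    with mock.patch("socket.getaddrinfo",
                    side_effect=socket.gaierror("simulated DNS outage")):
        assert fetch_prices()["source"] == "cache"


if __name__ == "__main__":
    test_survives_dns_outage()
    print("graceful degradation verified under simulated DNS failure")
```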
The outages of 2025 are a wake‑up call: cloud scale brings power and fragility in equal measure. The policy response—if it is proportionate, technically informed and targeted—can make the cloud safer and more contestable. If it is blunt or rushed, it risks imposing new costs without materially improving resilience. The next months will determine which path Europe, and perhaps the wider world, chooses.

Source: Techzine Global, "AWS, Azure, and Google Cloud under scrutiny by EU after series of outages"
 
