The internet you use every day — from messaging apps and streaming services to online banking and government portals — runs on racks of servers, miles of fibre and a handful of companies that operate vast, power-hungry data centres: the cloud is the invisible engine of the web. Recent reporting and a fresh global outage have thrown that dependence into stark relief, showing how markets, engineering choices and energy systems combine to shape what is and isn’t available online at any moment.
Background
Cloud computing is not a single thing but a set of operational models and commercial contracts that let organisations rent computing power, storage and applications rather than owning them. The most visible form to consumers is Software as a Service (SaaS) — web apps like email, collaboration suites and CRMs that run entirely on provider infrastructure. Behind the scenes sit Infrastructure as a Service (IaaS), where customers lease raw compute and networking capacity, and Platform as a Service (PaaS), which sits between the two by also handling part of the software stack for developers.

Three firms — Amazon Web Services (AWS), Microsoft Azure and Google Cloud — together dominate the global market for cloud infrastructure, often described as the “Big Three” or hyperscalers. Market analysis for the second quarter of 2025 shows AWS at roughly 30% market share, Microsoft at 20% and Google Cloud at 13%, a concentration that explains why outages at one provider ripple across the internet and why governments and businesses keep a close eye on the companies that control so much capacity.
The outage that matters: what happened and why it matters
On October 20, 2025, a major AWS disruption left large swathes of the web with degraded or no access for several hours. Millions of users were unable to connect to services ranging from social apps to e‑commerce and smart‑home devices, underscoring a simple fact: when core cloud infrastructure breaks, the user experience breaks too. Early reporting traced the failure to internal network and service‑health subsystems within AWS’s critical US‑East region, producing cascading failures for services that rely on DNS, databases and load‑balancing functions. Recovery took many hours, and the impact was global.

Why the outage was more disruptive than a conventional data centre failure:
- Hyperscalers centralise large volumes of critical services and APIs in a small number of availability regions, increasing blast radius when those regions fail.
- SaaS adoption means the failure of a single upstream service can deny access to thousands of downstream apps.
- The modern stack relies on many managed services (databases, identity, edge routing); if the provider’s control plane or monitoring systems degrade, customers have limited means to work around the issue beyond client‑side fallbacks, as sketched below.
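To make that last point concrete, here is a minimal, illustrative sketch of client‑side graceful degradation: try a primary endpoint with a short timeout, fall back to a secondary region, and finally serve the last known good response rather than failing outright. The endpoint URLs and cache are hypothetical placeholders, not any provider’s real API.

```python
import urllib.request
import urllib.error

# Hypothetical endpoints: a primary region and a fallback replica.
PRIMARY = "https://api.us-east.example.com/status"
FALLBACK = "https://api.eu-west.example.com/status"

_last_good_response = None  # stale-but-usable cache


def fetch_status(timeout: float = 2.0) -> str:
    """Try primary, then fallback; degrade to cached data if both fail."""
    global _last_good_response
    for url in (PRIMARY, FALLBACK):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                body = resp.read().decode("utf-8")
                _last_good_response = body  # refresh cache on success
                return body
        except (urllib.error.URLError, TimeoutError):
            continue  # endpoint degraded; try the next one
    if _last_good_response is not None:
        return _last_good_response  # stale data beats a hard failure
    raise RuntimeError("all endpoints unavailable and no cached data")
```

The point of the pattern is that the application keeps answering, if imperfectly, even while the provider’s control plane is down.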
Cloud models and adoption: EU snapshot and business behaviour
Cloud services come in three major models — SaaS, IaaS and PaaS — and adoption patterns differ by company size and use case. In the European Union, roughly four in ten businesses used cloud services in recent surveys, mostly for email, file storage and office or cybersecurity software. Larger firms adopt the cloud at far higher rates than small businesses: nearly eight in ten large firms used cloud services versus fewer than half of small firms in the same dataset. SaaS is by far the most common purchase, while PaaS remains the least adopted. These adoption patterns drive resilience choices: many organisations accept SaaS convenience at the cost of ceding operational control to the provider.

Practical consequences for IT teams:
- SaaS reduces local maintenance overhead but concentrates dependencies on vendor SLAs and operational behaviour.
- IaaS and private cloud give more control but transfer the responsibility for patching, scaling and network architecture back to the customer.
- PaaS accelerates development but can lock workloads into vendor APIs and upgrade paths.
Who builds the cloud — and why it’s so expensive
Hyperscalers can finance enormous, multi‑year projects because the prize is the platform economy: recurring revenue from millions of customers, plus strategic control of AI and infrastructure services. Building and equipping modern data centre campuses — often called AI campuses or hyperscale facilities — can run into the hundreds of millions or billions of dollars per site. Recent deals and projects make that plain: Meta announced a $1.5 billion AI data centre project in Texas, while private developers and specialised cloud builders regularly report multi‑hundred‑million to billion‑dollar facilities tailored for GPU‑heavy AI workloads. These costs reflect land, grid upgrades, cooling systems, power infrastructure and specialised IT hardware.

Why scale matters financially:
- Economies of scale in procurement (chips, racks, networking) reduce unit cost.
- Owning capacity lets providers monetise AI and cloud offerings competitively and secure supply for long‑term AI training projects.
- The capital intensity creates high barriers to entry — new entrants struggle to match the cost structure and geographic footprint of the established hyperscalers.
The energy and environmental equation
Data centres are heavy electricity consumers, and their growth is reshaping local grids and national energy planning. Global modelling and industry reporting project that data‑centre electricity demand will grow substantially through the end of the decade, driven in large part by AI‑intensive workloads. Estimates from international energy modelling show data centres consumed a few hundred terawatt‑hours in recent years, and those models forecast a significant uptick by 2030 if current trends continue. In the U.S., data centres already account for multiple percentage points of total electricity demand in some studies, and grid operators are planning investments to keep pace with rapid increases in regional data‑centre load. (A back‑of‑envelope sense of that scale follows the list below.)

Environmental and operational implications:
- The carbon impact depends heavily on where data centres draw power. Facilities powered by renewables can be low‑carbon on paper, but grid constraints and the timing of demand still create emissions challenges.
- Water use and local ecosystem impacts from large cooling installations are increasingly factored into permitting and community response.
- Utilities and regulators are responding: approvals for new generation, transmission upgrades and sometimes subsidies appear alongside data‑centre planning.
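To give those terawatt‑hour figures a sense of scale, here is a back‑of‑envelope calculation using assumed, illustrative numbers: a hypothetical 1 GW campus running at full load, and a rough order‑of‑magnitude figure for total annual US electricity demand.

```python
# Back-of-envelope scale check with assumed, illustrative figures.
campus_power_gw = 1.0   # hypothetical large AI campus at full load
hours_per_year = 8760
annual_twh = campus_power_gw * hours_per_year / 1000  # GWh -> TWh
us_total_twh = 4000     # rough order of magnitude for annual US demand
share = annual_twh / us_total_twh
print(f"{annual_twh:.1f} TWh/year ~ {share:.1%} of US electricity demand")
# -> 8.8 TWh/year ~ 0.2% of US electricity demand
```

On those assumptions, a single gigawatt‑scale campus would draw almost 9 TWh a year, which is why a handful of new sites can move regional grid planning.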
Strengths of the cloud model
The public cloud and its hyperscalers deliver undeniable advantages that explain rapid adoption across sectors:

- Scalability: organisations can grow or shrink compute resources quickly without capital investment.
- Time to market: developers can provision managed services and iterate rapidly.
- Cost model: pay‑as‑you‑go pricing turns capital expenditure into operational expenditure for many buyers.
- Global reach: large providers offer regional footprint and content‑delivery networks that improve latency and availability for global audiences.
Risks, tradeoffs and sensible mitigations
The episode of disruption and the market reality outlined above surface three key risks every organisation should evaluate.

1) Concentration and systemic risk
Centralisation among a few providers means that provider outages have outsized systemic effects. Risk mitigation strategies include multi‑region architectures, cross‑cloud redundancy and staged failover plans — but these add cost and complexity and are not always feasible for smaller firms.
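As one illustration of what a staged failover plan can mean in practice, the sketch below (assumed region names and hypothetical health endpoints) promotes the standby region only after several consecutive health‑check failures, which avoids flapping between regions on a single transient error.

```python
import time
import urllib.request
import urllib.error

# Hypothetical health endpoints for an active/standby pair of regions.
REGIONS = {
    "us-east": "https://health.us-east.example.com/ping",
    "eu-west": "https://health.eu-west.example.com/ping",
}
FAILURE_THRESHOLD = 3  # consecutive failures before promoting standby


def healthy(url: str, timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False


def monitor(active: str = "us-east", standby: str = "eu-west") -> None:
    failures = 0
    while True:
        if healthy(REGIONS[active]):
            failures = 0
        else:
            failures += 1
            print(f"{active} failed check ({failures}/{FAILURE_THRESHOLD})")
            if failures >= FAILURE_THRESHOLD and healthy(REGIONS[standby]):
                print(f"promoting {standby}; demoting {active}")
                active, standby = standby, active
                failures = 0
        time.sleep(30)  # poll interval; tune to your SLA
```

A real deployment would tie the promotion step to DNS or load‑balancer updates; the staged threshold is the essential idea.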
2) Vendor lock‑in and migration complexity
PaaS offerings and managed services accelerate development but often use proprietary APIs and managed databases, increasing migration cost if a change of provider becomes necessary. The practical approach is to separate core portable workloads from those where vendor‑specific acceleration yields decisive business value.
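One common way to keep core workloads portable is to put a thin interface between application code and provider‑specific services. The sketch below (hypothetical class and function names) shows the idea for object storage: application code depends only on the interface, so switching providers means writing a new adapter rather than rewriting the application.

```python
from abc import ABC, abstractmethod


class BlobStore(ABC):
    """Provider-neutral interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(BlobStore):
    """Test/fallback implementation with no provider dependency."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]


# A real deployment would add S3-backed or Azure-backed adapters wrapping
# the vendor SDKs; application code depends only on BlobStore.
def archive_report(store: BlobStore, name: str, body: bytes) -> None:
    store.put(f"reports/{name}", body)
```

The tradeoff named above still applies: where a vendor‑specific managed service delivers decisive value, the abstraction may not be worth its cost.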
3) Energy and local infrastructure constraints
Large new data centres strain local grids and water systems. Organisations that value sustainability must insist on transparent energy‑sourcing commitments from providers and require carbon and water usage reporting in contracts. Regulatory scrutiny and community opposition can also delay or alter projects; planning must therefore account for permitting risk.
Operational checklist (practical, sequential steps):
- Identify critical services that cannot tolerate extended provider outages.
- Design multi‑region deployments for those services or keep on‑prem or colocation fallbacks.
- Create a tested incident runbook for provider outages, including DNS, IAM and data restoration steps (a DNS‑verification sketch follows this list).
- Negotiate contractual SLAs that include credits and operational support for major incidents.
- Monitor power and sustainability claims and prefer providers with evidenced renewable procurement or on‑site generation where that aligns with policy.
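As an example of what an automated runbook step can look like, here is a small stdlib‑only sketch (hypothetical hostname and documentation‑range addresses) that checks whether a DNS cutover to a standby load balancer has actually propagated before the incident is declared resolved.

```python
import socket

# Hypothetical names/addresses: after failing over, confirm the public
# hostname resolves to the standby load balancer before declaring success.
HOSTNAME = "app.example.com"
EXPECTED_FAILOVER_IPS = {"203.0.113.10", "203.0.113.11"}


def dns_cutover_complete() -> bool:
    infos = socket.getaddrinfo(HOSTNAME, 443, proto=socket.IPPROTO_TCP)
    resolved = {info[4][0] for info in infos}
    print(f"{HOSTNAME} resolves to {sorted(resolved)}")
    return resolved.issubset(EXPECTED_FAILOVER_IPS)


if __name__ == "__main__":
    print("cutover complete" if dns_cutover_complete() else
          "old records still cached; wait for TTL to expire and retry")
```

Note that this checks only the local resolver; during a drill it is worth running the same check from several networks, since cached records expire at different times.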
Europe, sovereignty and the rise of alternatives
While the global market is dominated by US hyperscalers, other players — including Chinese firms and regional operators — hold meaningful shares in specific markets. European providers have struggled to keep overall market share as hyperscalers scale rapidly, but regional players continue to compete on data‑sovereignty, local support and regulatory alignment. Adoption patterns in the EU show large enterprises leading cloud uptake while smaller businesses lag behind for a mix of cost, skills and trust reasons. These dynamics shape procurement and public policy debates over digital sovereignty and antitrust scrutiny.

What enterprises and IT teams should do now
Short, actionable guidance for IT leaders and Windows administrators who rely on the cloud:

- Prioritise resilience for customer‑facing and compliance‑sensitive systems — multi‑region design and automated failover are essential for mission‑critical services.
- Treat cloud providers like utilities: design for graceful degradation, not full continuity — assume some services will be slower or unavailable during incidents.
- Use hybrid and colocation strategically: keep foundational identity, logging and backup services under your control where appropriate.
- Demand transparency and data: include audit rights and energy reporting in procurement; sustainability is a real operational constraint, not just PR.
- Run regular outage drills: rehearse switching to alternate endpoints, rolling back DNS changes and restoring from provider‑agnostic backups (a restore‑verification sketch follows this list).
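To give the drill idea concrete shape, the following sketch (assumed file paths) verifies a backup’s checksum and restores it into a scratch directory; rehearsing exactly this kind of step is what makes provider‑agnostic backups useful during a real incident.

```python
import hashlib
import tarfile
import tempfile
from pathlib import Path

# Hypothetical drill: restore a provider-agnostic tar backup into a scratch
# directory and verify its recorded checksum before trusting it.
BACKUP = Path("backups/app-data.tar.gz")         # assumed local backup copy
CHECKSUM_FILE = Path("backups/app-data.sha256")  # written at backup time


def verify_and_restore() -> Path:
    digest = hashlib.sha256(BACKUP.read_bytes()).hexdigest()
    expected = CHECKSUM_FILE.read_text().strip()
    if digest != expected:
        raise RuntimeError("checksum mismatch; do not restore this file")
    target = Path(tempfile.mkdtemp(prefix="restore-drill-"))
    with tarfile.open(BACKUP) as archive:
        archive.extractall(target, filter="data")  # safe extraction, 3.12+
    print(f"restore drill succeeded into {target}")
    return target
```

A drill that ends with an application actually reading the restored data, not just a successful extraction, is the stronger test.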
Bigger picture: cloud, AI and an infrastructure arms race
The cloud is now the platform for artificial intelligence at scale. That drives more capex into specialised facilities with extreme power and cooling needs — and, in turn, accelerates market concentration because only a few players can justify the scale. Private capital, sovereign funds and specialised operators are moving to finance capacity through leasing models and buyouts, which changes the vendor landscape but does not eliminate the concentration of compute access. The net effect: faster innovation, but fewer independent hubs and more systemic interdependence across critical services.

A caution: some headline project budget figures (public announcements for mega‑projects or multi‑year capacity investments) are forecasts or commitments that can change with market conditions. Treat multi‑hundred‑billion global commitments with scepticism until capital disbursements and site construction progress are visible. Where figures are uncertain or aspirational, stakeholders should seek contractually enforceable milestones rather than press‑release projections.
Conclusion
The cloud powers the modern web by turning computing into a rented, almost invisible service — and that transformation has huge benefits for speed, scale and developer productivity. But it also concentrates risk, demands unprecedented energy and infrastructure planning, and raises tradeoffs around control, cost and sustainability. The October 2025 outage made that tradeoff visible to the public: convenience and reach come with operating assumptions that can fail dangerously fast. Organisations that want to benefit from cloud scale while remaining resilient must design for failure, demand transparency from providers and treat energy and local infrastructure constraints as central planning factors rather than afterthoughts. The cloud is not just someone else’s servers: it is now a strategic asset of the global economy, and its management will shape both digital services and physical infrastructure for years to come.

Source: France 24, “Servers, software and data: how the cloud powers the web”