Amazon Web Services is still the cloud market leader, but the landscape that made AWS dominant is shifting fast — Microsoft Azure and Google Cloud are accelerating, specialised "neoclouds" are carving out lucrative AI niches, and worldwide infrastructure spend is ballooning at a pace that is changing the rules of competition.
Background
Since launching in 2006, Amazon Web Services (AWS) built and held a commanding lead in public cloud infrastructure by delivering broad global coverage, a vast product catalog, and relentless operational scale. For more than a decade that combination translated into an expanding revenue base and a durable competitive advantage, especially for general-purpose compute, storage, and enterprise web workloads.

That dynamic is evolving. The global cloud infrastructure market is expanding rapidly — driven now more heavily by AI workloads and GPU-hungry applications — and the winners are no longer decided by breadth alone. The market is re-aligning around a smaller set of hyperscalers plus a fast-growing group of specialised providers that optimise specifically for AI, GPU-as-a-Service, and high-density training clusters. Recent industry analyses show the "Big Three" hyperscalers — AWS, Microsoft Azure, and Google Cloud — now command roughly two-thirds of global infrastructure spend even as the overall pie grows dramatically.
Market snapshot: Q3 2025 by the numbers
The most immediately important statistic for framing the debate is total market size and concentration.
- The cloud infrastructure market reached about $107 billion in Q3 2025, reflecting a sequential jump and year-over-year growth of roughly 28 percent, one of the strongest sequential and annual expansion rates in recent years.
- The top three providers — AWS (29%), Microsoft Azure (20%), and Google Cloud (13%) — together accounted for approximately 63% of worldwide infrastructure spending in Q3 2025. That concentration is higher than in the immediate past, meaning the market is growing but is doing so with the hyperscalers capturing an increasing share of the gains.
- AWS’s market share, while still leading, has shown a long‑term plateau and marginal erosion since peaking in 2022. Analysts describe AWS’s share as “just under 30%” on average across several recent quarters.
Why the shift is happening: AI, GPUs and demand for specialised capacity
AI is the accelerant
Generative AI and large language model (LLM) workloads require vastly more GPU capacity than conventional cloud applications. Training a large model is orders of magnitude more compute-intensive, and even inference at scale drives sustained GPU utilization and network bandwidth that traditional cloud providers did not architect for as their primary revenue driver.

This structural change has two consequences:
- Hyperscalers with deep pockets (and large installed-data-center footprints) can and have committed enormous capital to secure GPU capacity, custom silicon, and high-speed networking. That investment protects them, but it also opens opportunities for smaller players who specialise in GPU-native environments.
- Companies with specialised GPU-focused fleets — sometimes called neoclouds — can undercut or out-innovate hyperscalers on price/performance for AI workloads, particularly when customers require raw GPU density or unique configurations (e.g., NVL72/Blackwell-class GPUs). That is driving rapid growth for providers like CoreWeave, Lambda, and others.
Neoclouds: niche specialization at scale
A new cohort of providers — often capitalising on available GPU inventory, favourable data-centre economics, and vertical product focus — is attracting AI workloads that historically would have defaulted to the hyperscalers. These providers market themselves on:
- GPU density and flexibility (custom instance types, early access to new GPU generations)
- Transparent pricing for long-running training jobs
- Developer-friendly integrations with ML tooling and MLOps platforms
How each major player is reacting
AWS: scale, custom silicon, and strategic partnerships
AWS retains the broadest set of platform services and an unmatched global footprint. Its advantage still lies in the diversity of services — from basic compute and storage to advanced managed AI services and device-to-cloud solutions — that enterprises need to modernise.

Recent company moves and market events demonstrate strategic sharpening:
- AWS has accelerated capital expenditures to add AI-focused capacity and to deliver higher-density GPU instances. That investment is part of the firm's effort to ensure it remains the default choice for cloud-first and AI workloads. Market commentary notes that AWS invested heavily in AI infrastructure and that this has been a driver behind stronger quarterly results.
- Critically, AWS secured a major engagement in late 2025 with OpenAI, a multi-year cloud services agreement that underscores AWS’s competitiveness in the AI infrastructure market. That deal signals customer confidence in AWS’s ability to supply large-scale GPU resources and manage extreme workloads. The agreement also highlights the strategic importance of hyperscalers to AI developers who need predictable, highly‑scalable compute.
Microsoft Azure: enterprise land grab and hybrid leadership
Microsoft has leveraged its enterprise footprint — Office 365, Windows Server, SQL Server and Dynamics — to drive Azure adoption. Azure's growth benefits from deep enterprise relationships and product integrations that make it the default cloud for many Windows-centric organisations.
- Microsoft focuses on integrated AI services (Azure OpenAI Service, Copilot integrations) and hybrid solutions such as Azure Arc, making it attractive for enterprises that require a mix of on-premises control and cloud agility.
- Azure’s growth rate has outpaced AWS in many recent quarters on a percentage basis, although that was from a smaller revenue base; the company continues to push both capex for AI infrastructure and commercial offers to lock in enterprise workloads.
Google Cloud: data, models and software engineering DNA
Google Cloud is leveraging strengths in data analytics, machine learning tooling (TensorFlow, Vertex AI), and networking to win net-new cloud customers. Google has been investing to close gaps in enterprise go-to-market and to target AI-native workloads where its software and model expertise provide differentiation.
- Google Cloud's faster growth rate reflects both market momentum and concerted commercial efforts, particularly around AI platform services that appeal to data science teams and ML engineering organisations.
The rise of specialized providers and what “neoclouds” mean for enterprise buyers
Neoclouds are reshaping procurement for AI infrastructure. Their appeal is simple: purpose-built GPU infrastructure delivered with fewer legacy compromises. For many training workloads, they offer compelling economics and performance.

Key characteristics of neoclouds:
- GPU-first hardware economics, often securing preferred allocations of NVIDIA GPUs and custom server builds.
- Developer-centric tooling, integrating at the MLOps layer to reduce friction from model experimentation to production.
- Flexible commercial terms, including spot or reserved GPU capacity designed for long-running training jobs.
However, neoclouds also carry risks and trade-offs that buyers should weigh:
- Many neoclouds run capital-intensive businesses that require constant access to GPUs and favourable pricing from suppliers; if GPU supply or pricing changes materially, their cost model is vulnerable.
- Customers must consider vendor stability, SLAs, geographic footprint, compliance and data residency controls — areas where hyperscalers still hold decisive advantages.
- Integration and portability remain challenges. While neoclouds often support standard ML frameworks, migrating a production workload between providers can reveal differences in networking, storage semantics, and support models.
Regional dynamics and regulation
Cloud growth is global but uneven. The United States remains the largest market and the engine of absolute demand growth, but pockets of rapid expansion — India, Australia, Mexico, Ireland, South Africa — are significant. These geographic differences matter because:
- Compliance and data-residency requirements push some customers to pick providers with specific local availability zones and regulatory capabilities.
- National security and competition authorities are increasingly scrutinising hyperscaler dominance. For example, regulatory bodies in the UK have publicly documented high concentration in the IaaS market, observing that AWS and Azure account for substantial shares of IaaS spending in the region. These findings influence procurement, investment, and even market entry strategies.
Practical guidance for IT leaders and Windows-centric organisations
Enterprises running Windows Server, Active Directory, SQL Server, or Microsoft 365 workloads face specific considerations in this shifting market. The best approach balances current operational realities with future flexibility.

Actionable steps:
- Map workloads to cloud economics
- Classify apps by sensitivity, performance profile, and GPU needs. Use a matrix to determine which workloads require hyperscaler SLAs, which can benefit from neocloud economics, and which should remain on-premises for latency or compliance reasons.
- Adopt a workload portability strategy
- Containerise or package services where possible. Use infrastructure-as-code (IaC) and standard orchestration (Kubernetes) to minimise provider lock-in and simplify migration between hyperscalers and specialised GPU providers.
- Negotiate for GPU guarantees
- If AI training is strategic, include commitments for GPU allocation and circuit-level networking in procurement contracts. Multi‑provider commitments can be used to secure redundancy and pricing leverage.
- Embrace hybrid and edge integration
- Technologies like Azure Arc and other multi-cloud management tools help extend governance and control across environments. For Windows-heavy shops, hybrid options retain compatibility while benefiting from cloud innovation.
- Evaluate total cost of ownership (TCO) for AI workloads
- Look beyond hourly instance costs: include data egress, storage I/O, long-term model hosting, and managed service fees. Training on a cheaper GPU provider may still incur higher orchestration and integration costs.
- Build MLOps and observability practices
- Invest in tooling that tracks model training costs, GPU utilization, and reproducibility. This reduces waste and protects against runaway cost growth as model experimentation scales.
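The classification matrix in step one can be sketched in a few lines of code. This is a minimal illustration, not a prescribed tool: the workload names, fields, and thresholds below are hypothetical placeholders that an organisation would replace with its own criteria.

```python
# Sketch of a workload-placement matrix: score each workload on the axes
# described above (data sensitivity, latency, GPU demand) and suggest a
# target environment. All names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_sensitivity: str   # "low" or "high" (compliance / data residency)
    latency_critical: bool  # needs on-premises or edge proximity
    gpu_hours_month: int    # sustained monthly GPU demand

def placement(w: Workload) -> str:
    # Compliance- or latency-bound workloads stay close to home.
    if w.data_sensitivity == "high" or w.latency_critical:
        return "on-premises / hybrid (hyperscaler with local region)"
    # Heavy sustained GPU use is where neocloud economics can pay off.
    if w.gpu_hours_month > 1000:
        return "specialised GPU provider (neocloud)"
    return "hyperscaler (broad SLAs, managed services)"

workloads = [
    Workload("payroll-db", "high", True, 0),
    Workload("llm-finetune", "low", False, 5000),
    Workload("intranet-portal", "low", False, 0),
]
for w in workloads:
    print(f"{w.name}: {placement(w)}")
```

In practice the matrix would have more axes (egress volume, support model, residency jurisdiction), but even this coarse version forces the conversation about which workloads genuinely need hyperscaler SLAs.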
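The TCO advice in step five can likewise be made concrete. The sketch below compares a training job on two providers by summing compute, egress, storage, and managed-service fees; every rate shown is a made-up placeholder, not a real provider price, and a real model would add orchestration and integration labour.

```python
# Rough TCO comparison for a long-running training job, looking beyond the
# hourly instance price as advised above. All rates are hypothetical.
def training_tco(gpu_hours, rate_per_gpu_hour, egress_gb, egress_rate,
                 storage_gb_month, storage_rate, months, managed_fee=0.0):
    compute = gpu_hours * rate_per_gpu_hour     # raw GPU time
    egress = egress_gb * egress_rate            # data leaving the provider
    storage = storage_gb_month * storage_rate * months  # datasets, checkpoints
    return compute + egress + storage + managed_fee

# Hypothetical scenario: a neocloud with a cheaper GPU rate but pricier
# egress, versus a hyperscaler with a higher rate plus managed-service fees.
neocloud = training_tco(10_000, 2.00, 5_000, 0.12, 20_000, 0.02, 3)
hyperscaler = training_tco(10_000, 2.80, 5_000, 0.05, 20_000, 0.02, 3,
                           managed_fee=1_500)
print(f"neocloud:    ${neocloud:,.2f}")
print(f"hyperscaler: ${hyperscaler:,.2f}")
```

With these invented numbers the neocloud wins, but the point of the exercise is the line items: change the egress volume or add integration costs and the ranking can flip.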
Strategic risks and unanswered questions
While the market shows clear winners and opportunities, several risks deserve attention.
- Supply-chain concentration for GPUs: NVIDIA currently dominates modern training GPUs. Any disruption — manufacturing bottlenecks, export controls, or supplier pricing changes — would disproportionately affect neoclouds and hyperscalers alike. That concentration elevates strategic risk across the ecosystem.
- Capital intensity and viability of neoclouds: Rapid growth requires heavy upfront capital to buy GPUs and data-centre capacity. If market demand normalises or GPU prices spike, margins could compress and valuations could correct. Early investors and customers should watch cash-flow profiles and customer concentration carefully.
- Regulatory intervention and anti‑trust scrutiny: Authorities in multiple jurisdictions are already monitoring the high concentration of infrastructure spend. Potential remedies, mandated data portability, or new compliance requirements could alter the economics of scale that benefit large hyperscalers.
- Vendor lock-in via AI services: As hyperscalers wrap models and AI tooling tightly into proprietary managed services (e.g., proprietary model hosting, proprietary prompt management), organisations may face re‑entry costs if they later wish to migrate. Designing with standard formats and open tooling mitigates this risk.
Short-term outlook (12–24 months)
- Sustained high growth: The cloud market will remain in a high-growth phase driven by AI-led demand and the operationalisation of models. Expect continued double-digit to high‑20s percentage growth year‑over‑year for cloud infrastructure in the near term.
- Hyperscalers keep share, but not trivially: AWS will remain the largest single provider, but Microsoft and Google will likely continue to outpace it on percentage growth in many quarters due to their enterprise leverage and AI service momentum. The combined presence of the Big Three will likely absorb an outsized portion of new spend.
- Neoclouds grow selectively: The most successful neoclouds will be those that secure stable GPU supply, diversify customer bases beyond a few anchor clients, and offer strong developer integrations that simplify productionisation of models. Others may struggle with the capital intensity and razor‑thin margins inherent in infrastructure provisioning.
Long-term scenarios (3–5 years)
- Consolidation and hybrid equilibrium
- The market consolidates: a handful of hyperscalers maintain dominance for broad enterprise workloads while specialist providers either get acquired or stabilise as profitable niche players. Hybrid deployments become the default design pattern for critical applications.
- Open AI infrastructure stack emerges
- If open standards and portable model formats gain traction — together with improvements in on‑premises accelerators — enterprises might reclaim more workload footprint, and procurement could shift toward a mixed model of on‑prem and cloud-native deployment.
- Regulatory reshaping
- Competition authorities impose remedies or stricter data portability rules that reshape contract negotiation dynamics and reduce lock‑in. This would help smaller providers but also raise compliance costs and complexity for everyone.
What IT teams should do this quarter
- Conduct an immediate audit of AI projects: determine cost drivers, GPU usage patterns, and time-to-production metrics.
- Identify critical dependencies: highlight single‑vendor chokepoints and create contingency plans in case a provider’s availability or pricing changes.
- Pilot multi-provider workflows: run one or two non-critical AI training jobs on a specialised GPU provider to quantify real-world cost and operational differences.
- Revisit contractual terms: ask for GPU capacity guarantees, predictable pricing for long-running jobs, and clearer SLAs for AI-specific services.
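The audit in the first item boils down to aggregating usage records per project to surface cost drivers. A minimal sketch, assuming a billing export has already been parsed into dictionaries (the field names and figures below are invented):

```python
# Aggregate per-project GPU hours and spend from parsed usage records to
# surface cost drivers and effective $/GPU-hour. Fields are assumptions;
# adapt to whatever your provider's billing export actually contains.
from collections import defaultdict

usage_records = [
    {"project": "chatbot", "gpu_hours": 420.0, "cost": 1260.0},
    {"project": "chatbot", "gpu_hours": 180.0, "cost": 540.0},
    {"project": "vision-poc", "gpu_hours": 35.0, "cost": 122.5},
]

totals = defaultdict(lambda: {"gpu_hours": 0.0, "cost": 0.0})
for r in usage_records:
    totals[r["project"]]["gpu_hours"] += r["gpu_hours"]
    totals[r["project"]]["cost"] += r["cost"]

# Report the biggest spenders first, with the effective blended GPU rate.
for project, t in sorted(totals.items(), key=lambda kv: -kv[1]["cost"]):
    rate = t["cost"] / t["gpu_hours"]
    print(f"{project}: {t['gpu_hours']:.0f} GPU-h, "
          f"${t['cost']:,.2f} (${rate:.2f}/GPU-h)")
```

The blended $/GPU-hour figure is also the natural baseline for the multi-provider pilot in the third item: run the same job elsewhere and compare the effective rate, not just the list price.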
Conclusion
The cloud market in 2025 is defined by rapid expansion and a realignment driven by AI. AWS remains the market leader, but it now faces an environment where growth rate, specialised hardware access, partner ecosystem depth, and enterprise integration matter in different ways than they did five years ago. Hyperscalers will continue to dominate the bulk of new infrastructure spend, but the rise of neoclouds and AI-specialist providers has introduced meaningful competition for high-value, GPU-intensive workloads.

For Windows-focused IT organisations, the sensible strategy is not to choose a winner today but to architect for flexibility: map workloads to the right economic and technical environment, push for contract terms that protect GPU allocations, and invest in portability and MLOps practices that reduce migration friction. That balanced approach protects enterprises from vendor instability and supplier concentration risks while allowing them to capture cost and performance advantages as the market continues to evolve.
Source: Computing UK https://www.computing.co.uk/news/2025/cloud/aws-feels-the-heat/

