Alphabet raises 2025 capex to $91–$93B as AI datacenter race accelerates

Alphabet’s spending plans have just rewritten the map of the AI datacenter race: Google’s parent Alphabet said it will push capital expenditures to roughly $91–$93 billion in 2025, with management warning of a significant increase again in 2026 as cloud and AI demand outstrips capacity.

Background

The hyperscale cloud providers have entered a new, capital‑intensive phase. Over the last 18 months the industry shifted from exploratory AI projects to widespread enterprise deployments that require vast pools of GPUs, high‑bandwidth interconnect, and power‑dense datacenter campuses. That shift has forced the largest vendors to re‑rate infrastructure investment as strategic, rather than discretionary, capital. Alphabet’s latest upward revision of annual capex guidance is the most explicit example so far, following earlier increases this year that took its 2025 expectation from $75 billion in February to $85 billion in July, and now to the current $91–$93 billion range.

This is not an Alphabet‑only phenomenon. Microsoft disclosed a quarterly capex run‑rate topping $30 billion in its most recent reporting period, with a single quarter of spending of about $34.9 billion, much of it earmarked for short‑lived compute assets—GPUs and CPUs—needed to keep Azure and first‑party AI services online. Oracle has also embarked on a high‑stakes expansion, including large bond offerings and widely reported multi‑year contracts with OpenAI that industry sources have valued in the hundreds of billions.

Overview: What the new numbers mean

Alphabet’s scale-up: capex and revenue in context

Alphabet reported a Q3 revenue haul north of $102 billion—its first-ever $100 billion quarter—and attributed much of that momentum to cloud and AI demand. The revenue beat provides important context: management is increasing capital intensity while still delivering outsized top‑line growth, making the case that the higher capex is not simply speculative but tied to measurable customer demand. Key points:
  • Alphabet’s Q3 revenue was about $102.3–102.35 billion, up ~16% year‑over‑year.
  • Google Cloud grew strongly (mid‑30% range in the quarter), building a contracted backlog measured in the hundreds of billions that management cites as justification for the capital program.
  • Alphabet’s capex guidance was raised to $91–$93 billion for 2025, with management explicitly projecting further increases into 2026.
That magnitude of spending—roughly double Alphabet’s 2024 capex—places Google alongside Amazon, Microsoft, and Meta in what analysts now call a multi‑hundred‑billion dollar wave of infrastructure spending to enable AI at scale. The common denominator is heavy GPU consumption plus dense, power‑rich sites that require long lead times to plan, permit and build.
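The escalation within a single year can be made concrete with simple arithmetic on the guidance figures reported above (the $92 billion midpoint is derived from the $91–$93 billion range; all other numbers are as reported):

```python
# Arithmetic on Alphabet's 2025 capex guidance revisions, as reported:
# $75B (February) -> $85B (July) -> $91-93B (current).
feb_guidance = 75.0                   # $B
jul_guidance = 85.0                   # $B
latest_low, latest_high = 91.0, 93.0  # $B

midpoint = (latest_low + latest_high) / 2
revision_vs_feb = (midpoint / feb_guidance - 1) * 100
revision_vs_jul = (midpoint / jul_guidance - 1) * 100

print(f"Current-range midpoint: ${midpoint:.0f}B")
print(f"Upward revision since February: {revision_vs_feb:.0f}%")
print(f"Upward revision since July: {revision_vs_jul:.0f}%")
```

A roughly 23% upward revision to a nine-month-old annual plan is unusual for a company of Alphabet’s size, which is why the guidance change itself, and not just the absolute number, moved the market.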

Microsoft: buying GPUs now, building for the long term

Microsoft’s recent quarter shows a different mix but the same direction: a very large share of spending is on short‑lived assets (servers, GPUs, CPUs) to meet immediate Azure demand, while the rest funds datacenter campuses and networking. CFO commentary emphasized that capex will increase sequentially and that fiscal 2026’s growth rate for capex will be higher than fiscal 2025’s—a tacit admission that supply constraints for AI hardware are still binding and that Microsoft is accelerating purchases to get capacity into service quickly. Microsoft’s reported numbers:
  • Quarterly capex reported around $34.9 billion, well above many analysts’ expectations.
  • Roughly half (or a material share) of the rise is targeted at GPUs/CPUs and other short‑lived compute gear—an unusual capex composition that highlights the “consumption” nature of AI infrastructure today.

Oracle and the OpenAI effect: borrowing to build

Oracle’s recent bond activity—reported issuances in the $15–18 billion range, with further offerings contemplated—ties directly to its decision to back large AI customers and to scale OCI (Oracle Cloud Infrastructure) for GPU‑heavy workloads. Industry reporting has also surfaced a reported Oracle–OpenAI multi‑year cloud commitment that multiple outlets have valued at around $300 billion over roughly five years, which would imply enormous power commitments (several gigawatts) and multi‑year construction programs for datacenter campuses. Analysts have interpreted those obligations to mean that Oracle may have to increase borrowing substantially to finance build‑outs.

Caveat: the precise contours of Oracle’s financing plans—in particular the notion that it will need to borrow roughly $100 billion over four years—are analyst estimates rather than direct corporate guidance. These figures stem from models produced by investment banks and analysts that extrapolate construction, power provision, and hardware procurement costs against the reported size of the Oracle–OpenAI deal. Treat those borrowing‑needs figures as plausible scenario modeling, not a confirmed corporate pledge.

Why hyperscalers are spending so heavily now

1) The GPU shortage and the scramble to secure supply

GPUs and specialized AI accelerators—NVIDIA’s GB200/Blackwell line and emerging custom silicon—remain a chokepoint. Enterprises want predictable capacity for model training and inference; cloud builders want to avoid losing strategic customers to capacity constraints. That dynamic drives both an immediate need to buy short‑lived compute assets and a medium‑term push to expand datacenter regions. The result: spending on inventory and site construction at rates not seen since the last hyperscale build cycle.

2) Large, long‑duration customer contracts

Big multi‑year contracts with AI firms and major enterprises convert into contracted backlog and remaining performance obligations—numbers that management can point to as revenue visibility. Those contracts justify multi‑year capex because revenue can be amortized over time. Oracle’s reported surge in future contract revenue and Google Cloud’s reported $155 billion backlog are examples of this effect. However, large, concentrated contracts also introduce counterparty risk if a customer’s growth assumptions prove optimistic.
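As a stylized illustration of how a contracted backlog translates into revenue visibility, consider spreading the reported $155 billion Google Cloud backlog over an assumed recognition window (the even five‑year schedule below is an illustrative assumption, not a disclosed schedule):

```python
# Stylized revenue-recognition sketch. The $155B backlog is the
# reported Google Cloud figure; the even five-year recognition
# schedule is an illustrative assumption only.
backlog_busd = 155.0
recognition_years = 5  # assumed window

avg_annual_revenue = backlog_busd / recognition_years
print(f"Implied average recognition: ${avg_annual_revenue:.0f}B per year")
```

Front‑loading capex against a schedule like this is defensible only if counterparties actually consume capacity at the contracted rate, which is exactly the counterparty risk flagged above.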

3) Power and site economics have become central

AI compute density raises power requirements dramatically. Providers now design datacenters around gigawatt scale commitments and complex energy procurement strategies—on‑site generation, PPAs, and, in some cases, nuclear or dedicated gas turbines. These requirements lengthen timelines and increase the capital intensity of each new region. Investors increasingly view data centers as integrated energy infrastructure projects, not just server sheds.
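The gigawatt figures translate into striking energy bills. A rough sketch, using hypothetical round numbers (the capacity, load factor, and PPA price below are illustrative assumptions, not figures from any provider):

```python
# Hypothetical energy-cost sketch for a gigawatt-scale campus.
# All inputs are illustrative assumptions, not reported figures.
capacity_gw = 1.0       # contracted campus capacity
load_factor = 0.8       # assumed average utilization
ppa_price = 50.0        # $/MWh, assumed long-term PPA price
HOURS_PER_YEAR = 8760

mwh_per_year = capacity_gw * 1000 * load_factor * HOURS_PER_YEAR
annual_cost_musd = mwh_per_year * ppa_price / 1e6

print(f"Energy drawn: {mwh_per_year:,.0f} MWh/year")
print(f"Annual energy cost: ~${annual_cost_musd:,.0f}M")
```

At these run rates, a $10/MWh swing in the contracted power price moves the annual bill by roughly $70 million per gigawatt, which is why energy procurement has become a board‑level concern rather than a facilities detail.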

Technical breakdown: where the money goes

  • Short‑lived compute (GPUs/CPUs, NVMe pools): These assets are expensive today and can be replaced on a 2–5 year cadence. Cloud vendors are buying ahead to lock in supply and to meet immediate training and inference demand.
  • Datacenter campuses (land, buildings, networking): Long lead times, multi‑year construction cycles and power interconnection costs. New builds in the U.S., Europe and India have been singled out in corporate updates.
  • Power infrastructure (PPAs, substations, on‑site generation): Siting decisions now hinge on available grid capacity or the ability to create it, which adds complexity and capex.
  • Specialized cooling and interconnect fabrics: Immersion cooling, direct liquid cooling, and high‑bandwidth switch fabrics add both capital and operational costs to handle tightly coupled, large‑model training runs.
  • Software and services integration: While less capital‑intensive by line‑item, integration, orchestration and security stacks require engineering investment to make the hardware productive and monetizable.
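The split between the first two buckets drives earnings timing: short‑lived compute depreciates far faster than buildings. A minimal straight‑line sketch (the dollar amounts and useful lives are hypothetical, chosen only to match the 2–5 year cadence noted above):

```python
# Hypothetical straight-line depreciation for two capex buckets.
# Amounts and useful lives are illustrative assumptions only.
def annual_depreciation(cost_busd: float, life_years: int) -> float:
    """Straight-line annual depreciation in $B per year."""
    return cost_busd / life_years

gpu_capex = 50.0       # $B on short-lived compute (assumed)
facility_capex = 40.0  # $B on campuses and networking (assumed)

gpu_dep = annual_depreciation(gpu_capex, 4)             # ~4-year accelerator life
facility_dep = annual_depreciation(facility_capex, 20)  # ~20-year buildings

print(f"Annual depreciation: GPUs ${gpu_dep:.1f}B, facilities ${facility_dep:.1f}B")
```

Under these assumed lives, a dollar of accelerator capex hits the income statement five times faster than a dollar of building capex, which is why the composition of spending matters as much as its total.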

Financial and market implications

For the companies

  • Alphabet: A double‑edged proposition. Higher capex pressures free cash flow in the near term but supports product roadmaps (Vertex AI, TPUs, integrated search AI features) that can drive revenue and margin expansion over time if adoption continues. Management’s repeated guidance raises the question of diminishing returns—how much capacity can be monetized and how quickly?
  • Microsoft: The mix‑shift toward short‑lived assets suggests near‑term margin pressure but better ability to capture consumption economics if enterprise AI usage scales as hoped. CFO commentary frames these purchases as directly correlated to contracted backlog.
  • Oracle: A classic risk/reward trade‑off. The company’s pivot from software licensing to hyperscale cloud and AI infrastructure is being financed by significant debt markets activity. Large single‑customer deals can rapidly reprice the company’s valuation—but they also concentrate execution risk.

For investors

  • The market has already reacted to the scale of capex: some stocks rose on the promise of future AI monetization; others have sold off when investors worried about profitability and capital intensity. Short‑term share price moves reflect a mix of optimism about long‑term revenue ramp and anxiety over immediate free cash flow and leverage. Market participants are increasingly sensitive to any mismatch between capacity buildup and near‑term monetization.

For the broader economy and suppliers

  • Construction, energy, logistics and semiconductor suppliers stand to benefit from the buildout, creating a cascade of demand across multiple industries. At the same time, local infrastructure—electric grids, water supplies, permitting authorities—faces stress as multiple hyperscalers compete for limited resource headroom.

Risks, fragilities and downside scenarios

1) Demand overshoot and the “AI capex bubble” concern

Analysts caution that capex escalations could outpace real, monetizable demand for AI services. If enterprise adoption stalls or if model economics change (e.g., inference becomes cheaper but training demand flattens), hyperscalers may be left with underutilized, expensive assets and credit stress. This is the principal concern behind recent selloffs after aggressive capex announcements.

2) Concentration and counterparty risk

Multi‑year, concentrated contracts (for example, the large reported Oracle–OpenAI arrangement) can create concentrated exposure. If a major customer renegotiates, defaults, or changes strategy, the provider that front‑loaded capacity could be left with costly, hard‑to‑repurpose assets. Note that many of the reported multi‑hundred‑billion deal figures are based on people‑familiar‑with‑the‑matter reporting; they should be treated as reported facts of media coverage rather than fully audited contractual disclosures.

3) Financing risk and leverage

Companies that fund expansions through debt—bond offerings and leveraged balance‑sheet moves—face interest rate sensitivity and refinancing risk. Analyst models that forecast Oracle may have to borrow in the tens of billions annually to meet buildout expectations (e.g., an estimate of ~$25 billion per year, or ~$100 billion over four years) are speculative and hinge on assumptions about project scope, timing, and customer payment schedules. Those models indicate plausible funding paths but are not equivalent to confirmed borrowing plans.
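The interest‑rate sensitivity is easy to see. Taking the analyst scenario of ~$25 billion of new borrowing per year cited above and applying a range of assumed coupon rates (the rates are illustrative, not Oracle’s actual cost of debt):

```python
# Scenario sketch: annual interest cost on analyst-modeled borrowing.
# The $25B/year figure is the analyst estimate cited in the text;
# the coupon rates are illustrative assumptions.
annual_borrowing = 25.0  # $B per year, analyst scenario

for rate in (0.04, 0.05, 0.06):  # assumed coupon rates
    interest = annual_borrowing * rate
    print(f"At {rate:.0%}: ${interest:.2f}B/year interest on one tranche")
```

Each year of issuance at this scale would add roughly $1–1.5 billion of recurring annual interest expense under these assumed rates, so four years of such borrowing would compound into a material fixed charge against future cloud margins.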

4) Energy and permitting constraints

Many proposed AI datacenter projects hinge on securing sizable, long‑term energy commitments. Local opposition, environmental constraints, and grid limitations can slow or derail projects, creating timing mismatches between purchased hardware and usable sites. Such slippage increases costs and compresses returns on capital.

What to watch next (short list)

  • Corporate disclosures on capex composition: Look for management breakdowns of capex into short‑lived compute vs. long‑lived facilities to understand cash‑flow and margin timing.
  • Backlog and remaining performance obligations: These metrics give multi‑year visibility that can justify heavy upfront investment. Watch how companies disclose and reconcile these figures.
  • Power procurement and PPA announcements: New, large‑scale PPAs or on‑site generation projects indicate the seriousness and feasibility of campus builds.
  • GPU supply chains and vendor commitments: Changes in GPU availability, pricing, or new entrants (custom silicon) will materially alter capex timing and absolute spend on short‑lived assets.
  • Contractual confirmations: Major reported deals (for example, the Oracle–OpenAI arrangement) will need clearer contractual disclosures and payment schedules to move from headline to investable reality. Treat media reports and analyst extrapolations as useful signals but seek company confirmations or SEC‑filed disclosures where possible.

Conclusion: a measured verdict

The hyperscaler datacenter boom is both real and consequential. The largest cloud providers are responding to hard signals—enterprise adoption patterns, multi‑year contracts and an urgent need for more compute—by significantly ramping capital deployment. That response has legitimate economic rationale: capacity shortages are materially constraining revenue growth for cloud providers, and building ahead can lock in customers and revenue over time. Alphabet’s move to a $91–$93 billion capex envelope for 2025, Microsoft’s record quarterly capex, and Oracle’s debt‑financed expansion plans underscore how high the stakes have become.

Yet the pace and scale of spending raise clear risks. The industry now faces execution, financing, and demand‑realization challenges that resemble past technology booms—where infrastructure was built in expectation of demand that did not materialize exactly as planned. Analyst scenarios that project heavy borrowing needs for Oracle to meet oversized commitments should be treated as warnings rather than settled facts: they highlight stress points, not certainties. Investors and policymakers should therefore treat today’s capex surge as a plausible engine of growth with embedded fragility that must be managed through contract design, financing discipline, and transparent disclosure.
The next 12–24 months will be decisive. If these infrastructure investments enable durable, high‑margin AI services and real enterprise transformation, today’s capex will look prescient. If not, the sector may confront a forced re‑rating and write‑downs. Either way, the datacenter buildout now under way is a defining episode in the AI era—one where engineering, finance, and energy policy intersect at unprecedented scale.
Source: theregister.com Alphabet capex to treble in two years amid datacenter boom
 
