Both Meta and Microsoft are answering the same skeptical question from investors and communities: why double down on sprawling, power-hungry AI data centers right now — and do the numbers add up?
Background
The past six months have seen hyperscalers shift from tentative AI experiments to an all‑out infrastructure sprint. Microsoft reported record capital expenditures of $37.5 billion in its fiscal second quarter, a 66% year‑over‑year jump that the company says went largely to short‑lived compute assets such as GPUs and CPUs as well as long‑lived data‑center sites. Management told investors this pace could put Microsoft on track for roughly $120 billion of capex in the fiscal year ending June 30. (microsoft.com)

At the same time, Meta (owner of Facebook, Instagram and WhatsApp) posted a powerful advertising quarter — $59.9 billion in revenue for Q4 2025 — and announced capex guidance of $115–$135 billion for 2026, citing a major step‑up in spending on AI infrastructure and data centers to support its “Meta Superintelligence” efforts. Executives framed the investments as necessary to ship new AI products that will drive future monetization.
The scale is staggering: together, these two firms — plus peers like Google and Amazon — are pushing AI‑era data‑center and compute investment toward the trillions once multi‑year commitments across chip vendors, power projects and real estate are summed. That reality is reshaping markets, energy policy and local politics, and it’s forcing investors to reframe valuation models around long payback horizons and utilization metrics rather than short‑term margin optics.
What the headline numbers actually say
Microsoft: build fast, prioritize compute
- Q2 FY26 capex: $37.5 billion (up 66% YoY). Two‑thirds of that was for short‑lived assets (GPUs/CPUs) while the rest was for long‑lived real estate and leases. (microsoft.com)
- Quarterly revenue: $81.3 billion; Azure growth ~39%. Microsoft also disclosed a commercial backlog (remaining performance obligation) of $625 billion, with roughly 45% tied to OpenAI‑related commitments. (microsoft.com)
- Microsoft disclosed for the first time a user metric for its productivity AI: Microsoft 365 Copilot has 15 million paid seats, up strongly quarter‑over‑quarter — a core commercialization vector for enterprise AI. (microsoft.com)
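For a rough sense of scale, the sketch below annualizes those seats at Microsoft 365 Copilot's published list price of $30 per user per month; actual enterprise pricing, discounts and bundling vary, so treat the result as an illustrative run rate rather than reported revenue.

```python
# Illustrative Copilot run-rate sizing. The seat count is the disclosed figure;
# the per-seat price is the published list price, used here as a simplifying assumption.

PAID_SEATS = 15_000_000        # disclosed paid Microsoft 365 Copilot seats
LIST_PRICE_PER_MONTH = 30.0    # USD per user per month (list price; real deals vary)

annualized_run_rate = PAID_SEATS * LIST_PRICE_PER_MONTH * 12
print(f"Annualized run rate at list price: ${annualized_run_rate / 1e9:.1f}B")
# ~= $5.4B/year, useful context against roughly $120B of annual capex --
# which is why attach rate, ARPU and retention are the metrics to watch.
```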
Meta: catching up (and leaning in)
- Q4 2025 revenue: $59.9 billion (up ~24% YoY), driven by advertising demand and AI‑driven performance improvements. Net income and operating metrics strengthened even as expenses rose.
- Capex guidance for 2026: $115–$135 billion, described as investments in data centers, GPUs, network hardware (e.g., fiber deals) and talent for Meta Superintelligence Labs. Management says these investments are needed to deliver the next generation of AI products and ad performance gains.
- Meta has also been active on acquisitions and partnerships to accelerate model development and supply chains — moves that increase near‑term cash outlays but signal an attempt to move from follower to leader in model development.
Why executives say this spending is necessary
1) AI is capacity‑intensive and front‑loaded
Large language models and multimodal systems consume orders of magnitude more compute during training and still require dense GPU footprints for low‑latency inference at scale. Executives argue that the only way to reduce training time and control inference costs is to secure the hardware, networking and power now — and to lock in supply chains (chips, fiber, transformers) that have long lead times. Microsoft and Meta both described shortages in compute capacity as a binding constraint on product rollouts and revenue growth. (microsoft.com)

2) Productization of AI (Copilot, agents, personalized assistants)
Both firms are trying to move AI from demos to recurring SaaS revenue streams:
- Microsoft is packaging Copilot as a seat‑based add‑on to Microsoft 365 and reported early traction (15 million paid seats) as proof that enterprises will pay for productivity AI. This generates recurring, high‑margin revenue once infrastructure is in place. (microsoft.com)
- Meta aims to monetize better ads and personal superintelligence products — the precise details of which are still being revealed — but management claims AI‑driven ad performance was a key driver of the Q4 numbers.
3) Strategic positioning and defensibility
Owning data‑center capacity and custom networking gives these companies control over performance, latency and costs. It also secures leverage with chip vendors and energy suppliers — a crucial bargaining chip in an arms race for GPUs and developer talent. Both firms argue that the scale advantage will translate into differentiated products and margins over time. (microsoft.com)

The investor calculus: near‑term pain vs long‑term payoff
Executives and some analysts frame the spending as a long‑horizon, infrastructure‑heavy investment with multi‑year payback. But the market reaction shows the tension: Meta’s stock rose on its revenue beat and optimistic outlook for AI‑driven ad gains, whereas Microsoft’s stock dropped after the quarter as investors focused on capex and a slight deceleration in Azure growth. That split captures a nuanced view: growth plus clear monetization signals can soothe investors, but capex without immediate utilization certainty increases risk premia.

Key investor worries:
- CapEx outpacing revenue growth: When capex grows faster than the top line, payback periods lengthen and margins can be pressured in the medium term (a simple payback sketch follows this list). Analysts singled this out on Microsoft’s call. (microsoft.com)
- Capacity‑to‑demand timing mismatch: Building sites and energizing them takes months to years; if demand softens or alternative architectures (e.g., on‑prem custom chips, neoclouds) emerge, hyperscalers risk underutilized assets.
- Vendor concentration and pricing pressure: The GPU supply chain currently favors a handful of vendors. Price moves by Nvidia or memory/transformer supply disruptions can blow out budgets.
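To make that payback worry concrete, here is a minimal sketch under purely hypothetical growth assumptions; none of the inputs are company figures, and the point is the shape of the curves, not the numbers.

```python
# Minimal payback illustration: cumulative incremental revenue vs cumulative capex.
# All inputs are hypothetical placeholders, not company figures.

def payback_year(annual_capex, capex_growth, year1_revenue, revenue_growth, horizon=15):
    """Return the first year cumulative revenue covers cumulative capex, or None."""
    cum_capex = cum_revenue = 0.0
    capex, revenue = annual_capex, year1_revenue
    for year in range(1, horizon + 1):
        cum_capex += capex
        cum_revenue += revenue
        if cum_revenue >= cum_capex:
            return year
        capex *= 1 + capex_growth
        revenue *= 1 + revenue_growth
    return None

# Scenario A: AI revenue compounds faster than the capex that enables it.
print(payback_year(100, 0.10, 30, 0.40))   # -> 9 (pays back within the horizon)

# Scenario B: capex keeps outgrowing the revenue it enables.
print(payback_year(100, 0.40, 30, 0.10))   # -> None (never catches up in 15 years)
```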
Local impacts and the politics of power, water and land
The data‑center build‑out is not merely a corporate balance‑sheet story — it is a local community and policy story. As AI campuses appear in rural counties from Idaho to Louisiana, residents and officials have raised concerns about:
- Electricity prices and grid strain
- Water consumption for cooling
- Road, labor and housing pressures during construction
- Tax and incentive negotiations
Meta, like Microsoft, has moved aggressively to secure long‑term power and network capacity — from multi‑gigawatt power deals to partnerships on nuclear projects and fiber procurement. These deals are expensive and extend the effective capex footprint well beyond servers and buildings.
Why this matters for IT and infrastructure planners
- Local utilities and regional grid operators must plan for multi‑gigawatt load additions, which requires years of transmission and generation projects (a rough load‑sizing sketch follows this list).
- Corporations and municipalities will negotiate rate classes and infrastructure funding models that can shift costs between taxpayers, ratepayers and private firms.
- Enterprise IT buyers should treat hyperscaler capacity as both a benefit (scale, new AI services) and a potential constraint (capacity rationing, price fluctuation).
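As a back‑of‑the‑envelope illustration of why planners talk in gigawatts, the sketch below converts a hypothetical accelerator count into facility load using assumed per‑GPU draw, server overhead and PUE; none of these figures come from vendor specifications or announced sites.

```python
# Rough facility-load estimate for a hypothetical AI campus.
# Every figure below is an assumption for illustration, not vendor or site data.

GPUS = 1_000_000            # hypothetical accelerator count for a very large campus
WATTS_PER_GPU = 1_000       # assumed draw per accelerator (W)
SERVER_OVERHEAD = 1.5       # assumed multiplier for CPUs, networking, storage
PUE = 1.2                   # assumed power usage effectiveness (cooling, losses)

it_load_mw = GPUS * WATTS_PER_GPU * SERVER_OVERHEAD / 1e6
facility_mw = it_load_mw * PUE
print(f"IT load: ~{it_load_mw:,.0f} MW, facility load: ~{facility_mw:,.0f} MW "
      f"(~{facility_mw / 1000:.1f} GW)")
# ~1.8 GW for a single hypothetical campus -- the scale that forces utilities
# to plan transmission and generation years before energization.
```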
Strengths of the strategy
- Scale and lock‑in: Owning both software and the physical stack reduces latency and enables differentiated optimizations (custom silicon, networking). Microsoft’s Copilot attach rates and Azure growth are real early signs of monetization. (microsoft.com)
- Diversified monetization paths: Microsoft can monetize via seat‑based Copilot fees, Azure compute hours, and platform meters; Meta monetizes AI via ad price/performance gains and potential new consumer products. This diversity helps spread payback risk. (microsoft.com)
- Strategic supply‑chain moves: Long‑term fiber deals, power purchase agreements, and model‑lab investments reduce the risk of being supply‑constrained for crucial inputs. Meta’s Corning deal and Microsoft’s utility partnerships are examples.
Principal risks and blind spots
- Underutilized capacity risk: If enterprise AI adoption or pricing evolves more slowly than anticipated, hyperscalers could carry large, depreciating assets for years. This is the core investor fear expressed around Microsoft’s Q2 results.
- Concentration risk: Heavy dependence on a few chip suppliers or a single AI partner can create bargaining leverage for those suppliers and operational fragility for hyperscalers. Microsoft’s large OpenAI backlog is a case in point.
- Regulatory and political pushback: Local resistance to data‑center expansion, national debates over grid access, and potential tax or environmental limits could impose delays and additional costs. The “good neighbor” pledges acknowledge but do not eliminate this risk.
- Timing and market expectations: Investors have shorter horizons than the decades‑long depreciation cycles of data centers. Absent clear utilization signals (bookings, attach rates, revenue per GPU‑hour), stock moves can overreact. Analysts recommend watching leading indicators rather than raw capex.
How to judge progress: the metrics that matter
When capex is large and returns take time, the sensible framework is to monitor tight, leading indicators that show whether investment converts into sticky revenue. Watch these:
- Azure growth and the AI contribution to Azure revenue (percent of Azure driven by AI workloads).
- Microsoft 365 Copilot attach rate, daily active usage, conversations per user, and churn for paid seats. Microsoft disclosed 15 million paid seats as a new milestone — track attach and ARPU from here. (microsoft.com)
- Utilization and revenue per GPU‑hour (or equivalent) — essentially, how much revenue each GPU produces after capex and opex (see the worked sketch after this list).
- Commercial bookings and multi‑year commitments (RPO/contract backlog) that convert into realized revenue over time. Microsoft’s $625 billion backlog is material and should be decomposed by counterparty and duration. (microsoft.com)
- Capex split: short‑lived (compute) vs long‑lived (real estate). The mix determines how quickly investments need to monetize. (microsoft.com)
- Ad price and impression trends (are AI improvements sustainably increasing yield?)
- Progress on model‑to‑product conversion: are Meta’s internal models showing measurable performance advantages that advertisers are willing to pay for?
- Execution on large procurement deals (fiber, power) and whether those deals reduce marginal costs or simply lock in supply at higher absolute cash outflows.
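The revenue‑per‑GPU‑hour lens reduces to simple arithmetic; the sketch below uses entirely hypothetical prices, utilization and cost figures (hyperscalers do not disclose these at this granularity) to show how sensitive per‑accelerator economics are to utilization once depreciation of short‑lived hardware is counted.

```python
# Illustrative GPU-hour economics. Every input is a hypothetical placeholder;
# real hyperscaler figures are not disclosed at this level of detail.

HOURS_PER_YEAR = 8_760

gpu_capex = 35_000             # assumed all-in cost per accelerator, incl. networking share (USD)
useful_life_years = 4          # assumed depreciation horizon for short-lived compute
utilization = 0.60             # assumed share of hours sold or used internally
revenue_per_sold_hour = 3.00   # assumed blended price per sold GPU-hour (USD)
opex_per_hour = 0.60           # assumed power, cooling, ops per fleet-hour, paid even when idle (USD)

depreciation_per_hour = gpu_capex / (useful_life_years * HOURS_PER_YEAR)
effective_revenue_per_hour = revenue_per_sold_hour * utilization
margin_per_hour = effective_revenue_per_hour - opex_per_hour - depreciation_per_hour

print(f"Depreciation per hour:      ${depreciation_per_hour:.2f}")
print(f"Effective revenue per hour: ${effective_revenue_per_hour:.2f}")
print(f"Margin per hour:            ${margin_per_hour:.2f}")
# At 60% utilization the margin is ~$0.20/hour; rerun with utilization = 0.35
# and the same fleet loses money every hour -- the core underutilization risk.
```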
Practical takeaways for IT pros, local leaders and investors
- IT leaders should treat hyperscaler capacity as a strategic lever but remain cautious about SLAs during capacity constraints: expect rationing, premium pricing and multi‑vendor strategies to be rational responses. (microsoft.com)
- Local communities should negotiate concrete, enforceable commitments (not just pledges) around rates, water replenishment, workforce training and tax contributions before permitting large campuses. Microsoft’s Community‑First pledges are a step forward, but local officials must translate them into binding agreements.
- Investors must normalize expectations: hyperscaler AI is now a multi‑decade infrastructure play. Focus on attach rates, utilization economics and long‑term contract rollups, not only headline capex. Some analysts argue the build‑out is bold and necessary; others highlight the timing and utilization uncertainty that justifies temporary valuation discounting.
Conclusion — a disciplined “yes” with caveats
Microsoft and Meta have chosen scale and speed as their playbooks for the AI era. Their logic is internally consistent: secure compute and connectivity now, productize AI (Copilots, agents, ad‑optimization), then monetize at scale. Early signals — Copilot seat growth, strong ad revenue and growing backlogs — justify optimism that the investments can be productive. (microsoft.com)

But the strategy is capital‑intensive, politically charged and exposed to timing risk. The near term will be noisy: stock volatility, local pushback, supply‑chain bottlenecks and intense vendor competition. The difference between a visionary bet and a mistake will be measured in utilization and revenue per unit of compute over the next 12–36 months. Watch the leading indicators — Azure AI revenue mix, Copilot ARPU and retention, commercial bookings conversion, and GPU‑hour economics — and treat corporate pledges (like Microsoft’s “good neighbor” plan) as risk‑mitigation moves that still require enforcement and community scrutiny. (microsoft.com)
This is not simply an IT infrastructure story; it’s a macroeconomic and civic one. As Big Tech turns capital markets into grid and fiber markets, the outcome will shape where jobs, energy projects and even tax revenues land for years to come. For enterprises, communities and investors alike, the right posture is pragmatic curiosity: encourage the build‑out where it aligns with local priorities, demand transparent metrics that prove the economics, and stress‑test bets against prolonged underutilization. The AI data‑center era is arriving — whether it pays off will depend on execution, utilization and a string of local and global choices that remain very much in play.
Source: CoStar https://www.costar.com/article/269837260/meta-microsoft-execs-defend-move-to-double-down-on-record-data-center-spending