Data centers are drinking as much water as small cities, and that thirsty footprint is only set to grow as artificial intelligence and hyperscale computing accelerate demand — but the obvious fixes (shutting down servers or moving to cooler climates) miss a more powerful lever: where facilities are sited and how they’re powered and cooled. Recent analyses show that indirect water tied to electricity generation dwarfs on‑site cooling use, and that careful siting, closed‑loop cooling, non‑potable sourcing, and policy changes together can shrink a data center’s water and climate footprint by orders of magnitude.
Background: the scale of the problem
Data centers cool heat-generating servers with water in many designs, but that direct consumption is only half the picture — the electricity used to run racks also carries a water footprint because most power generation consumes water, especially thermoelectric plants and reservoir-based hydro. In 2023 a major national analysis estimated U.S. data centers used roughly 17 billion gallons of water on‑site and 211 billion gallons indirectly via electricity generation — an indirect-to-direct ratio of more than ten to one. Those central numbers are not theoretical; they come from a national lab assessment that underpins much of the current policy discussion. That scale matters: with the industry expanding to serve AI — which concentrates sustained GPU workloads and higher thermal density — models project dramatic increases in both electricity and water demand. The Berkeley Lab analysis and related reporting project that data‑center electricity demand could rise from the low single digits of U.S. consumption today toward double‑digit percentages by the end of the decade, and that water withdrawals tied to power and cooling could multiply unless action is taken.
Why indirect water use dominates
Direct cooling vs. embedded (grid) water
- Direct use: evaporative cooling towers, some closed-loop systems, and auxiliary facilities draw municipal or on‑site water for cooling and humidification.
- Indirect use: water consumed at power plants to produce the electricity that powers servers and cooling systems. Thermoelectric plants use water for condensers; hydro reservoirs lose water to evaporation; even some renewables have lifecycle water impacts.
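The two categories above can be folded into a single accounting identity: total water footprint equals on-site draw plus electricity consumed times the grid's water intensity. A minimal sketch, assuming a hypothetical `total_water_gal` helper (not from the source), also sanity-checks the roughly ten-to-one ratio implied by the cited 17 and 211 billion gallon figures:

```python
# Illustrative direct-plus-indirect water accounting. The helper and its
# inputs are assumptions for illustration, not figures from the article.

def total_water_gal(direct_gal: float, energy_kwh: float,
                    grid_water_intensity_gal_per_kwh: float) -> float:
    """On-site cooling water plus water embedded in the electricity used."""
    return direct_gal + energy_kwh * grid_water_intensity_gal_per_kwh

# National-scale sanity check using the figures cited above:
# ~17 billion gallons direct vs. ~211 billion gallons indirect.
direct_gal = 17e9
indirect_gal = 211e9
ratio = indirect_gal / direct_gal
print(f"indirect-to-direct ratio: {ratio:.1f} to 1")  # ≈ 12.4 to 1
```

The same identity is why grid choice matters so much: the `grid_water_intensity_gal_per_kwh` term multiplies every kilowatt-hour the facility draws.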
Hydropower is low‑carbon but not always low‑water
Hydropower is often sold as the low‑carbon answer, and in regions with abundant hydro the electricity price can be attractive — yet reservoirs’ surface evaporation is a real consumptive loss that must be counted. That means a data center that appears low‑carbon at the meter may nonetheless increase consumptive water loss regionally, a nuance too often omitted from simple sustainability claims. Recent analyses warn that Pacific Northwest hydropower, while cheap, can raise water stress when the local hydrology and reservoir footprints are considered.
Location: the single most powerful decision
Cornell’s finding: location can change footprints by orders of magnitude
A Cornell University analysis led by energy‑systems researchers shows that location choice alone can change a combined water-and-carbon footprint by as much as two orders of magnitude. The reason is straightforward: some regions pair abundant, low‑water renewable electricity (wind/solar) with low local water stress, while others depend on water‑intensive thermal or reservoir generation. In that model, parts of West Texas, plus states like Nebraska, South Dakota, and Montana, score consistently well because they combine strong wind/solar resources with low grid water intensity and sparse population. That does not mean these places are risk‑free — groundwater, permitting, and local community impacts still matter — but from a pure grid‑water perspective the gains are large.
Where data centers are today — and why
Historically, data centers clustered for fiber density, tax incentives, and proximity to workforce or government: Virginia, Northern California, and parts of Texas host large clusters. Texas today is home to hundreds of centers — attractive for land, transmission corridors, and a pro‑development policy environment — but many new builds are in water‑stressed regions, pushing local utilities and aquifers. The mismatch between current siting incentives and “best environmental siting” is a policy gap the industry and regulators must close.
Cooling technologies: tradeoffs and opportunities
Evaporative towers (traditional): cheap energy, thirsty climate
Evaporative cooling towers historically deliver low initial capital and modest operational energy cost in hot climates, but they rely on continual water makeup to replace evaporative loss. In water‑scarce regions this creates obvious municipal supply risks and political friction.
Air economization and air‑first designs: minimize potable draws
Air‑economization (free‑air cooling) uses ambient air when conditions permit and can eliminate most potable water use for large parts of the year in cool/dry climates. The “air‑first” strategy — default to air cooling and only use water in exceptional conditions — is widely promoted and can be codified in permits to protect communities. However, air economization’s effectiveness depends on local humidity and seasonal extremes, and unprecedented heat waves can force fallback modes.
Closed‑loop chip‑level liquid cooling and immersion: near‑zero evaporation, higher electricity
Chip‑level closed‑loop liquid cooling and immersion systems dramatically reduce evaporative water losses because coolant circulates in a sealed circuit and requires little makeup after initial filling. Microsoft and others have announced next‑generation designs that target near‑zero evaporative water use at the rack level and plan pilots in Phoenix and Mount Pleasant. Those designs can cut local potable water requirements per facility by tens to hundreds of millions of liters per year. But there’s a tradeoff: closed‑loop mechanical chillers often increase electrical demand slightly compared with evaporative towers in some climates, and higher electricity raises indirect water and carbon footprints if the grid is not clean. Microsoft’s engineering teams acknowledge this energy tradeoff and emphasize pairing closed‑loop cooling with low‑water, low‑carbon power procurement.
Immersion cooling — promise and constraints
Immersion cooling submerges electronics in dielectric fluids and eliminates fans and many water‑based components, achieving superior thermal performance and lower operating energy for dense racks. Adoption faces hardware redesign, maintenance training, and considerations about coolant lifecycle and leakage risk. Where density and waste‑heat reuse are priorities, immersion can be attractive — but like closed‑loop liquid cooling it must be combined with clean power to avoid shifting the burden to the grid.
Beyond hardware: three operational levers that shrink footprints
- Power procurement and timing
- Pairing sites with firm, low‑water power (wind/solar plus storage or low‑water firming) reduces indirect water. Long‑term PPAs tied to local renewable projects, and contracts that ensure additionality and firming, are more effective than mere certificate purchases.
- Time‑shifting heavy training to periods of low grid water intensity (and low marginal carbon) further reduces embedded water and emissions.
- Model efficiency and software optimization
- Techniques such as quantization, pruning, distillation, and specialized lightweight models can cut compute by orders of magnitude for routine tasks. Reducing FLOPs per useful output is one of the most cost‑effective, low‑regret ways to cut both energy and water footprints.
- Heat reuse and industrial symbiosis
- Waste heat capture and delivery into district heating or industrial process loops can turn a waste stream into a community asset and recover value, changing the lifecycle calculus for cooling choices. Google’s Hamina example and other district heating synergies illustrate this potential when local partners exist.
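The time-shifting lever above can be sketched as a simple scheduling problem: given hourly grid water intensity, place a deferrable training job in the contiguous window with the least embedded water. The hourly figures below are invented for illustration; a real scheduler would consume utility or ISO data feeds.

```python
# Sketch of time-shifting a deferrable workload to low water-intensity hours.
# Gallons of water embedded per kWh, by hour of day (hypothetical values).
hourly_water_intensity = [0.9, 0.8, 0.7, 0.6, 0.6, 0.7, 1.1, 1.4,
                          1.6, 1.5, 1.3, 1.2, 1.1, 1.2, 1.4, 1.6,
                          1.8, 1.9, 1.7, 1.4, 1.2, 1.1, 1.0, 0.9]

def cheapest_window(intensity: list, window_hours: int) -> int:
    """Return the start hour of the contiguous window with the least
    total embedded water."""
    totals = [sum(intensity[h:h + window_hours])
              for h in range(len(intensity) - window_hours + 1)]
    return min(range(len(totals)), key=totals.__getitem__)

start = cheapest_window(hourly_water_intensity, 4)
print(f"run 4-hour training job starting at hour {start}")  # hour 2
```

The same window-minimization logic applies to marginal carbon intensity, so one scheduler can optimize both signals when they are reported on the same cadence.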
Policy and governance: closing the accountability gap
Metering, reporting, and enforceable permits
Municipalities and regulators often lack the data to verify corporate claims. Requiring dedicated volumetric metering, public reporting of Water Usage Effectiveness (WUE) and Power Usage Effectiveness (PUE), and separating construction vs operational draws in permits are core recommendations that recurring coverage and expert panels endorse. Without metered, auditable disclosures, communities cannot hold operators to conditional commitments such as “municipal water only above X°C.”
Conditioning incentives on audited environmental performance
Tax incentives, land deals, and public procurement should be tied to verified reductions in water per unit compute and lifecycle carbon metrics, not just job counts. Conditioning public support on technology roadmaps (e.g., transitions to closed‑loop cooling or non‑potable sourcing within set timelines) aligns public policy with stewardship goals and reduces the risk of stranded community costs later.
Regional planning and grid coordination
Utilities and planners must consider data centers as system‑level assets when approving interconnections: large, synchronous loads affect capacity, ramping, and the fuel mix used to firm renewables. Coordinated planning can favor siting that minimizes water and emissions impacts and reduce the temptation to rely on local fossil firming during peaks.
Groundwater and non‑potable sourcing: hidden local stakes
Even if a region’s grid has low water intensity, on‑site cooling often draws municipal or groundwater resources. West Texas, for example, scores well on grid water intensity because of abundant wind, but reliance on aquifers or local groundwater for evaporative systems raises long‑term sustainability questions. Non‑potable alternatives — treated wastewater, industrial effluent, or onsite recycled condensate — can displace potable supplies, but require treatment, regulatory approval, and community buy‑in. Any siting strategy must reconcile grid benefits with local hydrological realities.
Corporate commitments: progress, spin, and the need for independent verification
Major cloud providers have announced ambitious targets — “water positive” pledges, near‑zero evaporative designs, or fleet-level WUE improvements. The technical innovations are real and meaningful, but experience shows that promises without meter‑backed reporting and third‑party audits are insufficient. Several notable cases globally reveal gaps between initial low estimates and later measured use during heatwaves or ramp‑ups. That’s why auditors, municipalities, and civil society stress binding, permit‑level commitments plus public, facility‑level metrics.
Practical recommendations for operators and municipalities
- Require dedicated volumetric meters for any large potable allocation and publish the data (hourly or daily).
- Make air‑first, non‑potable sourcing, and closed‑loop transitions enforceable in permits with clear thresholds and audit windows.
- Condition public incentives on audited lifecycle WUE and PUE metrics and on demonstrable plans for local non‑potable sourcing or closed‑loop cooling.
- Invest in model efficiency R&D and require procurement programs to prefer providers that disclose audited compute‑per‑unit metrics.
- For operators: choose locations that align with both grid‑level water and carbon advantages and local water sustainability; pair closed‑loop cooling with firm low‑water power procurement.
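The audited metrics named in these recommendations have simple, standard definitions: WUE is site water in liters divided by IT energy in kWh, and PUE is total facility energy divided by IT energy. A minimal sketch with invented facility-year numbers:

```python
# Standard industry formulas; the example inputs are hypothetical.

def wue(site_water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: liters of site water per kWh of IT energy."""
    return site_water_liters / it_energy_kwh

def pue(total_facility_kwh: float, it_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_energy_kwh

# Example facility-year (invented numbers):
print(f"WUE = {wue(1.8e8, 1.0e8):.2f} L/kWh")  # 1.80
print(f"PUE = {pue(1.25e8, 1.0e8):.2f}")       # 1.25
```

Publishing both figures from dedicated meters, rather than modeled estimates, is what makes the permit conditions above auditable.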
Tradeoffs and risks: the hard truths
- Energy vs. water tradeoff: moving from evaporative cooling to electric chillers reduces local water but increases electricity. If that electricity is dirty or water‑intensive at the source, net gains can disappear; siting and procurement resolve this tension.
- Rebound effect: efficiency can lower unit costs of compute, encouraging more consumption and potentially increasing absolute water and energy use if capacity grows unchecked.
- Local social license: even low‑water sites can strain municipal services (roads, workforce housing, school budgets) if community agreements are weak.
- Groundwater depletion: tapping deep aquifers may be feasible short‑term but unsustainable without responsible aquifer management and monitoring.
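The energy-vs-water tradeoff in the first bullet can be made concrete: switching from evaporative towers to electric chillers cuts on-site water but raises electricity use, so the net result flips with grid water intensity. Every number below is an assumption made up for the comparison, not a measured figure.

```python
# Net water = on-site draw + electricity x grid water intensity (hypothetical).

def net_water(onsite_gal: float, energy_kwh: float,
              grid_gal_per_kwh: float) -> float:
    """On-site water plus water embedded in the electricity consumed."""
    return onsite_gal + energy_kwh * grid_gal_per_kwh

# Same facility, two cooling designs (invented annual figures):
# evaporative uses more on-site water but less electricity; chiller the reverse.
evap_clean    = net_water(5.0e6, 1.0e7, grid_gal_per_kwh=0.5)  # low-water grid
chiller_clean = net_water(0.2e6, 1.5e7, grid_gal_per_kwh=0.5)

evap_dirty    = net_water(5.0e6, 1.0e7, grid_gal_per_kwh=1.5)  # water-heavy grid
chiller_dirty = net_water(0.2e6, 1.5e7, grid_gal_per_kwh=1.5)

print(chiller_clean < evap_clean)   # True: chiller wins on a low-water grid
print(chiller_dirty < evap_dirty)   # False: it loses on a water-intensive grid
```

With these assumed numbers the break-even sits near 1 gal/kWh of grid water intensity, which is why the article insists siting and procurement must resolve the tension rather than hardware alone.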
Case studies and emerging wins
- Microsoft’s next‑generation closed‑loop, chip‑level cooling design promises significant reductions in evaporative loss and is being rolled into new builds, with pilots planned for 2026, a company‑reported WUE improvement target, and a publicly reported fleet‑average WUE. The move demonstrates how engineering and procurement must co‑evolve.
- Google’s Hamina facility and other European examples show how seaside or district‑heating synergies and seawater cooling can reduce both emissions and local potable draw via industrial symbiosis; these projects require specific geographic and regulatory contexts to work.
- Research teams at Cornell and Purdue have contributed frameworks that move the debate beyond single metrics: Cornell models illustrate how location + grid mix determine water and carbon outcomes; Purdue and HotCarbon work stress water‑stress weighted metrics so that a unit of water in a high‑stress region carries greater weight. Together, these approaches help policymakers and operators rank and compare sites more meaningfully.
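A water-stress weighted metric in the spirit of the Purdue and HotCarbon work described above can be sketched in a few lines: a gallon withdrawn in a high-stress basin counts for more than a gallon in a wet region. The weights and volumes here are invented for illustration.

```python
# Stress-weighted water accounting (hypothetical weights and withdrawals).

def stress_weighted_water(withdrawals_gal: dict, stress_weight: dict) -> float:
    """Sum withdrawals, each scaled by a per-region water-stress weight (>= 1)."""
    return sum(gal * stress_weight[region]
               for region, gal in withdrawals_gal.items())

withdrawals = {"arid_site": 1.0e6, "humid_site": 3.0e6}
weights     = {"arid_site": 5.0,   "humid_site": 1.0}

raw = sum(withdrawals.values())
weighted = stress_weighted_water(withdrawals, weights)
print(f"raw: {raw:.1e} gal, stress-weighted: {weighted:.1e}")  # 4.0e6 vs 8.0e6
```

Under this weighting the smaller arid-site withdrawal dominates the score, which is exactly the ranking behavior the researchers argue unweighted totals obscure.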
Conclusion: align incentives, design better, and site smarter
Data centers are the new heavy industry: capital‑intensive, geographically concentrated, and powerful determinants of local resource stress and national grid dynamics. The good news is that solutions exist at every layer — hardware, software, procurement, and policy — and they are complementary. Siting servers near dry renewables, investing in closed‑loop cooling, switching to non‑potable water sources, enforcing metered reporting, and reducing compute per useful output together reshape the industry’s trajectory.
But engineering alone won’t be enough: governments must update permitting, condition incentives on verified environmental performance, and require transparent, auditable metrics. Operators must pair technical innovations with firm, locality‑aware procurement choices and community agreements. When those pieces line up — smart siting, clean power, low‑evaporation cooling, and rigorous transparency — data centers can continue to scale AI and cloud services without draining cities, aquifers, or the climate budget.
Source: WhoWhatWhy, “How to Make Data Centers Less Thirsty”