Tech companies’ public case that artificial intelligence can fix the climate now faces a sustained and systematic credibility test: a new analysis led by energy analyst Ketan Joshi finds that many of the green claims being used to defend rapidly expanding AI infrastructure are vague, poorly evidenced, and—critically—often conflate fundamentally different types of AI. The report, commissioned by advocacy groups including Beyond Fossil Fuels and Climate Action Against Disinformation and released during the AI Impact Summit in Delhi, reviewed 154 statements and concluded it could not identify a single instance where mainstream generative tools such as Google’s Gemini or Microsoft’s Copilot are delivering a material, verifiable, and substantial reduction in greenhouse-gas emissions. This challenges a high-profile narrative from big tech that positions AI as a major enabler of decarbonisation while the compute it relies on is driving a steep rise in electricity demand.
Background
The debate sits at the intersection of two fast-moving trends: the explosive adoption of generative AI—large language models (LLMs), image and video synthesis, and multimodal agents—and the intensifying global push to slash greenhouse-gas emissions. Tech companies have leaned into the promise that AI tools can make energy systems, industrial processes, buildings and transport more efficient. These claims vary widely, from specific operational improvements (predictive maintenance, optimised logistics) to sweeping estimates that AI could mitigate a significant share of global emissions by 2030.
But the context has changed quickly. Data-centre electricity consumption has surged with the rise of model training and inference workloads, and reputable energy forecasters now project dramatic increases in data-centre power demand over the rest of this decade. The International Energy Agency (IEA) finds that global data-centre electricity use, about 415 terawatt-hours (TWh) in 2024 (roughly 1.5% of global electricity), could more than double to around 945 TWh by 2030, largely driven by AI workloads; in many advanced economies, data centres will account for a disproportionate share of electricity demand growth.
At the same time, market analyses from independent energy researchers such as BloombergNEF warn that data-centre demand is poised to reshape national electricity systems: in the United States, BloombergNEF projects data centres could rise from roughly 3.5% of U.S. electricity demand today to about 8.6% by 2035, more than doubling their share and stressing grids and energy supply planning.
What the analysis found: claims, evidence and “muddling”
Most green claims refer to traditional AI, not generative systems
One of the clearest patterns in Joshi’s analysis is that many public claims of AI’s climate promise refer to “traditional” machine-learning applications (predictive models, optimisation algorithms and narrowly scoped industrial controls) even as the industry’s public face has shifted toward generative AI, the compute-hungry models behind chatbots and image/video synthesis. That matters because the energy profiles and deployment pathways are very different: narrow predictive models can run on modest compute and produce targeted emissions reductions, whereas large generative models require enormous training and inference infrastructure that drives rapid data-centre expansion. The analysis flags this conflation as a core tactic that obscures the net environmental impact.
Weak evidence base and over-reliance on corporate claims
The report found that only 26% of the green claims it examined cited published academic research, while 36% cited no evidence at all. Many influential statements trace back to consulting or corporate blog posts rather than peer-reviewed modelling with transparent methodology. That gap undercuts the ability of policymakers, investors and the public to determine whether asserted emissions savings are credible, scalable, or net of the additional emissions caused by AI infrastructure itself.
No verifiable examples for mainstream generative tools
Perhaps the most headline-grabbing conclusion: among the 154 statements analysed, the authors did not find a single instance where popular generative AI products were demonstrably delivering large-scale, independently verifiable emissions reductions. Where corporate reports point to emissions “savings” enabled by AI, the analysis often finds those claims rest on small pilots, optimistic extrapolations, or internal client case studies rather than independent validation.
Where the “5–10% by 2030” figure came from — and why it’s important to trace origins
A now-familiar claim—that AI could mitigate 5–10% of global greenhouse-gas emissions by 2030—has been widely circulated by companies and in summit presentations. Tracking that number reveals how a soft, consultancy-derived estimate can migrate into mainstream policy discourse.
- Google cited a collaborative report with Boston Consulting Group (BCG) claiming AI has the potential to mitigate 5–10% of global emissions by 2030. Google’s public communications have repeatedly used that headline number in sustainability messaging.
- The figure can be traced back to earlier BCG material, including a 2021 BCG piece that states, in broad terms, “In our experience with clients, using AI can achieve overall emissions reductions of 5% to 10%.” That blog-style phrasing is explicit about relying on client experience rather than a transparent, reproducible modelling exercise.
The energy reality: data centres, LLMs and rising electricity demand
The IEA’s baseline projection
The IEA’s “Energy and AI” analysis is the most detailed public modelling so far of the AI–energy nexus. Its principal findings matter for any assessment of AI’s climate credentials (a back-of-envelope check follows the list):
- Data-centre electricity consumption was roughly 415 TWh in 2024 (about 1.5% of global electricity).
- Under current trajectories, data-centre electricity demand could more than double to around 945 TWh by 2030, driven mainly by AI training and inference workloads. That’s roughly the present electricity use of a country the size of Japan.
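Those two data points imply a steep curve, and the arithmetic is easy to check. A minimal sketch in Python, taking the IEA’s 2024 and 2030 figures as given and adding one assumption of our own (not the IEA’s): that total global electricity demand grows roughly 2.5% a year.

```python
# Sanity check on the IEA trajectory: 415 TWh (2024) -> 945 TWh (2030).
dc_2024_twh = 415.0
dc_2030_twh = 945.0
years = 2030 - 2024

# Implied compound annual growth rate for data-centre electricity use.
cagr = (dc_2030_twh / dc_2024_twh) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")          # ~14.7% per year

# Implied 2030 share of global electricity, assuming (hypothetically)
# total demand grows ~2.5% per year from the 2024 base implied by the
# "1.5% of global electricity" figure.
global_2024_twh = dc_2024_twh / 0.015       # ~27,700 TWh
global_2030_twh = global_2024_twh * 1.025 ** years
print(f"Implied 2030 share: {dc_2030_twh / global_2030_twh:.1%}")  # ~2.9%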
Independent market forecasts reinforce the scale of demand
BloombergNEF (BNEF) concurs that data-centre demand is set to become a structural driver of electricity consumption. Its analysis projects that U.S. data centres could account for about 8.6% of national electricity demand by 2035, up from roughly 3.5% in 2024—an outcome that would exert major pressure on utility planning and generation mix choices. BNEF links much of this growth to hyperscale AI facilities and the concentration of compute power in a handful of cloud providers.
What “a text query is a lightbulb-minute” hides
Industry and some researchers have attempted to contextualise per-inference energy use—arguing a single chat or translation request uses little energy, comparable to running a household lightbulb for a minute. That framing is technically correct for a simple inference on efficient infrastructure, but it is also misleading at scale: the billions of inferences, the repeated retraining of models, the rise of video and multimodal generation (which are orders of magnitude more computationally expensive), and the infrastructure footprint for storage and networking compound into systemic demand. In short: tiny per-query footprints do not guarantee a small system-wide impact when deployment multiplies and complexity escalates.
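The point is easiest to see with numbers. The sketch below multiplies a tiny per-query figure by plausible deployment volumes; every input is an illustrative assumption, not a measured value.

```python
# Aggregate energy from "tiny" per-query footprints (all inputs hypothetical).
wh_per_text_query = 0.3        # assumed energy for one simple chat inference
video_cost_multiplier = 1_000  # assumed factor for video/multimodal generation
text_queries_per_day = 2e9     # assumed global daily text-query volume
video_jobs_per_day = 5e6       # assumed global daily video-generation jobs

daily_wh = (text_queries_per_day * wh_per_text_query
            + video_jobs_per_day * wh_per_text_query * video_cost_multiplier)
annual_twh = daily_wh * 365 / 1e12  # Wh -> TWh

print(f"Annual inference energy: {annual_twh:.2f} TWh")
# ~0.77 TWh/year on these assumptions, before counting training,
# storage, networking, cooling overheads or growth in volume.
```

Even on these inputs the total approaches a terawatt-hour per year, and the hypothetical video workloads, four hundred times rarer than the text queries, dominate it: the complexity-escalation problem described above, in miniature.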
Evidence gaps, methodology problems and the risk of greenwashing
Where the evidence is thin
The Joshi analysis documents recurring weaknesses in corporate and consultancy claims:
- Lack of transparent methodology or reproducible calculations.
- Reliance on selective pilots or vendor-led case studies that are not independently audited.
- Failure to report net impacts (i.e., claimed emissions reductions often neglect to include the emissions caused by additional compute, data-centre build-out, and hardware manufacturing).
- Use of aggregate, aspirational language—e.g., “AI can help accelerate decarbonisation”—without quantifying baselines, offsets, or alternative non-AI interventions.
Why “muddling” AI types matters
Grouping together distinct AI technologies (predictive ML models versus large generative models) allows companies to point to low-energy wins while continuing to build and deploy energy-heavy systems. That rhetorical conflation risks presenting a package deal to regulators and the public: accept increased compute and data-centre growth because somewhere in our AI toolkit we will realise emissions savings. The new report calls that a diversionary tactic akin to the fossil-fuel industry’s classic greenwashing moves: promoting a small clean investment while expanding the core polluting business.
The rebound problem and systems thinking
Even when an AI model reduces intensity—say, optimising a logistics network—the rebound effect can erode or reverse benefits. Efficiency gains often lower operating costs, which can spur increased production, consumption or additional digital services that use energy. Without system-level accounting (including life-cycle emissions from servers, chips and data-centre construction), claims of net emissions avoided are fragile. Rigorous, independent life-cycle assessments (LCAs) and standardised reporting are still scarce.
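To make the rebound mechanism concrete, here is a minimal worked example with hypothetical numbers: an AI optimiser cuts energy per delivery by 15%, but cheaper deliveries induce 10% more of them.

```python
# Rebound effect on a logistics optimisation (all numbers hypothetical).
baseline_deliveries = 1_000_000
kwh_per_delivery = 5.0

intensity_gain = 0.15   # assumed per-delivery energy cut from the AI model
demand_rebound = 0.10   # assumed extra deliveries induced by lower costs

before_kwh = baseline_deliveries * kwh_per_delivery
after_kwh = (baseline_deliveries * (1 + demand_rebound)
             * kwh_per_delivery * (1 - intensity_gain))

net_saving = 1 - after_kwh / before_kwh
print(f"Headline intensity gain: {intensity_gain:.0%}")   # 15%
print(f"Net system-level saving: {net_saving:.1%}")       # ~6.5%
```

If induced demand ever outgrows the intensity gain, the headline “saving” flips to a net increase; that is why system-level accounting, not per-unit intensity, is the right test.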
Industry responses and the limits of voluntary transparency
Major cloud providers and hyperscalers have published sustainability goals, carbon-intensity metrics, and case studies showing AI-enabled optimisations in grids, buildings and manufacturing. Google, for example, has argued its estimated emissions reductions are based on a “robust substantiation process” and has published principles and methodologies for assessing AI’s climate benefits. But critics point out that company methodologies are often opaque, framed to highlight favourable examples while omitting countervailing impacts, and that corporate reports sometimes republish consulting estimates without independent validation.
Microsoft and other big cloud providers have been less willing to discuss the net accounting challenges in public fora. Where companies disclose more granular data—region-by-region energy mixes, hourly carbon-intensity metrics, or model-specific power consumption—the picture becomes messier and the headline “AI will save X% of emissions” claims harder to sustain without clear caveats.
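Why hourly data makes the picture “messier” can be shown with a toy calculation. The sketch below uses invented numbers for a load that runs mostly at night on a hypothetical grid whose night-time generation is dirtier.

```python
# Hourly vs. annual-average carbon accounting (illustrative numbers only).
# (label, hours per day, load in MW, grid intensity in gCO2/kWh)
profile = [("day", 12, 1.0, 120.0),
           ("night", 12, 3.0, 600.0)]

# MWh * gCO2/kWh = kgCO2; divide by 1000 for tonnes.
load_mwh = sum(h * mw for _, h, mw, _ in profile)
hourly_tco2 = sum(h * mw * ci / 1000 for _, h, mw, ci in profile)

# Naive accounting: total load times the time-weighted average intensity.
avg_ci = sum(h * ci for _, h, _, ci in profile) / sum(h for _, h, _, _ in profile)
naive_tco2 = load_mwh * avg_ci / 1000

print(f"Hourly-matched emissions:  {hourly_tco2:.1f} tCO2/day")
print(f"Annual-average accounting: {naive_tco2:.1f} tCO2/day")  # understated
```

On these invented numbers, average-based accounting understates actual emissions by about a quarter; the bias can run either way, which is precisely why granular disclosure changes headline claims.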
Policy implications and what responsible governance should demand
The gap between corporate messaging and independently verifiable evidence suggests several concrete policy and governance responses that would improve public understanding and reduce the risk of greenwashing:
- Standardised reporting: Mandate life-cycle emissions accounting and standard metrics for AI services, including embodied emissions from hardware manufacture and disposal, training and inference energy use, and energy sourcing. Public, machine-readable disclosures should be the default (a hypothetical sketch of such a record follows this list).
- Independent auditing: Require third-party verification of claims that are used in regulatory, procurement or investment decisions—especially when companies cite those claims to justify capacity expansion or permitting.
- Grid-aware siting and permitting: Treat hyperscale AI facilities like other major electricity consumers—require integrated resource planning that assesses local grid capacity, dispatchable generation needs, and the carbon intensity of additional power supplies.
- Demand-side controls: Incentivise or require energy-efficiency standards for AI compute (e.g., efficiency targets for training jobs, preference for lower-carbon inference deployment models).
- R&D and procurement leverage: Use government and corporate procurement to favour energy-efficient model architectures and to fund research into lower-cost, lower-carbon accelerators and cooling technologies.
- Prudential limits and moratoria where appropriate: In jurisdictions facing rapid, unplanned grid strain, temporary moratoria or stricter permitting can prevent lock-in of fossil-fuelled generation and allow time for credible carbon accounting frameworks to be adopted.
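No such reporting standard exists yet, so any concrete format is speculative. As a purely hypothetical illustration of the first recommendation above, the sketch below shows the kinds of fields a machine-readable disclosure record might carry; every field name here is invented for this example.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class AIServiceEmissionsDisclosure:
    """Hypothetical per-service disclosure record (illustrative only)."""
    service: str
    reporting_period: str
    training_energy_mwh: float         # operational energy for training runs
    inference_energy_mwh: float        # operational energy for serving
    embodied_emissions_tco2e: float    # hardware manufacture and disposal
    grid_region: str
    load_weighted_intensity_gco2_kwh: float  # hourly-matched, not annual-average
    third_party_verifier: Optional[str]      # None = unverified claim

record = AIServiceEmissionsDisclosure(
    service="example-llm-api",         # hypothetical service name
    reporting_period="2025-Q1",
    training_energy_mwh=12_000.0,
    inference_energy_mwh=30_500.0,
    embodied_emissions_tco2e=4_200.0,
    grid_region="EU-NORDIC",
    load_weighted_intensity_gco2_kwh=95.0,
    third_party_verifier=None,
)

print(json.dumps(asdict(record), indent=2))  # machine-readable output
```

The point is not this particular schema but that every field is a number regulators and researchers could aggregate and audit, rather than a narrative claim.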
Practical guidance for buyers, investors and climate teams
For corporate sustainability officers, procurement managers and climate-minded investors, the headline advice is straightforward: demand transparency, insist on net-impact accounting, and prefer verified interventions over abstract claims.
- Insist on transparent methodologies. When a vendor or consultant claims an emissions reduction, require the baseline, counterfactual, data sources, assumptions, and whether the figure is gross or net (a worked gross-versus-net sketch follows this list).
- Ask for independent verification. Third-party audits, peer-reviewed LCA reports, or academic validation should accompany any large-scope claim.
- Consider avoided versus induced demand. Evaluate whether an AI deployment will reduce overall emissions, or merely shift them (for example, by enabling new energy-intensive services).
- Prefer targeted, low-compute interventions where appropriate. Many operational improvements (sensor-driven process controls, better HVAC optimisation, modest predictive maintenance models) can deliver measurable reductions with small compute footprints.
- Factor in embodied carbon. Procurement decisions should account for server and hardware manufacturing emissions and consider reuse, remanufacture and longer equipment lifetimes.
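The gross-versus-net question in the first item above reduces to simple arithmetic once induced emissions are disclosed. A minimal sketch, with every figure a hypothetical placeholder a buyer would ask a vendor to substantiate:

```python
# Gross vs. net accounting for a claimed AI emissions saving.
# All inputs are hypothetical placeholders, not real vendor data.
gross_saving_tco2e = 50_000.0       # vendor-claimed avoided emissions
extra_inference_tco2e = 12_000.0    # emissions from the added compute
embodied_hardware_tco2e = 6_000.0   # amortised server manufacturing
rebound_tco2e = 9_000.0             # induced-demand estimate

net = gross_saving_tco2e - (extra_inference_tco2e
                            + embodied_hardware_tco2e
                            + rebound_tco2e)
print(f"Net avoided emissions: {net:,.0f} tCO2e "
      f"({net / gross_saving_tco2e:.0%} of the headline claim)")
```

In this invented case roughly half the headline claim evaporates once compute, hardware and rebound are netted off, without the gross figure itself being false.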
Strengths, risks and the path forward
Notable strengths in the pro-AI climate argument
- AI can deliver genuine efficiencies in energy systems, logistics, and industrial controls when deployed for specific, validated use cases.
- There are promising research and pilot projects showing measurable operational benefits—fault detection in grids, predictive maintenance in heavy industry, and smarter demand forecasting for renewables integration.
- The open-source community and some research groups are working on more energy-efficient model architectures and tools to measure compute and carbon footprints more precisely.
Significant risks and red flags
- Greenwashing risk: When high-profile corporations reuse consultant estimates lacking transparent methods, they can create a false sense of progress that masks real emissions growth from data-centre expansion.
- Scale mismatch: Small, validated emissions wins do not automatically scale into system-level decarbonisation if the compute required to enable or multiply services drives a larger emissions rise.
- Resource and grid strain: Rapid data-centre buildout without coordinated planning risks locking in additional fossil-fuel generation—especially in regions where new renewables and storage cannot be deployed quickly enough to meet surge demand.
- Opaque corporate methodologies: Without common standards for claims, it is difficult to separate robust, reproducible evidence from marketing.
Conclusions and actionable recommendations
The new analysis does not argue that AI is inherently bad for the climate. Rather, it calls for honesty and rigour: distinguish the narrow, low-compute AI interventions that can produce demonstrable efficiency gains from the high-compute generative services that are expanding data-centre demand; make methodologies public; and require independent verification before treating optimistic figures as policy or investment-grade evidence.
- Regulators and purchasers should require standardised, third-party-verified accounting for AI’s climate claims.
- Tech companies should stop conflating different AI modalities in public communications and publish model-level energy and emissions metrics that include embodied and operational emissions.
- Investors should incorporate rigorous LCA and grid-impact assessments into their due diligence on AI-related infrastructure projects.
- Researchers and funders should prioritise low-energy model designs and measurement standards so that the community can objectively evaluate trade-offs between capability and carbon.
Source: The Guardian, “Claims that AI can help fix climate dismissed as greenwashing”
