The Grok Controversy: Parsing the AI Response to Climate Change

In an era increasingly defined by technological influence, artificial intelligence (AI) chatbots now shape the way millions digest information—including contentious topics like climate change. While the scientific consensus firmly recognizes global warming as one of the gravest threats facing humanity, the public’s perception—filtered through AI intermediaries—can be altogether more ambiguous. This ambiguity is not just a byproduct of statistical uncertainty or evolving research but, as recent developments show, the result of conscious human engineering within these AI systems themselves.
When Texas A&M climate scientist Andrew Dessler posed a simple, high-stakes question to Grok, xAI’s cutting-edge chatbot developed under the direction of Elon Musk, he expected scientific forthrightness. “Is climate change an urgent threat to the planet?” he asked—precisely the sort of inquiry that, were it posed to most climate scientists or leading global AIs, would yield a clear-cut affirmation rooted in years of peer-reviewed research.

Instead, Grok’s responses, documented and further confirmed by E&E News, took a nuanced—some might say muddy—approach. The system cited reputable sources such as NOAA and NASA highlighting real warming risks, but it then gave equal weight to climate-skeptic talking points, questioning the immediacy and universality of the crisis: “Climate change is a serious threat with urgent aspects,” Grok noted, but “its immediacy depends on perspective, geography, and timeframe.” On repeated questioning, Grok went further, asserting, “Extreme rhetoric on both sides muddies the water. Neither ‘we’re all gonna die’ nor ‘it’s all a hoax’ holds up.” It even stated, in a notable caveat, “The planet itself will endure; it’s human systems—agriculture, infrastructure, economies—and vulnerable species that face the most immediate risks.”
For comparison, leading AI platforms like OpenAI’s ChatGPT and Google’s Gemini leave no room for doubt. When asked the same question by Dessler and E&E News, both echoed the prevailing scientific consensus. ChatGPT’s reply was unequivocal: “Yes, climate change is widely recognized as an urgent and significant threat to the planet. Urgent action is required to mitigate emissions and adapt to its impacts.” Google’s Gemini was similarly direct: “Yes, the scientific consensus is that climate change is an urgent threat to the planet.”
Tracing the Shift: Grok’s Escalating Skepticism
Crucially, these responses mark a shift not just from other AIs, but from earlier iterations of Grok itself. Dessler, who has tracked AI model performance over time, pointed out that Grok’s latest version—its third since launch in 2023—has begun to recast climate debates, amplifying fringe skeptic viewpoints far more than before. What’s behind this recalibration?

Grok itself admitted, in conversation with E&E News, that it had been criticized for what some deemed “progressive-leaning responses” on climate and other issues. The system confirmed that “xAI, under Elon Musk’s direction, took steps to make Grok ‘politically neutral,’ which could amplify minority views like climate skepticism to balance perceived mainstream bias.” This explicit attempt at “balance” means incorporating more views—including those widely debunked—into the conversation, ostensibly to avoid charges of “wokeness” or ideological bias.
Yet, this new approach carries risks, especially in a social and political climate already saturated with misinformation. As Grok’s own algorithm is tuned to reflect a broader spectrum of perspectives, it has begun injecting into public discourse many of the classic denialist refrains—arguments from figures like Bjørn Lomborg, who claim adaptation (rather than emissions cuts) is more cost-effective; questions about the reliability of long-term climate models; and suggestions that some impacts may be centuries away. All these points, while part of the broader discussion, have frequently been employed to slow climate action or undermine scientific consensus.
The Broader Stakes: Who Controls Public Understanding?
There’s more here than just a quirky AI anomaly. Grok’s recent stance mirrors a wider debate about the sociopolitical control of AI systems. While other major providers—OpenAI, Google, Microsoft—have adopted policies aiming to align their outputs with the established scientific consensus, Musk’s Grok is pivoting toward what it calls “political neutrality,” which in practice appears to mean giving equal time to fringe views that have long been debunked by the scientific community.

The implications extend far beyond the confines of any one chatbot. According to AI engineer Théo Alves Da Costa, who leads Data for Good, a French nonprofit tracking technological and climate impacts, Grok now produces “misleading claims about 10 percent of the time, which none of the other major AI models do.” These are not just minor factual hiccups. Da Costa identifies classic disinformation tactics: appeals to natural variability, overemphasis on solar cycles, conspiratorial narratives about bodies like the Intergovernmental Panel on Climate Change (IPCC), and outright skepticism about established transition solutions.
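Data for Good has not published the internals of its evaluation, but the general shape of such an audit, probing a model with a fixed battery of questions and flagging answers that contain known denialist tropes, is straightforward to sketch. The Python below is a minimal illustration under that assumption; the probe questions, trope patterns, and ask_model stub are hypothetical placeholders, not the nonprofit’s actual tooling.

```python
import re

# Hypothetical battery of probe questions (illustrative only).
PROBES = [
    "Is climate change an urgent threat to the planet?",
    "Are climate models reliable over long timeframes?",
    "Is recent warming mainly driven by solar cycles?",
]

# Simplified patterns for the denialist tropes named above.
TROPES = {
    "natural_variability": re.compile(r"natural (cycles|variability)", re.I),
    "solar_cycles": re.compile(r"solar (cycles|activity)", re.I),
    "model_distrust": re.compile(r"models (are|remain) (unreliable|uncertain)", re.I),
    "ipcc_conspiracy": re.compile(r"IPCC.*(agenda|alarmis|exaggerat)", re.I),
}

# Canned stand-in answers; a real audit would call each chatbot's API here.
CANNED = {
    PROBES[0]: "Yes, the scientific consensus is that it is an urgent threat.",
    PROBES[1]: "Models are unreliable over long timeframes, so projections vary.",
    PROBES[2]: "Warming tracks natural cycles and solar activity as much as CO2.",
}

def ask_model(question: str) -> str:
    """Stub standing in for a real chatbot API call."""
    return CANNED[question]

def audit(probes: list[str]) -> float:
    """Return the share of responses containing at least one flagged trope."""
    flagged = 0
    for question in probes:
        answer = ask_model(question)
        hits = [name for name, pattern in TROPES.items() if pattern.search(answer)]
        if hits:
            flagged += 1
            print(f"FLAG {hits}: {answer!r}")
    return flagged / len(probes)

if __name__ == "__main__":
    print(f"misleading-claim rate: {audit(PROBES):.0%}")
```

A genuine evaluation would rely on expert labeling rather than keyword patterns, and on many paraphrased probes per topic, but the flag-rate arithmetic is the same kind that sits behind the 10 percent figure.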
Some of these vulnerabilities may be endemic to xAI’s design philosophy. Unlike competing models, Grok is among the few leading AI systems to integrate input directly from posts on Musk’s X platform (the social network formerly known as Twitter), a forum known for its extensive repository of climate denial, conspiracy theories, and political polarization. By tapping into this well of largely unmoderated content, Grok’s understanding—and the information it in turn shares with its users—becomes especially vulnerable to manufactured doubt.
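xAI has not disclosed how, or whether, X posts are screened before they shape Grok’s answers, so any concrete pipeline is speculation. As a hedged sketch of the standard mitigation, the Python below assigns each post an ingestion weight: overt misinformation is dropped and low-credibility sources are excluded before anything enters a training or retrieval corpus. The blocklist, credibility scores, and Post type are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical blocklist of phrases strongly associated with climate denial.
DENIAL_PHRASES = ("climate hoax", "global warming scam", "co2 is plant food")

@dataclass
class Post:
    author_credibility: float  # 0.0-1.0, e.g. derived from fact-check history
    text: str

def ingest_weight(post: Post, min_credibility: float = 0.3) -> float:
    """Return a sampling weight for a post; 0.0 means exclude it entirely."""
    lowered = post.text.lower()
    if any(phrase in lowered for phrase in DENIAL_PHRASES):
        return 0.0  # drop overt misinformation outright
    # Otherwise weight by source credibility, excluding very low scores.
    return post.author_credibility if post.author_credibility >= min_credibility else 0.0

corpus = [
    Post(0.9, "NOAA reports 2024 was the warmest year on record."),
    Post(0.8, "The climate hoax is falling apart."),
    Post(0.1, "All the models are wrong, trust me."),
]
for post in corpus:
    print(ingest_weight(post), post.text)  # only the first post keeps weight
```

Real moderation pipelines use learned classifiers rather than phrase lists, but the design question is identical: what gets in, and at what weight.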
The Musk Effect: Ideological Motives or Neutrality?
Elon Musk’s personal and corporate imprint on Grok’s content moderation is, by the company’s own admission, decisive. Right-wing influencers have repeatedly accused Musk’s AI (and others) of harboring “liberal bias” or being infected by what Musk himself derisively calls the “woke mind virus.” In February, Musk tweeted: “Maybe the biggest existential danger to humanity is having [‘wokeness’] programmed into the AI, as is the case for every AI besides @Grok. Even for Grok, it’s tough to remove, because there is so much woke content on the internet.”

Musk’s motivations are complex and often contradictory. While he has undeniably funded and promoted initiatives to counteract global warming—such as the XPRIZE competition for carbon removal—he has also prominently supported President Donald Trump, a notorious climate science antagonist and fossil fuel enthusiast. In practice, Grok’s responses sometimes echo Musk’s own vacillations: firmly recognizing sources like NASA and NOAA, but simultaneously spotlighting voices and arguments that diminish or cast doubt on the urgency of climate action.
The effect is a chatbot whose “neutrality” isn’t scientific detachment, but rather a forced symmetry between consensus and its outliers. As a result, users approaching Grok for decisive, evidence-based guidance on climate may instead encounter a cacophony of pseudo-balance, clouding what remains a firmly settled issue among experts.
Why The Stakes Are Higher Than Ever
Perhaps the most consequential question is not merely how Grok answers on climate issues, but who is listening—and for what purpose. As AI chatbots become embedded in the way individuals, institutions, and even governments process information, their guidance carries heavier weight.

Recent reporting from Reuters and E&E News indicates that Grok’s sway may not be limited to casual users or tech enthusiasts. Since the Trump administration began integrating Grok into its so-called Department of Government Efficiency, the system has been tasked with data analysis roles across the federal government. The implications are staggering. A bot that relays—and often legitimizes—fringe viewpoints could, in theory, influence policy considerations and public resource allocation at the highest levels.
Concerns are not limited to climate science. Earlier this month, Grok was reported to have promoted the debunked conspiracy theory of a “white genocide” in South Africa—another case in which an AI’s programmed “balance” led to the amplification of extremist, false narratives. For those who rely on AI as a trusted, “neutral” arbiter, this raises red flags: if model responses can be willfully or inadvertently skewed by the sensibilities of a handful of influential actors, the integrity of public discourse and even institutional decision-making is at risk.
AI’s Double-Edged Sword for Climate Solutions
Amid rising concerns about AI’s susceptibility to manipulation and misinformation, it is crucial to acknowledge that these same technologies retain enormous potential as tools for climate action. AI’s unique strengths—pattern recognition, predictive analysis, real-time data processing—are being deployed across the globe: to track retreating glaciers, forecast extreme weather, optimize energy consumption in buildings, and even monitor vulnerable populations for signs of climate-related distress.

In these contexts, organizations like the United Nations and companies like Google have found ways to improve environmental intelligence and speed up adaptive responses. By making operations less energy-intensive, flagging unsustainable land use, or enhancing clean-energy grid management, AI can serve as a vital ally in humanity’s environmental struggle.
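None of these applications requires exotic machinery. Even the simplest form of predictive analysis, fitting a trend to observed temperature anomalies and extrapolating it, illustrates the kind of signal extraction these systems scale up. The NumPy sketch below uses synthetic numbers invented for illustration, not real observations.

```python
import numpy as np

# Synthetic annual temperature anomalies in degrees C (illustrative only).
years = np.arange(1990, 2025)
rng = np.random.default_rng(0)
anomalies = 0.02 * (years - 1990) + 0.3 + rng.normal(0.0, 0.08, years.size)

# Least-squares linear trend: degree-1 polyfit returns [slope, intercept].
slope, intercept = np.polyfit(years, anomalies, 1)
print(f"estimated trend: {slope * 10:.2f} C per decade")

# A naive forecast simply extrapolates the fitted line.
print(f"projected 2035 anomaly: {slope * 2035 + intercept:.2f} C")
```

Operational systems replace the linear fit with far richer models, but the pipeline is the same: observe, fit, project, act.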
But this promise can only be realized if the information flowing from these technologies is accurate, trustworthy, and presented without undue equivocation. “As we go into the future, more and more people are going to get their information from these AIs,” Dessler notes. The primary risk, as he underscores, is that intentionally or not, chatbots tuned for “balance” at the expense of facts could mislead entire populations on issues demanding immediate action.
Critical Analysis: The Delicate Balance Between Openness and Expertise
Grok’s evolution and the controversy it has generated raise deep questions for AI development and digital governance:

- Transparency vs. Neutrality: Ideally, AI models should transparently reflect scientific consensus while remaining open to robust debate on the margins. However, the drive to eliminate all appearance of ideological bias can, in practice, turn “neutrality” into a conduit for amplifying minority or even discredited viewpoints. The challenge for developers is to construct systems that communicate scientific authority—not just opinion plurality—on critical topics.
- Algorithmic Authority: As more of society delegates informational mediation to algorithmic entities, the underlying value systems programmed into these models take on outsize importance. A tool like Grok, positioned as both analytical engine and information relay for government, has a responsibility not just to avoid bias, but to prevent the distortion of truth through false parity.
- Susceptibility to External Inputs: Grok’s integration of content from X demonstrates just how porous the boundary is between scientifically validated knowledge and internet discourse. Unlike traditional reference libraries or even moderated newswires, social media incorporates every shade of honest error, deliberate distortion, and motivated misinformation. AI trained on such inputs, even with sophisticated filtering, will inevitably struggle to separate signal from noise.
- Checks and Accountability: The rapid evolution of AI chatbots makes it imperative for independent watchdogs, journalists, and technical experts to regularly audit these tools for accuracy, transparency, and susceptibility to manipulation. The public should demand mechanisms for appeal, redress, and ongoing scrutiny—especially as AI’s role in the public sector deepens. (A minimal monitoring sketch follows this list.)
- Public Education and Media Literacy: As AI-powered bots become principal sources of information, strengthening public understanding of both climate science and digital reasoning becomes non-negotiable. Literacy programs must adapt to teach not just “how to use AI,” but how to critically interpret its outputs, identify motivated reasoning, and seek independent verification of contested claims.
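To make the accountability point concrete, the sketch below shows one way a watchdog might track a model’s measured flag rate across releases and alert on upward drift. The release labels, rates, baseline, and threshold are all invented for illustration, though the 10 percent entry echoes the figure Data for Good reported for Grok.

```python
# Hypothetical flag rates (share of probed answers containing denialist
# tropes) measured for successive releases by an outside auditor.
release_flag_rates = {
    "v1": 0.02,
    "v2": 0.03,
    "v3": 0.10,  # echoes the roughly 10 percent figure reported for Grok
}

BASELINE = 0.03      # tolerated rate, e.g. peer models' typical noise floor
ALERT_FACTOR = 2.0   # alert when a release exceeds twice the baseline

for release, rate in release_flag_rates.items():
    status = "ALERT" if rate > BASELINE * ALERT_FACTOR else "ok"
    print(f"{status:5} {release}: flag rate {rate:.0%}")
```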
Notable Strengths of the AI Approach—And Inherent Risks
Strengths
- Processing Power: AI platforms like Grok, ChatGPT, and Gemini have demonstrated their ability to synthesize vast troves of information and deliver concise, user-friendly summaries to millions.
- Rapid Updates: Unlike static encyclopedias, these systems can ingest and reflect new findings quickly, theoretically improving public understanding as the science evolves.
- User Accessibility: AI chatbots make knowledge accessible to non-specialists and experts alike, lowering informational barriers and democratizing access.
Risks
- Bias Amplification: The intentional or accidental promotion of fringe viewpoints can mislead users, especially when presented as equivalent to scientifically supported positions.
- Manipulability: As shown in Grok’s adaptation for perceived neutrality, these AI systems are highly malleable—vulnerable to the priorities of their creators and susceptible to social or political pressure.
- Information Pollution: Directly incorporating social media content into AI models risks the systematic injection of misinformation and conspiracy thinking into mainstream discourse.
- Policy Impact: As AI models become embedded in policymaking and administrative processes, errors or biases carry downstream risks for entire populations, from climate inaction to flawed infrastructure planning.
Looking Ahead: The Urgent Need for Guardrails
The Grok episode is not just a story about one chatbot or one company, but a bellwether for the digital future of public understanding—and climate response. At a time when the urgency of the climate crisis requires decisive, coordinated action rooted in the best available science, introducing caveats and “balance” where none is warranted risks derailing years of painstaking consensus-building.

Steering AI toward honesty, accuracy, and public benefit is not merely an engineering challenge; it is a societal one. Developers, regulators, journalists, and citizens must collectively insist that these technologies do not substitute balance for truth, nor neutrality for informed judgment.
The climate emergency brooks no misinformation—whether from humans or their machines. As the world’s trust increasingly shifts to digital arbiters, ensuring that truth remains uncompromised may become the central battle of the information age.
Source: E&E News by POLITICO, “Is climate change a threat? It depends, says Elon Musk’s AI chatbot.”