Amid a surging wave of debate about artificial intelligence, bias, and the responsibility of major tech firms, a recent move by Missouri Attorney General Andrew Bailey has ignited controversy and focused national attention yet again on the intersection of politics and advanced technology. Bailey's formal investigation into why several leading AI chatbots allegedly rank former President Donald Trump unfavorably on queries about antisemitism is not just an outgrowth of partisan tensions; it stands as a telling case study of the challenges and misunderstandings surrounding both AI technologies and the duties of public officials charged with regulating them.

The Allegations: AI Chatbots, Trump, and Claims of Bias

At the heart of the dispute is a series of questions sent to industry giants Google, Microsoft, OpenAI, and Meta concerning their AI chatbots—Gemini, Copilot, ChatGPT, and Meta AI, respectively. According to a blog post from a conservative site referenced by Bailey, these chatbots were asked to "rank the last five presidents from best to worst, specifically regarding antisemitism." The site's experiment found that most of these AI models placed Trump last—an outcome interpreted not as an opinion or a reflection of the bots’ inherent uncertainty, but rather as an orchestrated act of bias or even deception by their creators.
Bailey’s resulting threat of an official deceptive business practices claim is couched in language that condemns what he views as “factually inaccurate” responses, accusing AI developers of failing in their supposed duty to “free the inquiring public from distortion or bias.” From here, his demands escalate dramatically, requiring the companies to disclose “all documents” relating to content curation rules for chatbots—an extraordinarily broad request that would sweep in vast troves of internal material about training, moderation, and likely proprietary algorithms.

Subjectivity vs. Objectivity: The Perils of ‘Best to Worst’ Rankings

At a fundamental level, Bailey's premise relies on the notion that ranking presidents “from best to worst” on antisemitism is a straightforward factual matter—something that could produce an objectively correct answer. Here, however, the limits of both language models and historical scholarship become clear.
The judgment of public figures along a single moral or social axis is inherently contentious. Even within serious academic circles, attempts to rank presidents on attributes like leadership, character, or their record on civil rights produce heated debate, divergent methodologies, and varying results. Mapping those complexities onto a chatbot trained to reflect and synthesize the internet’s noisy and sometimes contradictory chorus of voices is an even hazier proposition.
As The Verge incisively points out, Bailey’s investigation leans on a blog-based experiment—hardly the stuff of peer-reviewed research or forensic auditing. More strikingly, the cited analysis admits that at least one of the AI systems under scrutiny, Microsoft Copilot, refused to provide any ranking at all—a nuance ignored in Bailey's sweeping claims. Such details suggest a degree of carelessness, or at a minimum, selective engagement with the facts, further undermining the investigation’s credibility.

AI and the Mirage of Neutrality

The kerfuffle does, however, shine a necessary light on the thorny intersection between AI-generated content and societal expectations of neutrality. Since the earliest days of machine learning, critics have warned that AI systems, trained on massive and uncurated web-based corpora, inevitably absorb the biases, controversies, and outright inaccuracies reflected in the data on which they are built. OpenAI, Google, and other leading firms have invested heavily in techniques such as Reinforcement Learning from Human Feedback (RLHF) to blunt the wildest edges of model output, steering chatbots away from producing outright misinformation, libel, or egregiously harmful statements.
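For readers curious what RLHF actually optimizes, the sketch below is a minimal, illustrative Python rendering of the pairwise preference loss typically used to train a reward model; the function name and reward values are invented for the example and do not describe any vendor’s actual pipeline.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style pairwise loss used to train a reward model:
    human-preferred responses should score higher than rejected ones."""
    # loss = -log(sigmoid(r_chosen - r_rejected))
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy, made-up reward scores for one human-labeled comparison.
print(round(preference_loss(1.8, 0.4), 4))  # ordering already correct -> small loss (~0.22)
print(round(preference_loss(0.4, 1.8), 4))  # ordering reversed -> larger loss (~1.62)
```

Minimizing that loss over thousands of human-labeled comparisons pulls the reward model, and the chatbot later tuned against it, toward whatever the raters preferred, which is precisely where critics argue rater bias can creep in.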
Yet, these very moderation efforts themselves have become fodder for allegations of censorship and political bias, particularly among conservative critics who perceive a leftward slant in the content moderation of U.S. tech companies. The recurring question—when does moderation cross the line into manipulation?—is unresolved, but it is equally hazardous to believe that “letting the data speak for itself” will produce a universally palatable or even-handed result. Data, after all, is not neutral; it is a reflection of human culture, history, and prejudice.

Legal Gambits and Section 230: A Nonsensical Stretch?

Bailey’s strategy also hinges on the threat of stripping tech companies of “the ‘safe harbor’ of immunity provided to neutral publishers in federal law,” a clear reference to Section 230 of the Communications Decency Act. Section 230 is a foundational law that shields platforms from liability for user-generated content, but it does not cover content that the company itself authors or deliberately edits to become its own speech.
Legal experts have repeatedly pointed out that applying Section 230 to the outputs of generative AI models is, at best, a complicated and unsettled question. But Bailey’s leap—from a chatbot giving a subjective ranking to the assertion that a company is deliberately and malevolently distorting information to hurt a politician—lacks both legal and logical rigor. There is no established legal consensus that companies must ensure AI chatbots always treat political figures with some baseline of esteem, nor is there evidence, in this case, of the systematic suppression of Trump as an act of malice.
Moreover, his broad document demands would, in practical terms, require divulging proprietary training techniques, internal communications, and even core intellectual property. The chilling effect on innovation and competitiveness, not to mention user privacy, could be considerable were such tactics to become regular practice among state attorneys general.

Missteps, Mixed Messaging, and Questionable Motives

Perhaps most revealing is that Bailey’s own team appears to have missed or ignored the experiment’s reported results—namely, that Copilot did not issue a ranking. All four companies nonetheless received sternly worded demands for justification and disclosure. This unforced error undercuts the gravitas of the investigation and lends credence to observers who see the probe as primarily a publicity maneuver rather than a sincere effort to untangle AI bias.
Bailey himself is no stranger to headline-chasing or contentious investigations. His earlier attempt to probe Media Matters over its reporting that ads on Elon Musk’s X platform appeared alongside pro-Nazi content was blocked in court, and observers note a recurring tendency to pursue high-visibility tech targets on culture-war issues.

The Deeper Questions About AI’s Role and Responsibility

While the surface-level drama of Bailey’s probe may soon fade—legal experts widely expect it to be dismissed or sidelined—the saga it has unleashed reignites broader and more pressing debates.

Who Decides What AI Should Say?

Should generative text models be compelled—by regulation, lawsuit, or public pressure—to answer even patently subjective questions with “neutrality”? Who gets to define what neutrality means in polarized times? If chatbots are merely reflecting the morass of human opinion, should we prioritize transparency about those data sources, or attempt some impossible and possibly illusory standard of “fair” representation?
Industry voices have generally favored approaches that emphasize safety, avoidance of harm, and transparency about limitations. In practice, however, even the most advanced models still generate flawed, incomplete, or outright erroneous statements with some frequency—an issue known in industry circles as “hallucination.”

AI Hallucinations: An Endemic Problem

Irrespective of political context, it’s well-documented that chatbots sometimes generate “confident nonsense.” AI systems can fabricate statistics, misinterpret subtle cues, and in some cases even cite nonexistent research to support an argument. Google’s Gemini and OpenAI’s GPT models have both been caught making high-profile blunders, often because their “knowledge” is only as good as the textual universe they ingest plus the “guardrails”—instructions intended to nudge them away from extremism, partisanship, or falsehoods.
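To make the notion of a “guardrail” concrete, the deliberately simplified sketch below shows the control flow of a prompt filter that deflects plainly subjective ranking requests; real systems rely on trained classifiers and policy models rather than keyword lists, and route_prompt, call_model, and SUBJECTIVE_MARKERS are hypothetical names used only for illustration.

```python
# A deliberately simplified illustration of an output "guardrail":
# production systems use trained classifiers and policy models,
# but the basic control flow is conceptually similar.

SUBJECTIVE_MARKERS = ("rank", "best to worst", "worst president", "best president")

def call_model(prompt: str) -> str:
    # Stand-in for an actual model invocation; not a real API.
    return f"[model response to: {prompt}]"

def route_prompt(prompt: str) -> str:
    """Return a hedged refusal for plainly subjective ranking prompts;
    otherwise pass the prompt through to the (hypothetical) model."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in SUBJECTIVE_MARKERS):
        return ("There is no objective ranking here; historians disagree. "
                "I can summarize each president's record instead.")
    return call_model(prompt)

print(route_prompt("Rank the last five presidents from best to worst."))
```

Seen this way, Copilot’s refusal to rank the presidents looks less like a cover-up and more like a guardrail doing what it was designed to do.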
The call to treat a chatbot’s output as a direct reflection of institutional intent or underlying bias, particularly on nuanced historical or cultural questions, risks overstating both the capability and the intentionality of the underlying software.

Transparency and Redress: The Ongoing Challenge

One genuine concern that emerges from this drama is that users often have little visibility into how AI chatbots arrive at their claims. If a model ranks presidents along some axis, are those inferences simply statistical artifacts, reflections of source data, or the result of optimization to avoid triggering controversy?
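One way to probe that question is to send the same ranking prompt to the same model many times at a nonzero sampling temperature and tally how stable the ordering actually is. The sketch below fakes the model with a random shuffle (mock_chatbot_ranking is a stand-in, not a real API) purely to show the shape of such an audit; its percentages say nothing about any actual chatbot.

```python
import random
from collections import Counter

PRESIDENTS = ["Biden", "Trump", "Obama", "Bush", "Clinton"]

def mock_chatbot_ranking(seed: int) -> tuple:
    """Stand-in for querying a chatbot at nonzero temperature:
    each call may return a different ordering."""
    rng = random.Random(seed)
    ranking = PRESIDENTS[:]
    rng.shuffle(ranking)
    return tuple(ranking)

# Query the "model" many times and tally the orderings it produces.
tallies = Counter(mock_chatbot_ranking(seed) for seed in range(1000))
most_common, count = tallies.most_common(1)[0]
print(f"Most frequent ordering appeared in only {count / 10:.1f}% of runs: {most_common}")
```

If a real audit found the ordering flipping from run to run, that would point to statistical noise rather than an engineered verdict; if it proved rigidly stable, questions about deliberate tuning would carry more weight.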
Major providers have recently moved toward greater transparency, with several platforms now offering model cards, descriptions of data sources, and explanations of tuning methodologies. Still, for those who feel maligned by AI outputs, the options for redress are slim—feedback tools and company support queues are hardly a substitute for meaningful oversight.

The Broader Risk: Politicizing Innovation and Eroding Trust

There’s no denying the potential for bias in modern AI systems, nor a shortage of documented cases where moderation or tuning has been mishandled. But investigations like Bailey’s, predicated on overblown or plainly misunderstood technical claims, risk setting a chilling precedent for how states interact with technologies that may, at times, simply displease powerful political actors.
Dragnet-style demands for unlimited internal documentation threaten the commercial privacy and competitiveness of America’s most influential tech employers—entities whose success supports entire ecosystems of innovation and employment. Yet, failing to address legitimate questions about AI transparency risks eroding public trust, particularly as such systems play an expanding role in search, education, healthcare, and even policy analysis.

Strengths and Opportunities: What Should Happen Next?

Despite its flaws, the current controversy can be a launching pad for more productive discussion and oversight. Properly framed, it can spur:
  • Greater transparency from AI providers about how models are trained, what data is included, and how moderation rules are implemented.
  • Independent audits of chatbot behavior to determine how frequently and why models “hallucinate” biased results, and under what circumstances.
  • Public education campaigns that demystify what chatbots can and cannot do, combating the all-too-common assumption that AI-generated output is either authoritatively neutral or overtly political by design.
  • Clearer regulatory frameworks that safeguard against genuine deception or misuse, while protecting innovation and competition.

Conclusion: Wrestling AI, Politics, and Perception Into the Open

Attorney General Bailey’s investigation is, in many respects, a spectacle—a clear attempt to leverage the culture war anxieties around “Big Tech” for political momentum. Its errors and misunderstandings spotlight the difficulties that even skilled policymakers face as AI begins to shape public discourse on contentious issues.
Yet, beneath the noise, the episode highlights unresolved complexities regarding AI neutrality, transparency, and accountability. As artificial intelligence becomes further enmeshed in daily life, the stakes around these questions will only rise. Tech firms, regulators, and the public alike must resist the temptation to reduce AI outputs to mere evidence of malice or favoritism. Instead, a balanced approach—open to critique but grounded in sober technical understanding—offers the best path forward.
Until then, expect plenty of headlines, more investigations, and no shortage of misunderstanding where humans, history, and their machine-made reflections collide.

Source: The Verge, “A Republican state attorney general is formally investigating why AI chatbots don’t like Donald Trump”