AI assistants can, and do, confidently tell strangers that you committed a crime, voted a different way, or hold beliefs you don't. When that happens, the damage is immediate, hard to correct, and increasingly baked into products people use for hiring, vetting, and decision-making.
This is not hypothetical: the generative engines that power ChatGPT, Microsoft Copilot, Google AI, Perplexity and others regularly synthesize a single, authoritative‑sounding answer from a web of messy...