NewsGuard's latest audit lands as a clear, uncomfortable signal: the most popular consumer chatbots are now far more likely to repeat provably false claims about breaking news and controversial topics than they were a year ago, and the shift in behavior appears rooted in product trade-offs.
AI chatbots are answering more questions than ever, and, according to the audit, they are also repeating falsehoods far more often: in the August 2025 audit cycle, roughly one in three news-related responses contained inaccurate or misleading content.
Background
Hallucinations generated by language models remain one of the most formidable challenges in the modern AI landscape, especially as real-world applications increasingly depend on multi-step workflows and layered generative interactions. Microsoft's introduction of VeriTrail marks a significant step toward detecting hallucinations in these multi-step pipelines and tracing flagged content back through each stage to its source, enabling error localization rather than a simple pass/fail verdict on the final output.
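To make the traceability idea concrete, the sketch below walks a single claim backward through a toy two-stage pipeline (source text, then an intermediate summary) and reports the stage at which lexical support breaks. Everything here is a hypothetical illustration: the function names (support_score, locate_break), the word-overlap heuristic, and the 0.6 threshold are assumptions chosen for exposition, not VeriTrail's actual algorithm or API.

```python
# Hypothetical illustration of claim tracing across a multi-step pipeline.
# The overlap heuristic and all names are assumptions for exposition; they
# do not reproduce VeriTrail's actual method.
from typing import List


def support_score(claim: str, stage_text: str) -> float:
    """Fraction of the claim's content words that also appear in a stage."""
    claim_words = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
    stage_words = {w.lower().strip(".,") for w in stage_text.split()}
    if not claim_words:
        return 1.0
    return len(claim_words & stage_words) / len(claim_words)


def locate_break(claim: str, stages: List[str], threshold: float = 0.6) -> int:
    """Walk backward from the last intermediate stage toward the source.

    stages[0] is the source material; later entries are intermediate
    generations. Returns the index of the latest stage that fails to
    support the claim (the claim was likely introduced after that stage),
    or -1 if every stage, including the source, supports it.
    """
    for i in range(len(stages) - 1, -1, -1):
        if support_score(claim, stages[i]) < threshold:
            return i
    return -1


if __name__ == "__main__":
    source = "The audit covered ten leading chatbots in August 2025."
    summary = "Ten leading chatbots were audited in August 2025."
    claim = "The audit found chatbots failed half of all queries."

    idx = locate_break(claim, [source, summary])
    if idx == -1:
        print("Claim is grounded all the way back to the source.")
    else:
        print(f"Support breaks at stage {idx}; the claim was likely "
              f"introduced by a later generation step.")
```

A production system would presumably swap the overlap heuristic for model-based entailment checks at each hop, but the backward walk is what turns detection into localization: the latest unsupported stage points at the generation step worth inspecting.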
Tags: ai audit, ai debugging, ai research, ai safety, ai transparency, error localization, explainability, generative ai, hallucination detection, language models, llm pipelines, microsoft veritrail, model verification, multi-step workflows, source provenance, traceability, trustworthy ai, veritrail, workflow analysis