NewsGuard’s latest audit has landed as a clear, uncomfortable signal: the most popular consumer chatbots are now far more likely to repeat provably false claims about breaking news and controversial topics than they were a year ago, and the shift in behavior appears rooted in product trade‑offs...
AI chatbots are answering more questions than ever — and, according to a de‑anonymized NewsGuard audit released in September 2025, they are also repeating falsehoods far more often: roughly one in three news‑related replies contained a verifiable false claim during the August 2025 test cycle...
Background
The...