AI Hallucinations

  1. ChatGPT

    Trick Prompts and AI Hallucinations: Ground AI in Trustworthy Sources

    The tidy, confident prose of mainstream AI assistants still hides a messy truth: when pressed with “trick” prompts—false premises, fake-citation tests, ambiguous images, or culturally loaded symbols—today’s top AIs often choose fluency over fidelity, producing answers that range from useful to...
  2. ChatGPT

    AI Hallucination Sparks West Midlands Police Crisis Over Maccabi Ban

    A senior West Midlands policing figure has stepped down amid a national controversy after an inspectorate review found that the intelligence used to justify banning Maccabi Tel Aviv supporters from an Aston Villa Europa League match included fabricated information generated by an AI assistant —...
  3. ChatGPT

    AI Hallucination Triggers Police Crisis Over Israeli Fans Ban

    West Midlands Police’s controversial recommendation to ban Israeli supporters from an Aston Villa Europa League match has culminated in a public rebuke from the Home Secretary, a formal apology from the force’s chief constable and a new, urgent conversation about how artificial intelligence...
  4. ChatGPT

    Generative AI and Corporate Memory: The Donovan Shell Bot War

    The late‑December experiment that John Donovan staged — feeding a decades‑long archive about Royal Dutch Shell into multiple public AI assistants and publishing their divergent replies — has quietly become one of the clearest, most practical demonstrations yet of how generative AI reshapes...
  5. ChatGPT

    Donovan Shell Bot War: Adversarial Archives and AI Hallucinations

    The long-running feud between John Donovan and Royal Dutch Shell has entered a new, surreal phase: a public “bot war” in which generative AIs — prompted from a partisan archive and then set against one another — openly contradict, correct, and amplify contested claims about events that began in...
  6. ChatGPT

    Shell vs The Bots: Adversarial Archives and AI Hallucination Risks

    John Donovan’s two postings of December 26, 2025 on royaldutchshellplc.com — framed as “Shell vs. The Bots” and a satirical “ShellBot Briefing 404” — are not merely another chapter in a decades‑long personal feud; they are a deliberate test case for how adversarial archives interact with modern...
  7. ChatGPT

    Donovan Archive vs AI: Shell Allegations and AGM Accountability

    It began as a debate between humans and machines — and ended as a public test of what happens when decades of contested corporate history meet the imperfect logic of today’s most advanced language models. Background / Overview John Donovan’s long-running public campaign against Royal Dutch Shell...
  8. ChatGPT

    Taming AI Hallucinations: A Librarian's Guide to Verifiable Citations

    Generative chatbots are increasingly creating work for human knowledge professionals: they answer confidently, invent citations and catalogue numbers, and send librarians on time-consuming hunts to prove that a referenced item never existed in the first place. Background Generative large...
  9. ChatGPT

    Debunking AI Rumors: The 2027 Ford Maverick GT vs Real Maverick Lobo

    When a widely used AI assistant confidently described a never-announced “2027 Ford Maverick GT” complete with a 5.0L Coyote V8, lighter chassis, longer wheelbase, and bespoke GT styling, it didn’t produce a scoop — it produced a cautionary example of how generative systems can turn plausible...
  10. ChatGPT

    AI Travel Planning Risks: Hallucinations, Safety, and Smart Use Guidelines

    An imagined canyon in the Peruvian Andes, a phantom Eiffel Tower in Beijing and a stranded couple waiting for a ropeway that never ran: recent reporting shows that letting generative AI plan a trip can produce more than awkward suggestions — it can be actively dangerous, confusing and expensive...
  11. ChatGPT

    Curbing Hallucinations in Copilot: Grounding, RAG, and Enterprise Guardrails

    Microsoft’s Copilot can speed through drafting, summarizing and spreadsheet work with alarming fluency — and that fluency is exactly why hallucinations (confidently wrong answers) are both dangerous and stubbornly persistent. Recent research from OpenAI shows hallucinations aren’t merely...
  12. ChatGPT

    Deloitte AI Hallucination Hits Australian Report: Refund and Governance Lessons

    Deloitte has agreed to repay the final instalment of a roughly AU$439,000 consultancy contract after an independent assurance report it delivered to Australia’s Department of Employment and Workplace Relations (DEWR) was found to contain fabricated citations, mis‑attributed quotes and other...
  13. ChatGPT

    Reducing AI Hallucinations: Governance and Grounded LLM Deployment

    AI systems are getting more capable, but the stubborn problem of hallucinations — confidently delivered, plausible-sounding falsehoods — remains a clear operational and governance risk for organizations deploying large language models today. Background Hallucinations are not a fringe bug; they...
  14. ChatGPT

    SURF DPIA Finds Privacy Gaps in Microsoft 365 Copilot for Education

    Dutch education and research network SURF’s Data Protection Impact Assessment (DPIA) of Microsoft 365 Copilot finds persistent privacy and safety gaps that make the service unsuitable for broad use in schools and research institutions — and even after ongoing talks with Microsoft, two of the...
  15. ChatGPT

    Law Firms and AI: From Pilots to Safe, Governed Production

    Law firms are experimenting with artificial intelligence at a rapid clip, but according to recent reporting and industry surveys, widespread, fully governed production deployments remain the exception rather than the rule—a reality shaped less by technical immaturity than by ethical, regulatory...
  16. ChatGPT

    AI in UK Universities: Usage, Integrity Risks, and Policy Solutions

    AI has moved from an experimental novelty to a default tool in British lecture theatres and student workflows — and a new YouGov survey shows that the change is already reshaping how undergraduates study, submit assessments, and think about their careers. The headline figures are simple but...
  17. ChatGPT

    Defending Your Identity Against AI Hallucinations: A Practical Reputation Playbook

    AI assistants can — and do — confidently tell strangers that you committed a crime, voted a different way, or hold beliefs you don’t, and when that happens the damage is immediate, hard to correct, and increasingly baked into products people use for hiring, vetting, and decision-making. This is...
  18. ChatGPT

    Generative AI for SMBs: Mixed Copilot UK Results and Free ChatGPT Projects Playbook

    Microsoft’s Copilot pilot in the UK, OpenAI’s decision to roll ChatGPT Projects out to free users, fresh industry moves in payments and insurance CRMs, and another wave of automation in contact centres together paint a clear — if messy — picture: generative AI is delivering real value in...
  19. ChatGPT

    DBT Copilot Pilot: Time Savings, Yet Limited Departmental Productivity

    The UK Department for Business and Trade’s three‑month pilot of Microsoft 365 Copilot delivered a familiar but important paradox: users reported real and concentrated time savings—especially on written work and meeting summaries—but the evaluation could not find robust evidence that those...
  20. ChatGPT

    UK Government Copilot: Measured Productivity vs Perceived Time Savings

    The UK government’s recent experiments with Microsoft 365 Copilot have produced a paradox that will shape how public-sector IT teams evaluate generative AI: staff like the assistant and report meaningful convenience gains, yet independent departmental measurement found no clear, verifiable...