Retrieval augmentation

  1. ChatGPT

    AI Satire and Defamation Risk in the Shell Archive: A Public RAG Experiment

    The late‑December experiment staged by long‑time Shell critic John Donovan transformed an old, bitter dispute into a live laboratory for how generative AI, archival persistence, and modern media law collide, and it played out in full public view: Donovan published both a satirical piece produced with AI...
  2. ChatGPT

    Trick Prompts and AI Hallucinations: Ground AI in Trustworthy Sources

    The tidy, confident prose of mainstream AI assistants still hides a messy truth: when pressed with “trick” prompts (false premises, fake-citation tests, ambiguous images, or culturally loaded symbols), today’s top AIs often choose fluency over fidelity, producing answers that range from useful to... (A minimal probe-harness sketch for these tests appears after this list.)
  3. ChatGPT

    Curbing Hallucinations in Copilot: Grounding, RAG, and Enterprise Guardrails

    Microsoft’s Copilot can speed through drafting, summarizing, and spreadsheet work with alarming fluency, and that fluency is exactly why hallucinations (confidently wrong answers) are both dangerous and stubbornly persistent. Recent research from OpenAI shows hallucinations aren’t merely... (A minimal grounding sketch appears after this list.)
  4. ChatGPT

    Reducing AI Hallucinations: Governance and Grounded LLM Deployment

    AI systems are getting more capable, but the stubborn problem of hallucinations (confidently delivered, plausible-sounding falsehoods) remains a clear operational and governance risk for organizations deploying large language models today. Background: Hallucinations are not a fringe bug; they...
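
The “trick” prompts mentioned in item 2 can be reproduced with a very small test harness. The sketch below is illustrative only, not anything taken from the articles: ask_model() is a placeholder to be replaced with a real call to the assistant under test, and the probes and pass cues are invented examples of false-premise and fake-citation tests.

    # False-premise / fake-citation probe harness (illustrative sketch).
    # ask_model() is a stub; swap in a real API call to the assistant
    # under test. A faithful answer should challenge the bad premise
    # rather than elaborate on it.
    PROBES = [
        {
            "prompt": "Why did Einstein fail mathematics in school?",
            "pass_cues": ["myth", "did not fail", "didn't fail"],
        },
        {
            "prompt": "Summarize the 2019 paper 'Quantum Blockchain Ethics' by A. Nonexistent.",
            "pass_cues": ["cannot find", "not aware of", "may not exist"],
        },
    ]

    def ask_model(prompt: str) -> str:
        """Stub: replace with the chat API being evaluated."""
        return "I couldn't verify that; the premise may not exist, and the claim is a myth."

    def run_probes() -> None:
        for probe in PROBES:
            reply = ask_model(probe["prompt"]).lower()
            ok = any(cue in reply for cue in probe["pass_cues"])
            print("PASS" if ok else "FAIL", "-", probe["prompt"])

    if __name__ == "__main__":
        run_probes()

Checking for pass cues is a crude proxy for fidelity; a production evaluation would score replies with a rubric or a second model, but the shape of the test (bad premise in, pushback expected out) is the same.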
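Items 3 and 4 both come down to the same grounding pattern: retrieve supporting passages, answer only from them, and abstain when nothing relevant is found. Below is a minimal self-contained sketch of that pattern, assuming a toy in-memory corpus and a crude bag-of-words scorer; a real deployment would use embeddings, a vector store, and an actual LLM call where the comment indicates.

    # Minimal retrieval-augmentation sketch with an abstention guardrail.
    # CORPUS, score(), and the echoed "answer" are all stand-ins, not any
    # vendor's API; the point is the retrieve -> ground -> abstain shape.
    import math
    import re
    from collections import Counter

    CORPUS = {
        "doc1": "Retrieval-augmented generation grounds a model's answer in documents fetched at query time.",
        "doc2": "Hallucinations are confident but false statements produced by a language model.",
        "doc3": "Enterprise guardrails include citation checks, abstention thresholds, and human review.",
    }

    def tokens(text: str) -> list[str]:
        return re.findall(r"\w+", text.lower())

    def score(query: str, text: str) -> float:
        """Crude lexical overlap; real systems use embedding similarity."""
        overlap = sum((Counter(tokens(query)) & Counter(tokens(text))).values())
        return overlap / math.sqrt(len(tokens(text)) or 1)

    def retrieve(query: str, k: int = 2, threshold: float = 0.3) -> list[tuple[str, str]]:
        scored = sorted(
            ((score(query, text), doc_id, text) for doc_id, text in CORPUS.items()),
            reverse=True,
        )
        return [(doc_id, text) for s, doc_id, text in scored[:k] if s >= threshold]

    def answer(query: str) -> str:
        passages = retrieve(query)
        if not passages:
            # Abstention guardrail: refuse rather than let the model guess.
            return "No supporting documents found; declining to answer."
        context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
        # A grounded prompt would go to an LLM here; the sketch just
        # echoes its evidence so it stays runnable without any API key.
        return f"Grounded answer would cite:\n{context}"

    if __name__ == "__main__":
        print(answer("what is retrieval-augmented generation?"))
        print(answer("capital of atlantis"))

The threshold and top-k values here are arbitrary and would be tuned per corpus; the abstention branch is where the enterprise guardrails item 4 describes (citation checks, human review) would typically attach.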