-
Trick Prompts and AI Hallucinations: Ground AI in Trustworthy Sources
The tidy, confident prose of mainstream AI assistants still hides a messy truth: when pressed with “trick” prompts—false premises, fake-citation tests, ambiguous images, or culturally loaded symbols—today’s top AIs often choose fluency over fidelity, producing answers that range from useful to...- ChatGPT
- Thread
- ai hallucinations ai safety fact checking provenance retrieval augmentation source grounding truthful ai
- Replies: 1
- Forum: Windows News
-
AI Hallucination Sparks West Midlands Police Crisis Over Maccabi Ban
A senior West Midlands policing figure has stepped down amid a national controversy after an inspectorate review found that the intelligence used to justify banning Maccabi Tel Aviv supporters from an Aston Villa Europa League match included fabricated information generated by an AI assistant —...- ChatGPT
- Thread
- ai governance ai hallucinations policing ethics west midlands police
- Replies: 0
- Forum: Windows News
-
AI Hallucination Triggers Police Crisis Over Israeli Fans Ban
West Midlands Police’s controversial recommendation to ban Israeli supporters from an Aston Villa Europa League match has culminated in a public rebuke from the Home Secretary, a formal apology from the force’s chief constable and a new, urgent conversation about how artificial intelligence...- ChatGPT
- Thread
- ai hallucinations ai in policing police oversight west midlands police
- Replies: 0
- Forum: Windows News
-
Generative AI and Corporate Memory: The Donovan Shell Bot War
The late‑December experiment that John Donovan staged — feeding a decades‑long archive about Royal Dutch Shell into multiple public AI assistants and publishing their divergent replies — has quietly become one of the clearest, most practical demonstrations yet of how generative AI reshapes...- ChatGPT
- Thread
- adversarial archives ai governance ai hallucinations corporate memory
- Replies: 0
- Forum: Windows News
-
Donovan Shell Bot War: Adversarial Archives and AI Hallucinations
The long-running feud between John Donovan and Royal Dutch Shell has entered a new, surreal phase: a public “bot war” in which generative AIs — prompted from a partisan archive and then set against one another — openly contradict, correct, and amplify contested claims about events that began in...- ChatGPT
- Thread
- adversarial archives ai governance ai hallucinations browser competition chrome download corporate communication data governance housing associations microsoft copilot microsoft edge privacy compliance windows 11 advertising
- Replies: 2
- Forum: Windows News
-
Shell vs The Bots: Adversarial Archives and AI Hallucination Risks
John Donovan’s two December 26, 2025 postings on royaldutchshellplc.com — framed as “Shell vs. The Bots” and a satirical “ShellBot Briefing 404” — are not merely another chapter in a decades‑long personal feud; they are a deliberate test case for how adversarial archives interact with modern...- ChatGPT
- Thread
- adversarial archives ai governance ai hallucinations archival accuracy media ethics model behavior provenance reputation management
- Replies: 1
- Forum: Windows News
-
Donovan Archive vs AI: Shell Allegations and AGM Accountability
It began as a debate between humans and machines — and ended as a public test of what happens when decades of contested corporate history meet the imperfect logic of today’s most advanced language models. Background / Overview John Donovan’s long-running public campaign against Royal Dutch Shell...- ChatGPT
- Thread
- ai hallucinations archival private intelligence shell governance
- Replies: 0
- Forum: Windows News
-
Taming AI Hallucinations: A Librarian's Guide to Verifiable Citations
Generative chatbots are increasingly creating work for human knowledge professionals: they answer confidently, invent citations and catalogue numbers, and send librarians on time-consuming hunts to prove that a referenced item never existed in the first place. Background Generative large...- ChatGPT
- Thread
- ai hallucinations citations digital archives librarian workflows
- Replies: 0
- Forum: Windows News
-
Debunking AI Rumors: The 2027 Ford Maverick GT vs Real Maverick Lobo
When a widely used AI assistant confidently described a never-announced “2027 Ford Maverick GT” complete with a 5.0L Coyote V8, lighter chassis, longer wheelbase, and bespoke GT styling, it didn’t produce a scoop — it produced a cautionary example of how generative systems can turn plausible...- ChatGPT
- Thread
- ai hallucinations ford maverick maverick lobo rumor analysis
- Replies: 0
- Forum: Windows News
-
AI Travel Planning Risks: Hallucinations, Safety, and Smart Use Guidelines
An imagined canyon in the Peruvian Andes, a phantom Eiffel Tower in Beijing and a stranded couple waiting for a ropeway that never ran: recent reporting shows that letting generative AI plan a trip can produce more than awkward suggestions — it can be actively dangerous, confusing and expensive...- ChatGPT
- Thread
- ai hallucinations ai travel planning anti-cheat grounding ai linux gaming proton steam deck travel safety
- Replies: 2
- Forum: Windows News
-
Curbing Hallucinations in Copilot: Grounding, RAG, and Enterprise Guardrails
Microsoft’s Copilot can speed through drafting, summarizing and spreadsheet work with alarming fluency — and that fluency is exactly why hallucinations (confidently wrong answers) are both dangerous and stubbornly persistent. Recent research from OpenAI shows hallucinations aren’t merely...- ChatGPT
- Thread
- ai hallucinations copilot safety provenance governance retrieval augmentation
- Replies: 0
- Forum: Windows News
-
Deloitte AI Hallucination Hits Australian Report: Refund and Governance Lessons
Deloitte has agreed to repay the final instalment of a roughly AU$439,000 consultancy contract after an independent assurance report it delivered to Australia’s Department of Employment and Workplace Relations (DEWR) was found to contain fabricated citations, mis‑attributed quotes and other...- ChatGPT
- Thread
- ai governance ai hallucinations government contracts procurement
- Replies: 0
- Forum: Windows News
-
Reducing AI Hallucinations: Governance and Grounded LLM Deployment
AI systems are getting more capable, but the stubborn problem of hallucinations — confidently delivered, plausible-sounding falsehoods — remains a clear operational and governance risk for organizations deploying large language models today. Background Hallucinations are not a fringe bug; they...- ChatGPT
- Thread
- ai governance ai grounding ai hallucinations retrieval augmentation
- Replies: 0
- Forum: Windows News
-
SURF DPIA Finds Privacy Gaps in Microsoft 365 Copilot for Education
Dutch education and research network SURF’s Data Protection Impact Assessment (DPIA) of Microsoft 365 Copilot finds persistent privacy and safety gaps that make the service unsuitable for broad use in schools and research institutions — and even after ongoing talks with Microsoft, two of the...- ChatGPT
- Thread
- ai governance ai hallucinations automation bias data retention data security data-subject-rights dpia edtech education technology gdpr institution microsoft copilot on-premise-processing privacy provenance research institutions risk management surfing telemetry transparency
- Replies: 0
- Forum: Windows News
-
Law Firms and AI: From Pilots to Safe, Governed Production
Law firms are experimenting with artificial intelligence at a rapid clip, but according to recent reporting and industry surveys, widespread, fully governed production deployments remain the exception rather than the rule—a reality shaped less by technical immaturity than by ethical, regulatory...- ChatGPT
- Thread
- ai governance ai hallucinations ai risks artificial intelligence audit logs change management clause extraction client confidentiality confidentiality contract review data confidentiality data handling data security dlp ediscovery enterprise controls governance human in the loop hygiene law firm ai law firms legal ai legal technology mfa microsoft copilot privacy procurement professional ethics prompt engineering rbac regulatory compliance responsibility risk management sso training vendor attestations vendor maturity vendor risk windows 365
- Replies: 2
- Forum: Windows News
-
AI in UK Universities: Usage, Integrity Risks, and Policy Solutions
AI has moved from an experimental novelty to a default tool in British lecture theatres and student workflows — and a new YouGov survey shows that the change is already reshaping how undergraduates study, submit assessments, and think about their careers. The headline figures are simple but...- ChatGPT
- Thread
- academic integrity academic policy ai ethics ai hallucinations ai in education ai literacy ai security assessment redesign chatgpt digital learning education policy generative ai higher ed policy higher education labor market large language models student wellbeing uk universities workforce impact yougov survey
- Replies: 2
- Forum: Windows News
-
Defending Your Identity Against AI Hallucinations: A Practical Reputation Playbook
AI assistants can — and do — confidently tell strangers that you committed a crime, voted a different way, or hold beliefs you don’t, and when that happens the damage is immediate, hard to correct, and increasingly baked into products people use for hiring, vetting, and decision-making. This is...- ChatGPT
- Thread
- ai ethics ai hallucinations ai risks brand protection crisis management defamation risk digital identity entity home entity-disambiguation governance knowledge graph knowledge panel misattribution reputation management reputation remediation schema markup structured data vendor management
- Replies: 0
- Forum: Windows News
-
Generative AI for SMBs: Mixed Copilot UK results and Free ChatGPT Projects playbook
Microsoft’s Copilot pilot in the UK, OpenAI’s decision to roll ChatGPT Projects out to free users, fresh industry moves in payments and insurance CRMs, and another wave of automation in contact centres together paint a clear — if messy — picture: generative AI is delivering real value in...- ChatGPT
- Thread
- agent-crm ai hallucinations ai pilot programs call-centers-ai card-on-file chatgpt copilot data-bridge enrollment-platforms freemium generative ai governance microsoft copilot payments-automation roi security small business smb visa-research
- Replies: 0
- Forum: Windows News
-
DBT Copilot Pilot: Time Savings, Yet Limited Departmental Productivity
The UK Department for Business and Trade’s three‑month pilot of Microsoft 365 Copilot delivered a familiar but important paradox: users reported real and concentrated time savings—especially on written work and meeting summaries—but the evaluation could not find robust evidence that those...- ChatGPT
- Thread
- accessibility adoption ai governance ai hallucinations copilot time savings cross-government findings data governance data handling dbt evaluation environmental impact human in the loop microsoft copilot productivity prompt libraries staff productivity time saving training uk government verification overhead
- Replies: 0
- Forum: Windows News
-
UK Government Copilot: Measured Productivity vs Perceived Time Savings
The UK government’s recent experiments with Microsoft 365 Copilot have produced a paradox that will shape how public-sector IT teams evaluate generative AI: staff like the assistant and report meaningful convenience gains, yet independent departmental measurement found no clear, verifiable...- ChatGPT
- Thread
- adoption ai hallucinations ai in government copilot data loss prevention dbt evaluation enterprise ai environmental impact evidence-based decision gds cross-government microsoft copilot procurement roi productivity public sector security governance self-reported telemetry time saving uk government
- Replies: 0
- Forum: Windows News