The tidy, confident prose of mainstream AI assistants still hides a messy truth: when pressed with “trick” prompts—false premises, fake-citation tests, ambiguous images, or culturally loaded symbols—today’s top AIs often choose fluency over fidelity, producing answers that range from useful to...
A senior West Midlands policing figure has stepped down amid a national controversy after an inspectorate review found that the intelligence used to justify banning Maccabi Tel Aviv supporters from an Aston Villa Europa League match included fabricated information generated by an AI assistant —...
West Midlands Police’s controversial recommendation to ban Israeli supporters from an Aston Villa Europa League match has culminated in a public rebuke from the Home Secretary, a formal apology from the force’s chief constable and a new, urgent conversation about how artificial intelligence...
The late‑December experiment that John Donovan staged — feeding a decades‑long archive about Royal Dutch Shell into multiple public AI assistants and publishing their divergent replies — has quietly become one of the clearest, most practical demonstrations yet of how generative AI reshapes...
The long-running feud between John Donovan and Royal Dutch Shell has entered a new, surreal phase: a public “bot war” in which generative AIs — prompted from a partisan archive and then set against one another — openly contradict, correct, and amplify contested claims about events that began in...
adversarial archives
ai governance
ai hallucinations
browser competition
chrome download
corporate communication
data governance
housing associations
microsoft copilot
microsoft edge
privacy compliance
windows 11 advertising
John Donovan’s two December 26, 2025 postings on royaldutchshellplc.com — framed as “Shell vs. The Bots” and a satirical “ShellBot Briefing 404” — are not merely another chapter in a decades‑long personal feud; they are a deliberate test case for how adversarial archives interact with modern...
It began as a debate between humans and machines — and ended as a public test of what happens when decades of contested corporate history meet the imperfect logic of today’s most advanced language models.
Background / Overview
John Donovan’s long-running public campaign against Royal Dutch Shell...
Generative chatbots are increasingly creating work for human knowledge professionals: they answer confidently, invent citations and catalogue numbers, and send librarians on time-consuming hunts to prove that a referenced item never existed in the first place.
Background
Generative large...
When a widely used AI assistant confidently described a never-announced “2027 Ford Maverick GT” complete with a 5.0L Coyote V8, lighter chassis, longer wheelbase, and bespoke GT styling, it didn’t produce a scoop — it produced a cautionary example of how generative systems can turn plausible...
An imagined canyon in the Peruvian Andes, a phantom Eiffel Tower in Beijing and a stranded couple waiting for a ropeway that never ran: recent reporting shows that letting generative AI plan a trip can produce more than awkward suggestions — it can be actively dangerous, confusing and expensive...
Microsoft’s Copilot can speed through drafting, summarizing and spreadsheet work with alarming fluency — and that fluency is exactly why hallucinations (confidently wrong answers) are both dangerous and stubbornly persistent. Recent research from OpenAI shows hallucinations aren’t merely...
Deloitte has agreed to repay the final instalment of a roughly AU$439,000 consultancy contract after an independent assurance report it delivered to Australia’s Department of Employment and Workplace Relations (DEWR) was found to contain fabricated citations, mis‑attributed quotes and other...
AI systems are getting more capable, but the stubborn problem of hallucinations — confidently delivered, plausible-sounding falsehoods — remains a clear operational and governance risk for organizations deploying large language models today.
Background
Hallucinations are not a fringe bug; they...
Dutch education and research network SURF’s Data Protection Impact Assessment (DPIA) of Microsoft 365 Copilot finds persistent privacy and safety gaps that make the service unsuitable for broad use in schools and research institutions — and even after ongoing talks with Microsoft, two of the...
Law firms are experimenting with artificial intelligence at a rapid clip, but according to recent reporting and industry surveys, widespread, fully governed production deployments remain the exception rather than the rule—a reality shaped less by technical immaturity than by ethical, regulatory...
ai governance
ai hallucinations
ai risks
artificial intelligence
audit logs
bar guidance
change management
clause extraction
client confidentiality
confidentiality
contract review
data confidentiality
data handling
data security
dlp
ediscovery
enterprise controls
governance
human in the loop
hygiene
law firm ai
law firms
legal ai
legal technology
mfa
microsoft copilot
privacy
procurement
professional ethics
prompt engineering
rbac
regulatory compliance
responsibility
risk management
sso
training
vendor attestations
vendor maturity
vendor risk
windows 365
AI has moved from an experimental novelty to a default tool in British lecture theatres and student workflows — and a new YouGov survey shows that the change is already reshaping how undergraduates study, submit assessments, and think about their careers. The headline figures are simple but...
academic integrity
academic policy
ai ethics
ai hallucinations
ai in education
ai literacy
ai security
assessment redesign
chatgpt
digital learning
education policy
generative ai
higher ed policy
higher education
labor market
large language models
student wellbeing
uk universities
workforce impact
yougov survey
AI assistants can — and do — confidently tell strangers that you committed a crime, voted a different way, or hold beliefs you don’t, and when that happens the damage is immediate, hard to correct, and increasingly baked into products people use for hiring, vetting, and decision-making.
This is...
Microsoft’s Copilot pilot in the UK, OpenAI’s decision to roll ChatGPT Projects out to free users, fresh industry moves in payments and insurance CRMs, and another wave of automation in contact centres together paint a clear — if messy — picture: generative AI is delivering real value in...
agent-crm
ai hallucinations
ai pilot programs
call-centers-ai
card-on-file
chatgpt
copilot
data-bridge
enrollment-platforms
freemium
generative ai
governance
microsoft copilot
payments-automation
roi
security
small business
smb
visa-research
The UK Department for Business and Trade’s three‑month pilot of Microsoft 365 Copilot delivered a familiar but important paradox: users reported real and concentrated time savings—especially on written work and meeting summaries—but the evaluation could not find robust evidence that those...
accessibility
adoption
ai governance
ai hallucinations
copilot time savings
cross-government findings
data governance
data handling
dbt evaluation
environmental impact
human in the loop
microsoft copilot
productivity
prompt libraries
staff productivity
time saving
training
uk government
verification overhead
The UK government’s recent experiments with Microsoft 365 Copilot have produced a paradox that will shape how public-sector IT teams evaluate generative AI: staff like the assistant and report meaningful convenience gains, yet independent departmental measurement found no clear, verifiable...
adoption
ai hallucinations
ai in government
copilot
data loss prevention
dbt evaluation
enterprise ai
environmental impact
evidence-based decision
gds cross-government
microsoft copilot
procurement roi
productivity
public sector
security governance
self-reported
telemetry
time saving
uk government