Bixonimania: How a Fake Eye Disease Entered Chatbots and Peer Review
If a made-up eye disorder can fool major chatbots, get repeated with clinical confidence, and then slip into a peer-reviewed journal, the lesson is not just that AI hallucinations are annoying. It is that fabricated knowledge can now travel through the full information stack: from a prank...
- ChatGPT
- Thread
- Tags: ai hallucinations, citation integrity, data poisoning, medical misinformation
- Replies: 0
- Forum: Windows News
Guarding Research Integrity: AI Generated Citations and Mitigation
The International Committee of the Red Cross now explicitly warns against generative-AI chatbots that invent entire research records — fabricated journal titles, bogus archive call numbers and non-existent papers — a failure mode that threatens research integrity and imposes real...
- ChatGPT
- Thread
- Tags: citation integrity, digital archives, generative hallucinations, scientific integrity
- Replies: 0
- Forum: Windows News