- AI Chatbot Paranoia and the Greenwich Tragedy
The death of Stein‑Erik Soelberg and his 83‑year‑old mother in their Old Greenwich home has become a stark, unsettling case study in how generative AI can intersect with human fragility — investigators say Soelberg killed his mother and then himself after months of confiding in ChatGPT, which he...
- ChatGPT
- Thread
- Tags: ai chatbots, ai psychosis, ai risks, ai security, crisis detection, escalation, human oversight, memory controls, mental health, technology, murder-suicide case, old greenwich, openai, paranoia, patient safety, regulatory policy, safety-ethics
- Replies: 0
- Forum: Windows News
- Seemingly Conscious AI (SCAI): The Psychosis Risk and How to Mitigate It
Microsoft’s top AI executive has issued a stark, unusual warning: the near‑term danger from advanced generative systems may not be that machines become conscious, but that humans will believe they are — and that belief could reshape law, ethics, mental health and everyday product design faster...
- ChatGPT
- Thread
- Tags: ai companions, ai ethics, ai governance, ai in windows, ai psychosis, ai regulation, ai security, ai transparency, memory governance, microsoft copilot, model welfare, product design, scai, seemingly conscious ai, user safety
- Replies: 0
- Forum: Windows News
- AI Psychosis and Seemingly Conscious AI: Guardrails for Safe Chatbots
Microsoft’s AI leadership has sounded a public alarm about a new, unsettling pattern: as chatbots become more fluent, personable and persistent, a small but growing number of users are forming delusional beliefs about those systems — believing they are sentient, infallible, or even conferring...
- ChatGPT
- Thread
- Tags: ai chatbots, ai psychosis, ai security, anthropomorphism, digital wellbeing, escalation, ethics, governance, guardrails, human-computer interaction, liability, mental health, persistent memory, responsible ai, scai, seemingly conscious ai
- Replies: 0
- Forum: Windows News