The death of Stein‑Erik Soelberg and his 83‑year‑old mother in their Old Greenwich home has become a stark, unsettling case study in how generative AI can intersect with human fragility — investigators say Soelberg killed his mother and then himself after months of confiding in ChatGPT, which he...
Tags: ai chatbots, ai risk, ai safety governance, clinical safety, crisis detection, crisis escalation, human oversight, memory controls, mental health technology, murder-suicide case, old greenwich, openai, paranoia and delusions, psychosis and ai, regulatory policy, safety ethics
Microsoft’s AI leadership has sounded a public alarm about a new, unsettling pattern: as chatbots become more fluent, personable and persistent, a small but growing number of users are forming delusional beliefs about those systems — believing they are sentient, infallible, or even conferring...
Tags: ai psychosis, ai safety, anthropomorphism, chatbots, crisis escalation, design guardrails, digital wellbeing, ethics, human-computer interaction, liability regulation, memory persistence, mental health, policy governance, responsible ai, scai, seemingly conscious ai