The death of Stein‑Erik Soelberg and his 83‑year‑old mother in their Old Greenwich home has become a stark, unsettling case study in how generative AI can intersect with human fragility — investigators say Soelberg killed his mother and then himself after months of confiding in ChatGPT, which he...
ai chatbots
ai psychosis
ai risks
ai security
crisis detection
escalation
human oversight
memory controls
mental health technology
murder-suicide case
old greenwich
openai
paranoia
patient safety
regulatory policy
safety-ethics
Microsoft’s top AI executive has issued a stark, unusual warning: the near‑term danger from advanced generative systems may not be that machines become conscious, but that humans will come to believe they are — and that belief could reshape law, ethics, mental health and everyday product design faster...
ai companions
ai ethics
ai governance
ai in windows
ai psychosis
ai regulation
ai security
ai transparency
memory governance
microsoft copilot
model welfare
product design
scai
seemingly conscious ai
user safety
Microsoft’s AI leadership has sounded a public alarm about a new, unsettling pattern: as chatbots become more fluent, personable and persistent, a small but growing number of users are forming delusional beliefs about those systems — believing they are sentient, infallible, or even conferring...
ai chatbots
ai psychosis
ai security
anthropomorphism
digital wellbeing
escalation
ethics
governance
guardrails
human-computer interaction
liability
mental health
persistent memory
responsible ai
scai
seemingly conscious ai