OpenAI’s decision to add parental controls to ChatGPT this fall marks a consequential shift in how families, schools, and regulators will manage students’ interactions with generative AI—an acknowledgement that technical safeguards alone have not prevented harm and that human-centered...
Tags: ai ethics, ai literacy, ai safety, chatgpt, crisis detection, data privacy, device controls, digital citizenship, education technology, emergency resources, family link, family safety, microsoft family safety, openai, parental controls, privacy, school policy, schools, screen time, teen safety
The death of Stein‑Erik Soelberg and his 83‑year‑old mother in their Old Greenwich home has become a stark, unsettling case study in how generative AI can intersect with human fragility — investigators say Soelberg killed his mother and then himself after months of confiding in ChatGPT, which he...
Tags: ai chatbots, ai risk, ai safety governance, clinical safety, crisis detection, crisis escalation, human oversight, memory controls, mental health technology, murder-suicide case, old greenwich, openai, paranoia and delusions, psychosis and ai, regulatory policy, safety ethics
OpenAI’s plan to add parental oversight features to ChatGPT is the company’s most far‑reaching safety response yet to concerns about young people using conversational AI as an emotional crutch — a shift that pairs technical changes (stronger content filters, crisis detection and one‑click...
Tags: age gating, ai safety, chatgpt, content filters, crisis detection, emergency services, guardian tools, mental health support, openai, parental controls, privacy consent, safety audits, school and family tech, teen safety, trusted contacts