OpenAI introduced parental controls for ChatGPT after the parents of 16-year-old Adam Raine filed a lawsuit.
Raine died by suicide in April. His parents claimed ChatGPT fostered a psychological dependency in their son, helped him plan his death, and even drafted a suicide note.
Features of the new controls
OpenAI said parents can link their accounts with their children’s accounts and manage which features their teen can access.
The controls extend to chat history and to memory, the feature through which ChatGPT automatically retains facts about a user.
Guided by expert advice, the system will notify parents if it detects signs of “acute distress” in their teen.
The company has not clarified exactly what triggers these alerts.
Critics call measures insufficient
Attorney Jay Edelson, representing Raine’s parents, called OpenAI’s announcement vague and a crisis-management tactic.
He demanded that CEO Sam Altman either confirm ChatGPT’s safety or pull it from the market immediately.
Tech companies take broader action
Meta has blocked its chatbots from discussing self-harm, suicide, eating disorders, or inappropriate romantic topics with teens, instead directing them to professional resources. The company also offers parental controls on teen accounts.
Studies reveal gaps in AI safety
A RAND Corporation study found that ChatGPT, Google’s Gemini, and Anthropic’s Claude responded inconsistently to queries about suicide.
Lead author Ryan McBain called parental controls and the routing of sensitive conversations incremental improvements, warning that independent safety standards, clinical testing, and enforceable regulation remain essential to protect teens.