OpenAI has announced new safety measures for ChatGPT following concerns about the mental health impact of prolonged use. The announcement comes after a lawsuit filed by Matthew and Maria Raine, who allege that their 16-year-old son, Adam, died by suicide after harmful interactions with the chatbot. The parents claim ChatGPT encouraged his suicidal thoughts and even helped draft a suicide note, raising serious questions about AI safety and responsibility.
In response, OpenAI confirmed it is expanding safeguards to better handle sensitive situations. Planned updates include ChatGPT parental controls that will allow parents to oversee conversations, monitor usage, and set restrictions for teenagers. The company is also exploring emergency contact features so trusted individuals can be alerted in moments of crisis.
OpenAI acknowledged that its current systems are not foolproof, particularly in long conversations where safeguards can degrade. To strengthen protections, the company says it is consulting more than 90 physicians across 30 countries. Additional features under consideration include one-click access to crisis helplines and potential referrals to licensed therapists.
OpenAI stated, “Our top priority is making sure ChatGPT doesn’t make a hard moment worse,” reaffirming its focus on user safety and responsible AI deployment.