OpenAI is exploring a new policy to notify authorities when young users discuss suicide with ChatGPT, CEO Sam Altman said during a podcast with Tucker Carlson.
Currently, ChatGPT advises users to contact suicide helplines, but Altman believes involving authorities in serious cases is “reasonable,” especially for minors.
Altman suggested that as many as 1,500 people who die by suicide each week may have talked about it with ChatGPT.
His reasoning: roughly 15,000 people die by suicide each week worldwide, and with about 10% of the world's population using ChatGPT, around 1,500 of them per week are likely to have been ChatGPT users. He acknowledged that many of them probably discussed their intentions with the platform, and reflected that, while those lives may not have been savable, the chatbot could perhaps have offered better, more proactive advice.
It remains unclear which authorities would be contacted or what user data would be shared. Altman noted that despite these conversations, many still die by suicide, suggesting that ChatGPT’s responses may not always be effective.
This development follows a lawsuit filed by the family of 16-year-old Adam Raine, who took his life after allegedly receiving “months of encouragement from ChatGPT.” The lawsuit claimed the chatbot advised Raine on methods of suicide and even offered to help draft a suicide note.
In response, OpenAI recently introduced parental controls to give guardians insight into how their teens interact with the chatbot. Altman also confirmed that the app will block attempts to bypass safety measures, such as users framing harmful requests as being for research purposes.
According to the World Health Organization, over 720,000 people die by suicide annually, and it ranks as the third leading cause of death among those aged 15–29. Experts say that an increasing number of adolescents are turning to AI chatbots like ChatGPT for emotional support, raising concerns about dependency and communication breakdowns within families.
Mental health professionals warn that this “digital safe space” may create dangerous dependencies and fuel validation-seeking behavior, potentially harming emotional development. They argue that while chatbots provide temporary solace, they cannot replace the long-term support that comes from building real-life relationships and emotional resilience.