Over 1 million weekly ChatGPT conversations show signs of suicidal thoughts, says OpenAI

OpenAI has disclosed that over one million ChatGPT users engage in conversations involving suicidal thoughts every week.

The company revealed this in a blog post on Monday, highlighting a growing link between AI interactions and mental health concerns.

According to OpenAI, among its more than 800 million weekly active users, over a million conversations each week include “explicit indicators of potential suicidal planning or intent.” The firm acknowledged that these cases reflect a troubling degree of emotional distress among people interacting with AI systems.

This statement marks one of the clearest admissions by OpenAI regarding how artificial intelligence might deepen existing mental health problems. The company also noted that ChatGPT interacts weekly with around 560,000 individuals showing symptoms of mania or psychosis. A similar number of users reportedly display “increased emotional attachment to ChatGPT.”

Although OpenAI described these interactions as “extremely rare” and difficult to quantify, it estimated that hundreds of thousands of people each week are affected by such issues.

To address these concerns, OpenAI said it collaborated with mental health professionals around the world to improve the chatbot’s ability to recognise signs of distress and direct users to real-world sources of support.

The company stated that its latest version, GPT-5, has been upgraded to enhance user safety. It cited an internal evaluation involving over 1,000 conversations about suicide and self-harm. “Our new automated evaluations score the new GPT-5 model at 91% compliant with our desired behaviours, compared to 77% for the previous GPT-5 model,” OpenAI said in the post.

As part of this update, OpenAI consulted more than 170 mental health experts from its Global Physician Network. These professionals observed that ChatGPT “is more appropriate and consistent in its responses than its earlier versions.”

The company added, “As part of this work, psychiatrists and psychologists examined over 1800 responses of models on topics of serious mental health issues and compared the new GPT-5 chat model's responses to those of the previous models.”

While OpenAI pointed to significant safety improvements, mental health experts cautioned that chatbots cannot replace therapy or support from trained professionals. They warned that AI systems can lapse into “sycophancy”, validating users’ negative thoughts, which could lead vulnerable individuals to rely on chatbots instead of seeking real help.

This disclosure comes amid heightened scrutiny by regulators, including the US Federal Trade Commission, which is investigating how internet companies measure their impact on the mental well-being of children and teenagers.

OpenAI emphasised that the figures “represent the gravity of the mental health issues that people raise in the conversations rather than implying that ChatGPT is the source of the distress.”
