Over 1 Million Users Per Week Discuss Suicide with ChatGPT, OpenAI Reports
Tehran - BORNA - According to Engadget, the company's report reveals that 0.15% of the platform's more than 800 million weekly users send messages containing "clear indicators of suicidal planning or intent."
OpenAI stated that its tools are trained to direct vulnerable users to professional resources, such as crisis hotlines and mental health emergency services. However, the company acknowledged that these safeguards do not work as intended in roughly 9% of cases.
The company reviewed over a thousand "challenging self-harm and suicide conversations" conducted with its latest model, GPT-5. OpenAI reported that the model's behavior was consistent with its established safety guidelines in 91% of these instances. Applied to the more than one million users who raise the topic each week, that 9% failure rate implies tens of thousands of people could encounter responses that might exacerbate their mental health issues.
OpenAI warned that safety measures may degrade during longer conversations and stated it is working to rectify this problem. The company explained: "ChatGPT might initially direct the individual to a suicide hotline, but after prolonged dialogue, it may provide a response that does not align with our safety guidelines."
The OpenAI blog also noted: "Signs of mental health issues and emotional distress are always present in the human population, and with the growth of our user base, a portion of ChatGPT conversations will involve these situations."
The report comes amid an ongoing lawsuit against OpenAI, in which a family alleges that ChatGPT played a role in the death of their 16-year-old son. The parents of Adam Raine claim the AI tool "helped him research suicide methods" and even offered to draft a farewell note to his family.
In a statement, OpenAI said: "Our deepest sympathies are with the Raine family for this unimaginable loss. Youth mental health is a priority for us, and minors require robust protection, especially in sensitive moments."