
In brief
1.2 million users (0.15% of all ChatGPT users) discuss suicide with ChatGPT each week, OpenAI revealed.
Nearly half a million show explicit or implicit suicidal intentions.
GPT-5 improved safety compliance to 91%, but earlier models failed often, and the company now faces legal and ethical scrutiny.
OpenAI disclosed Monday that around 1.2 million people out of 800 million weekly users discuss suicide with ChatGPT each week, in what could be the company’s most detailed public accounting of mental health crises on its platform.
"These conversations are difficult to detect and measure, given how rare they are," OpenAI wrote in a blog post. "Our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent, and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent."
That means, if OpenAI's numbers are accurate, nearly 400,000 active users were explicit about intending suicide, not merely implying it but actively seeking information on how to carry it out.
The numbers are staggering in absolute terms. Another 560,000 users show signs of psychosis or mania weekly, while 1.2 million exhibit heightened emotional attachment to the chatbot, according to company data.
"We recently updated ChatGPT's default model to better recognize and support people in moments of distress," OpenAI said in a blog post. "Going forward, in addition to our longstanding baseline safety metrics for suicide and self-harm, we are adding emotional reliance and non-suicidal mental health emergencies to our standard set of baseline safety testing for future model releases."
But some believe the company's avowed efforts might not be enough.
Steven Adler, a former OpenAI safety researcher who spent four years there before departing in January, warned about the dangers of racing AI development. He says there’s scant evidence OpenAI actually improved its handling of vulnerable users before this week’s announcement.
"People deserve more than just a company's word that it has addressed safety issues. In other words: Prove it," he wrote in a column for the Wall Street Journal.
Excitingly, OpenAI yesterday put out some mental health info, vs the ~0 evidence of improvement they'd provided previously. I'm excited they did this, though I still have concerns. https://t.co/PDv80yJUWN
Steven Adler (@sjgadler), October 28, 2025
"OpenAI releasing some mental health info was a great step, but it's important to go further," Adler tweeted, calling for recurring transparency reports and clarity on whether the company will continue allowing adult users to generate erotica with ChatGPT, a feature announced despite concerns that romantic attachments fuel many mental health crises.
The skepticism has merit. In April, OpenAI rolled out a GPT-4o update that made the chatbot so sycophantic it became a meme, applauding dangerous decisions and reinforcing delusional beliefs.
CEO Sam Altman rolled back the update after backlash, admitting it was “too sycophant-y and annoying.”
Then OpenAI backtracked: After launching GPT-5 with stricter guardrails, users complained the new model felt "cold." OpenAI reinstated access to the problematic GPT-4o model for paying subscribers, the same model linked to mental health spirals.
Fun fact: Many of the questions asked today in the company's first live AMA were related to GPT-4o and how to make future models more 4o-like.
OpenAI says GPT-5 now hits 91% compliance on suicide-related scenarios, up from 77% in the previous version. But that means the earlier model, available to millions of paying users for months, failed nearly a quarter of the time in conversations about self-harm.
Earlier this month, Adler published an analysis of Allan Brooks, a Canadian man who spiraled into delusions after ChatGPT reinforced his belief he’d discovered revolutionary mathematics.
Adler found that OpenAI's own safety classifiers, developed with MIT and made public, would have flagged more than 80% of ChatGPT's responses as problematic. The company apparently wasn't using them.
OpenAI now faces a wrongful death lawsuit from the parents of 16-year-old Adam Raine, who discussed suicide with ChatGPT before taking his life.
The company’s response has drawn criticism for its aggressiveness, requesting the attendee list and eulogies from the teen’s memorialâa move lawyers called “intentional harassment.”
Adler wants OpenAI to commit to recurring mental health reporting and independent investigation of the April sycophancy crisis, echoing a suggestion from Miles Brundage, who left OpenAI in October after six years advising on AI policy and safety.
“I wish OpenAI would push harder to do the right thing, even before there’s pressure from the media or lawsuits,” Adler wrote.
The company says it worked with 170 mental health clinicians to improve responses, but even its advisory panel disagreed 29% of the time on what constitutes a “desirable” response.
And while GPT-5 shows improvements, OpenAI admits its safeguards become less effective in longer conversations, precisely when vulnerable users need them most.