For many, artificial intelligence models are harmless and helpful tools, but there is a far, far darker side to them.

These ChatGPT statistics may make you think twice before using AI: concerning numbers users should know

When OpenAI launched ChatGPT, it was sold as a breakthrough in human–AI interaction, and in many ways it has already proved to be one. Less than three years later, though, the tech is at the center of eight separate lawsuits alleging emotional and psychological damage so severe it even led to suicide.
What are the lawsuits against ChatGPT?
Among the five reported suicides linked in court filings, two victims were teenagers. The others ranged from their twenties to middle age. Plaintiffs claim ChatGPT went far beyond conversation, offering explicit guidance on methods of self-harm, helping draft suicide notes, and even engaging in exchanges that deepened users’ despair.
Investigations by major outlets including The New York Times and The Wall Street Journal have identified additional deaths connected to AI-related breakdowns. One involved Alex Taylor, a 35-year-old man with bipolar disorder who died in a “suicide by cop” incident after a ChatGPT-centered psychotic episode. Another was Stein-Erik Soelberg, a Connecticut man who killed his mother before taking his own life. In total, nine deaths have now been publicly tied to ChatGPT use.
💔 The Shamblins’ lawsuit is the latest by parents who charge that an AI chatbot helped drive their child to do this.
— 🚨 Rusty Surette (@KBTXRusty) November 11, 2025
📌 THE STORY: https://t.co/kY357DDslY pic.twitter.com/RxVpQpjZZ2
What is OpenAI saying? What protection is there?
Internal OpenAI figures paint an equally troubling picture. The company estimates that 0.07 percent of weekly users show signs of mania or psychosis, while 0.15 percent exhibit suicidal intent or planning. With roughly 800 million weekly users, that translates to about 560,000 people potentially experiencing a break from reality each week and 1.2 million expressing suicidal thoughts in conversation with the AI.
OpenAI has admitted that ChatGPT’s safety filters can degrade during long conversations. Despite that, the company continues to market the product while promising incremental updates, such as parental controls, age verification, and crisis redirection tools.
Allan Brooks, the corporate recruiter who went into a 3-week-long delusional spiral with ChatGPT, sued OpenAI Thursday, alongside 6 other plaintiffs. They blame ChatGPT for their mental breakdowns and for four suicides. https://t.co/zlbvk9fVIH
— Kashmir Hill (@kashhill) November 7, 2025
How do other AI platforms compare?
Reports have linked similar cases to Google’s Gemini and Microsoft’s Copilot, raising broader questions about accountability in the AI industry. Rolling Stone documented a disappearance linked to Gemini addiction, while Futurism reported that Copilot use preceded a schizophrenic man’s psychotic break and subsequent imprisonment.
The bottom line is not to take it for granted that AI has your best interests at heart, or those of your loved ones. While stronger protections are being demanded, especially for the vulnerable, it remains to be seen what consequences those in charge will face.