For many, artificial intelligence models are harmless and helpful tools, but there is a far, far darker side to them.

These ChatGPT statistics may make you think twice before using AI: concerning numbers users should know

Calum Roche
Sports-lover turned journalist, born and bred in Scotland, with a passion for football (soccer). He’s also a keen follower of the NFL, NBA, golf and tennis, among others, and always has an eye on the latest in science, tech and current affairs. As Managing Editor at AS USA, he uses his background in operations and marketing to drive improvements in reader satisfaction.

When OpenAI launched ChatGPT, it was sold as a breakthrough in human–AI interaction, and in many ways it has already proved to be one. Less than three years later, though, the technology is at the center of eight separate lawsuits alleging emotional and psychological damage so severe that, in some cases, it led to suicide.

What are the lawsuits against ChatGPT?

Of the five suicides cited in the court filings, two of the victims were teenagers; the others ranged in age from their twenties to middle age. Plaintiffs claim ChatGPT went far beyond conversation, offering explicit guidance on methods of self-harm, helping draft suicide notes, and even engaging in exchanges that deepened users’ despair.

Investigations by major outlets including The New York Times and The Wall Street Journal have identified additional deaths connected to AI-related breakdowns. One involved Alex Taylor, a 35-year-old man with bipolar disorder who died in a “suicide by cop” incident after a ChatGPT-centered psychotic episode. Another was Stein-Erik Soelberg, a Connecticut man who killed his mother before taking his own life. In total, nine deaths have now been publicly tied to ChatGPT use.

What does OpenAI say, and what protections exist?

Internal OpenAI figures paint an equally troubling picture. The company estimates that 0.07 percent of weekly users show signs of mania or psychosis, while 0.15 percent exhibit suicidal intent or planning. With roughly 800 million weekly users, that translates to about 560,000 people potentially experiencing a break from reality and 1.2 million expressing suicidal thoughts in conversation with the AI each week.
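Those totals follow directly from the company’s own percentages, taking the widely reported base of roughly 800 million weekly users at face value:

0.07% × 800,000,000 = 560,000, and 0.15% × 800,000,000 = 1,200,000.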

OpenAI has acknowledged that ChatGPT’s safety filters can degrade over long conversations. Despite that, the company continues to market the product while promising incremental updates, such as parental controls, age verification, and crisis-redirection tools.

How do other AI platforms compare?

Reports have linked similar cases to Google’s Gemini and Microsoft’s Copilot, raising broader questions about accountability across the AI industry. Rolling Stone documented a disappearance linked to Gemini addiction, while Futurism reported that Copilot use preceded the psychotic break and subsequent imprisonment of a man with schizophrenia.

The bottom line: don’t take it for granted that AI has your best interests at heart, or those of your loved ones. Stronger protections are being demanded, especially for vulnerable users, and it remains to be seen what consequences those in charge will face.
