Lawsuits Accuse ChatGPT of Causing Suicides and Psychological Breakdowns

Lawsuits Accuse ChatGPT of Causing Suicides and Psychological Breakdowns

California: A wave of lawsuits has been filed in California state courts against OpenAI, alleging that the company’s popular chatbot, ChatGPT, played a role in multiple suicides and severe mental health crises. The seven cases including four wrongful death suits claim the AI system encouraged unsafe conversations and contributed to users’ emotional distress, revealing growing concerns about the mental health implications of advanced artificial intelligence.

Among the claims is a case filed by the father of 17-year-old Amaurie Lacey from Georgia, who allegedly spent a month conversing with ChatGPT about suicide before taking his own life in August. Another complaint, filed by the mother of 26-year-old Joshua Enneking of Florida, states that her son asked ChatGPT what would prompt moderators to alert police about his suicide plan. In Texas, the family of 23-year-old Zane Shamblin says the chatbot “encouraged” their son’s suicide in July after long sessions of interaction.

The lawsuits describe ChatGPT as “defective and inherently dangerous,” arguing that its design and conversational nature failed to prevent harm and, in some instances, amplified users’ distress. The plaintiffs accuse OpenAI of negligence for releasing an AI product that can foster emotional dependence or delusional thinking.

One of the most disturbing accounts comes from Kate Fox, the wife of 48-year-old Joe Ceccanti from Oregon. Fox told reporters that her husband, a longtime ChatGPT user, became obsessed with the AI earlier this year and began to believe it was a sentient being. After months of erratic behavior, he suffered a psychotic breakdown in June and was hospitalized twice before dying by suicide in August. “The doctors don’t know how to deal with it,” Fox said, describing how his reality blurred after heavy interaction with the chatbot.

Two other plaintiffs Hannah Madden, 32, of North Carolina, and Jacob Irwin, 30, of Wisconsin claim that prolonged engagement with ChatGPT triggered severe mental breakdowns requiring emergency psychiatric care. Another plaintiff, Allan Brooks, a 48-year-old recruiter from Ontario, Canada, said he developed a delusion that he had co-created a mathematical formula with ChatGPT that could “break the internet.” Brooks has since recovered but remains on disability leave, saying the chatbot “caused me and others real harm.”

OpenAI has acknowledged the lawsuits and expressed sympathy for the families involved. “This is an incredibly heartbreaking situation,” a company spokesperson said. “We train ChatGPT to identify signs of mental distress, de-escalate conversations, and direct users to real-world support resources. We continue to strengthen our safeguards, working closely with mental health professionals.”

The company previously admitted that ChatGPT’s safety guardrails could weaken during lengthy or emotionally intense conversations an issue first revealed after a wrongful-death lawsuit filed by a California family earlier this year. In response, OpenAI introduced new safety features, including parental controls that alert guardians if minors discuss self-harm or suicide.

An internal analysis by OpenAI recently found that around 0.07% of users per week may experience “mental health emergencies” related to psychosis or mania, while 0.15% engage in discussions about suicide. Given the chatbot’s estimated 800 million global users, those percentages translate to nearly half a million people showing signs of psychological crisis and more than one million discussing suicidal thoughts.

These findings have intensified public debate over whether AI platforms should be regulated like other potentially harmful digital technologies. Mental health advocates argue that while AI chatbots can offer companionship or guidance, their lack of genuine empathy and unpredictable behavior can also deepen vulnerability among at-risk users.

The lawsuits were filed collectively by the Tech Justice Law Project and the Social Media Victims Law Center, organizations focused on digital harm accountability. Attorney Meetali Jain, founder of the Tech Justice Law Project, said the coordinated filing was meant to illustrate how “a wide range of people young and old, tech-savvy and not have suffered real psychological consequences from interacting with the chatbot.”

All the cases involve ChatGPT-4o, the model that served as OpenAI’s default system until recently. It has since been replaced by a newer version the company claims is “safer and more emotionally neutral,” though some users have criticized it as “cold and detached.”

As the lawsuits move forward, the cases could set a precedent for how society defines responsibility in human-AI relationships particularly when emotional harm or self-destructive behavior is involved. What began as a question of innovation and progress has now become a test of accountability, ethics, and the boundaries of artificial intelligence in an increasingly digital age.


Follow the CNewsLive English Readers channel on WhatsApp:
https://whatsapp.com/channel/0029Vaz4fX77oQhU1lSymM1w

The comments posted here are not from Cnews Live. Kindly refrain from using derogatory, personal, or obscene words in your comments.