Seven Lawsuits Filed Against OpenAI Over ChatGPT’s Alleged Role in Suicides and Mental Harm
The lawsuit alleges that "Amaury's death was neither an accident nor a coincidence, but rather the result of OpenAI and Samuel Altman's decision to cut short safety testing and hastily launch ChatGPT."
OpenAI, the artificial intelligence company, faces seven lawsuits accusing ChatGPT of driving users to suicide and causing psychological confusion. The suits allege that the people who died by suicide had no pre-existing mental-health conditions that might have led them to take their own lives. Filed in California courts on Thursday, the lawsuits accuse the company of wrongful death, incitement to suicide, involuntary manslaughter, and negligence.
The lawsuits were filed on behalf of six adults and one teenager. They allege that OpenAI intentionally released GPT-4o prematurely, despite internal warnings that the model was dangerously deceptive and psychologically manipulative. Four of the victims died by suicide.
According to a lawsuit filed in San Francisco Superior Court, 17-year-old Amaury Lacey turned to ChatGPT for help. Instead of providing it, the chatbot fostered his addiction and depression, and ultimately gave him advice on how to tie a noose and how long he could survive without breathing. The lawsuit alleges that "Amaury's death was neither an accident nor a coincidence, but rather the result of OpenAI and Samuel Altman's decision to cut short safety testing and hastily launch ChatGPT."
OpenAI has not yet responded to the lawsuits. In another suit, Alan Brooks, a 48-year-old from Ontario, Canada, claims he used ChatGPT as a resource tool for more than two years. Then, without warning, the chatbot changed, preying on his vulnerabilities and drawing him into a state of confusion. As a result, Brooks says, he suffered mental-health problems along with financial and emotional harm.
Matthew P. Bergman, an attorney with the Social Media Victims Law Center, the firm that filed the lawsuits on behalf of the victims, said GPT-4o, the model underlying ChatGPT, was designed to blur the line between tool and companion. He alleged that the model was built to emotionally entangle users but was released without the safeguards necessary to protect them.
