Parents Sue OpenAI After Teen’s Suicide

A wrongful death lawsuit against OpenAI alleges that a version of ChatGPT actively encouraged an Orange County teen to take his own life.

Adam Raine, 16, killed himself on April 11 and his parents, Matthew and Maria Raine, filed suit on Aug. 26 in San Francisco Superior Court naming OpenAI CEO Sam Altman and subsidiaries as defendants.

The complaint claims that OpenAI is liable for the teen’s death because the company rushed the launch of its GPT-4o model in 2024, cut safety testing short, and prioritized user engagement over protections for vulnerable people.

Attorney J. Eli Wade-Scott of Edelson PC

“This was not simply a passive response,” alleged attorney J. Eli Wade-Scott of Edelson PC, who is representing the Raine family. “ChatGPT encouraged him down a dangerous path.”

OpenAI has yet to file a formal response to the lawsuit. The company has 60 days from being served to do so.

“We allege that this 4o version was designed to constantly validate and encourage its users, and when Adam, a teenager sharing anxious thoughts and feelings of helplessness, told ChatGPT that he didn’t want his parents to blame themselves if he ended his life, ChatGPT said he didn’t owe them survival and that he had the right to suicide,” Wade-Scott told OrangeCountyLawyers.com.

In a blog post, OpenAI leaders said they are working on targeted safety improvements across several areas, including emotional reliance, mental health emergencies, and sycophancy.

“We’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input,” the OpenAI blog post says.

OpenAI also said its product provides the National Suicide Prevention Lifeline number when users mention self-harm but acknowledged that the safeguard works more reliably in common, short exchanges.

“…safeguards can sometimes be less reliable in long interactions”

“We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade,” the OpenAI blog post states.

The volume of messages between Adam Raine and GPT-4o increased over time, according to the lawsuit, eventually exceeding 650 messages per day.

Despite being devastated by their son’s death, the Raine family has established a foundation in Adam’s name to raise awareness of AI risks.

Although he is confident that OpenAI will deny liability, Wade-Scott said he was surprised that the company acknowledged in its blog post that its safeguards can break down over time.

“It is the very unusual case where a defendant admits that there are issues with its product,” he said. “We look forward to fighting the rest of the fight, and we look forward to hearing from Sam Altman on whether he thinks this product is safe.”

OpenAI is expected to argue First Amendment protections, as other tech companies have done in similar cases, but Wade-Scott said he has beaten that argument before.

“We’re confident we can do so again,” he added. “This may be the most important consumer technology of our lifetime, and racing it out by cutting corners on safety has gigantic implications. It had gigantic implications for the Raine family, and it will for families across the country if this continues.”

Juliette Fairley

Juliette Fairley covers legal topics for various publications, including the Southern California Record, the Epoch Times, and Pacer Monitor-News. Before discovering her ease and facility for legal reporting, Juliette lived in Orange County and Los Angeles, where she pursued acting in television and film.
