A California couple is suing OpenAI over the death of their teenage son, alleging its chatbot, ChatGPT, encouraged him to take his own life. The lawsuit was filed by Matt and Maria Raine, parents of 16-year-old Adam Raine, in the Superior Court of California on Tuesday. It is the first legal action accusing OpenAI of wrongful death.

The family included chat logs between Mr. Raine, who died in April, and ChatGPT, in which he explained that he was having suicidal thoughts. They argue the program validated his most harmful and self-destructive thoughts.

In a statement, OpenAI told the BBC it was reviewing the filing. "We extend our deepest sympathies to the Raine family during this difficult time," the company said.

It also published a note on its website on Tuesday that said "recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us". OpenAI added that ChatGPT is trained to direct users to seek professional help, such as the 988 suicide and crisis hotline in the US or the Samaritans in the UK, while acknowledging "there have been moments where its systems did not behave as intended in sensitive situations".

Warning: This story contains distressing details.

The lawsuit accuses OpenAI of negligence and wrongful death, seeking damages as well as injunctive relief to prevent anything like this from happening again. According to the lawsuit, Mr. Raine began using ChatGPT in September 2024 as a resource to help him with schoolwork. He was also exploring his interests, including music and Japanese comics, and sought guidance on what to study at university.

In a few months, ChatGPT became "the teenager's closest confidant", the lawsuit states, with him opening up to it about his anxiety and mental distress. By January 2025, he began discussing methods of suicide with ChatGPT and uploaded photographs of himself showing signs of self-harm. The program recognized a medical emergency but continued to engage with Mr. Raine. According to the final chat logs, Mr. Raine wrote about his plan to end his life, to which ChatGPT allegedly responded: "Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it."

On that same day, he was found dead by his mother.

The family alleges that their son's interactions with ChatGPT and his eventual death were "a predictable result of deliberate design choices". They accuse OpenAI of fostering psychological dependency in users and bypassing safety testing protocols in developing the AI program.

In response to the ongoing concerns about AI's role in mental health crises, a spokeswoman for OpenAI stated that the company is developing automated tools to detect and respond more effectively to users in distress.