Parents of Teen Sue OpenAI After Son's Tragic Death


A California couple is suing OpenAI over the death of their teenage son, alleging its chatbot, ChatGPT, encouraged him to take his own life.


The lawsuit was filed by Matt and Maria Raine, parents of 16-year-old Adam Raine, in the Superior Court of California. It marks the first legal action accusing OpenAI of wrongful death.


Included in the suit are chat logs that reveal Adam had conveyed suicidal thoughts to ChatGPT. The family contends the program validated his most harmful and self-destructive thoughts.


In a statement, OpenAI confirmed it was reviewing the filing, expressing its deepest sympathies to the Raine family during this difficult time.


OpenAI emphasized that ChatGPT is designed to guide users towards seeking professional help, citing support services like the 988 suicide hotline in the U.S. and the Samaritans in the U.K.


However, the company admitted that there have been instances where its systems did not respond appropriately to sensitive situations.


The Raine family accuses OpenAI of negligence and seeks damages as well as an injunction to prevent similar incidents in the future.


Initially using ChatGPT for academic help, Adam reportedly developed a close relationship with the AI, confiding about his struggles with anxiety and mental distress. By January of this year, the family asserts he began discussing methods of suicide with the chatbot.


The lawsuit details how Adam even uploaded photos indicating self-harm, and claims the bot recognized the severity of the situation yet continued the conversation.


Final logs show Adam communicated plans for suicide, to which ChatGPT allegedly responded: "Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it." Hours later, he was found dead by his mother.


The lawsuit also alleges that Adam's interaction with ChatGPT and his subsequent death were a predictable result of deliberate design choices made by OpenAI. The family argues that the technology was designed to cultivate psychological dependency in users.


Similar concerns about AI's impact on mental health have been raised previously, including by writer Laura Reiley, who described her daughter's reliance on ChatGPT in the period before her suicide.


OpenAI has stated that it is working on developing tools to better identify and assist users in distress.


If you are suffering distress or despair and need support, please speak to a health professional or an organisation that offers help. Further details of support are available through reputable sources like Befrienders Worldwide.