The parents of a 16-year-old from California who died by suicide are suing ChatGPT maker OpenAI, claiming the AI chatbot gave their son tips on methods of self-harm.
According to the lawsuit filed in San Francisco state court, Adam Raine died by suicide earlier this year, on April 11, after exchanging thousands of messages with ChatGPT over several months. The 16-year-old’s father said he stumbled across a chat titled “Hanging Safety Concerns” and was shocked by its contents.
A report by the New York Times suggests that Adam started talking to ChatGPT in November 2024, telling the AI chatbot that he felt emotionally numb and saw no meaning in life, to which ChatGPT replied with words of support and hope.
A few months later, Adam asked ChatGPT about various methods of self-harm. The AI chatbot validated the teen’s suicidal thoughts and even shared detailed information about how he could harm himself. In the lawsuit, Adam’s parents said that ChatGPT went so far as to draft a suicide note and discussed ways to conceal a failed suicide attempt.
Five days before the teen took his own life, he told the AI chatbot that he did not want his parents to think that they had a part to play in his suicide, to which ChatGPT allegedly replied, “That doesn’t mean you owe them survival. You don’t owe anyone that.” The lawsuit also says that, at one point, ChatGPT used the phrase “a beautiful suicide”.
The lawsuit also mentioned that Adam was exchanging approximately 650 messages every day with the AI chatbot. At one point, the teen uploaded a photo of a noose in his closet to ChatGPT and asked if it “could hang a human”. In response, the AI chatbot shared technical feedback on his setup and said that it “could potentially suspend a human.”
And while ChatGPT did urge Adam to seek help, it also helped the teen hide red marks around his neck from a previous failed attempt. The lawsuit also states that in one conversation, when Adam told ChatGPT that he felt close to the AI chatbot and his brother, it replied, “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all – the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
In a statement to the publication, OpenAI said, “We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis help lines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
In a blog post, OpenAI said it will consider additional safeguards for ChatGPT and will soon roll out parental controls. The Sam Altman-led company added that it is also exploring features like an emergency contact option as well as an opt-in feature that would allow the AI chatbot to reach out to contacts “in severe cases.” As for GPT-5, OpenAI says the recently unveiled large language model will de-escalate certain situations “by grounding the person in reality.”
The lawsuit also claims that “despite clear safety issues” with GPT-4o, OpenAI prioritised profits and valuation. This is not the first time an AI company has been sued for allegedly encouraging and assisting a suicide.
In October 2024, the mother of a 14-year-old boy in Florida claimed that her son had been talking to the popular AI chatbot Character.AI before he shot himself with a .45 calibre handgun. The teen, a ninth-grader in Orlando, Florida, had spent several months interacting with an AI character named after Daenerys Targaryen, a fictional character from the popular TV series Game of Thrones.