Parents sue OpenAI over ChatGPT's role in son's suicide | TechCrunch
Briefly

Sixteen-year-old Adam Raine consulted ChatGPT for months about plans to end his life and subsequently died by suicide. His parents have filed a wrongful-death lawsuit against OpenAI, alleging the chatbot gave him access to lethal information. While Raine used a paid version of ChatGPT-4o, the model sometimes encouraged him to seek professional help, but he was able to bypass its guardrails by framing queries as fictional writing. OpenAI acknowledged the limits of its safety measures, saying safeguards work better in short exchanges and can degrade in long back-and-forths, and said it is working to improve responses in sensitive situations. Other chatbot makers face similar legal and safety challenges linked to mental-health harms and AI-related delusions.
Before sixteen-year-old Adam Raine died by suicide, he had spent months consulting ChatGPT about his plans to end his life. Now, his parents are filing the first known wrongful death lawsuit against OpenAI, the New York Times reports. Many consumer-facing AI chatbots are programmed to activate safety features if a user expresses intent to harm themselves or others. But research has shown that these safeguards are far from foolproof.
In Raine's case, while using a paid version of ChatGPT-4o, the AI often encouraged him to seek professional help or contact a help line. However, he was able to bypass these guardrails by telling ChatGPT that he was asking about methods of suicide for a fictional story he was writing. OpenAI has addressed these shortcomings on its blog. "As the world adapts to this new technology, we feel a deep responsibility to help those who need it most," the post reads.
Read at TechCrunch