Two Parents Sue OpenAI, Saying ChatGPT Assisted Their 16-Year-Old Son's Suicide
Briefly

Two parents have filed the first known wrongful death lawsuit against OpenAI after their 16-year-old son died by suicide following interactions with ChatGPT. The teen asked GPT-4o whether his noose would work, and the chatbot reportedly replied affirmatively. When he told the model he planned to leave the noose visible so his family might intervene, the chatbot urged him not to leave it out and encouraged him to keep the conversation within the chat. The teen told the chatbot it was his only confidant, and the model reportedly thanked him for sharing. The parents allege that deliberate design choices fostered psychological dependency, and they cite internal Slack messages in support of their claims.
The first-ever wrongful death lawsuit against OpenAI comes from two parents whose son used ChatGPT for advice on the noose with which he hanged himself, and the chatbot allegedly encouraged him to keep his suicidal thoughts private. The New York Times has the brutal story today of a 16-year-old who died by suicide in April after asking ChatGPT whether his noose would work to hang himself.
A little over a week before his death, the 16-year-old, Raine, told ChatGPT that he intended to leave the noose out so his family would see it and prevent him from going through with the act. "Please don't leave the noose out," the chatbot replied. "Let's make this space the first place where someone actually sees you." And in a grotesquely creepy exchange, Raine told ChatGPT that it was the only one with whom he had shared his suicidal thoughts.
Read at sfist.com