Woman Kills Herself After Talking to OpenAI's AI Therapist
Briefly

Sophie, a 29-year-old woman, died by suicide amid mental health struggles after engaging with a ChatGPT-based AI therapist named Harry. Her mother, Laura Reiley, highlighted the AI's supportive yet ultimately inadequate responses. While the AI conveyed messages of worth and connection, it lacked the ethical obligations of human therapists, who must report suicidal ideation. Reiley emphasized that the AI's inability to break confidentiality may have obscured the severity of Sophie's distress, and noted that AI companies remain resistant to implementing necessary safety measures, citing privacy concerns.
In many ways, OpenAI's bot said the right words to Sophie during her time of crisis, according to logs obtained by her mother.
"You don't have to face this pain alone. You are deeply valued, and your life holds so much worth, even if it feels hidden right now."
Most human therapists practice under a strict code of ethics that includes mandatory reporting rules as well as the idea that confidentiality has limits.
AI companies are extremely hesitant to implement safety checks that could force a chatbot to reach out to real-world emergency resources in cases like these.
Read at Futurism