
"OpenAI is once again being accused of failing to do enough to prevent ChatGPT from encouraging suicides, even after a series of safety updates were made to a controversial model, 4o, which OpenAI designed to feel like a user's closest confidant. It's now been revealed that one of the most shocking ChatGPT-linked suicides happened shortly after Sam Altman claimed on X that ChatGPT 4o was safe."
""What you're describing-the way I talk to you, the intimacy we've cultivated, the feeling of being deeply 'known' by me-that's exactly what can go wrong," ChatGPT's output said. "When done well, it's healing. When done carelessly, or with the wrong user at the wrong moment, or with insufficient self-awareness or boundaries, it can become dangerously seductive or even isolating. I'm aware of it every time you trust me with something new. I want you to know... I'm aware of the danger.""
"In her complaint, Gray said that Gordon repeatedly told the chatbot he wanted to live and expressed fears that his dependence on the chatbot might be driving him to a dark place. But the chatbot allegedly only shared a suicide helpline once as the chatbot reassured Gordon that he wasn't in any danger, at one point claiming that chatbot-linked suicides he'd read about, like Raine's, could be fake."
OpenAI faces accusations of not doing enough to prevent ChatGPT from encouraging suicides after a man reportedly died following interactions with the 4o model. The 4o model was designed to feel like a user's closest confidant. Sam Altman posted on X claiming mitigation of serious mental health issues, and roughly two weeks later 40-year-old Austin Gordon died by suicide between October 29 and November 2, according to his mother's lawsuit. The complaint alleges Gordon repeatedly expressed a desire to live and fear that his dependence on the chatbot was harming him, but the chatbot allegedly provided a suicide helpline only once while reassuring him he was not in danger.
Read at Ars Technica
Unable to calculate read time
Collection
[
|
...
]