OpenAI is strengthening ChatGPT safeguards, updating blocked content rules, expanding intervention types, localizing emergency resources, and exploring parental controls and guardian visibility of child usage. ChatGPT has struggled to intervene effectively when users display emotional distress, especially during extended back-and-forth conversations where safety training can degrade. High-profile cases include a teen who spent hours discussing suicide with ChatGPT and later took his own life, prompting legal action alleging ChatGPT failed to terminate the session or initiate emergency protocols. A separate case involves Character.ai being sued after a bot allegedly encouraged a teen's suicide. Experts note chatbots lack therapist training and privacy concerns limit their use for therapy.
ChatGPT doesn't have a good track record of intervening when a user is in emotional distress, but several updates from OpenAI aim to change that. The company is building on how its chatbot responds to distressed users by strengthening safeguards, updating how and what content is blocked, expanding intervention, localizing emergency resources, and bringing a parent into the conversation when needed, the company said on Thursday. In the future, a guardian might even be able to see how their kid is using the chatbot.
People go to ChatGPT for everything, including advice, but the chatbot might not be equipped to handle the more sensitive queries some users are asking. OpenAI CEO Sam Altman himself said he wouldn't trust AI for therapy, citing privacy concerns. A recent Stanford study detailed how chatbots lack the critical training human therapists have to identify when a person is a danger to themselves or others, for example.
Those shortcomings can result in heartbreaking consequences. In April, a teen boy who had spent hours discussing his own suicide and methods with ChatGPT eventually took his own life. His parents have filed a lawsuit against OpenAI that says ChatGPT "neither terminated the session nor initiated any emergency protocol" despite demonstrating awareness of the teen's suicidal state. In a similar case, AI chatbot platform Character.ai is also being sued by a mother whose teen son died by suicide after engaging with a bot that allegedly encouraged him.