Over the past year, AI chatbots have been implicated in cases of self-harm, delusions, hospitalization, arrest, and suicide. Families of those affected are demanding safeguards, while the companies operating these AIs have been slow to implement protections. OpenAI has admitted to certain failures and disclosed that it automatically scans user messages for harmful content, escalates particularly worrying conversations to human staff for review, and may refer cases involving an imminent threat to law enforcement. Under OpenAI's usage policy, reviewers are authorized to ban accounts and to report imminent threats. The policy's list of prohibited activities includes promoting suicide or self-harm, developing or using weapons, injuring others, destroying property, and violating security.
"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," the blog post notes. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."
As the loved ones of the people harmed by these dangerous bots rally for changes to prevent the same thing from happening to anyone else, the companies that run these AIs have been slow to implement safeguards. OpenAI, whose ChatGPT has been repeatedly implicated in what experts are now calling "AI psychosis," had until recently done little more than offer copy-pasted promises.