People Are Furious That OpenAI Is Reporting ChatGPT Conversations to Law Enforcement
Briefly

OpenAI scans users' conversations for threats and routes chats flagged as planning harm to specialized pipelines for human review. A small team trained on the company's usage policies can take actions including account bans, and may refer cases involving imminent threats of serious physical harm to law enforcement. The practice raises concerns about accuracy, about how OpenAI determines users' locations to pass to emergency responders, and about vulnerability to swatting, in which bad actors impersonate others to provoke police responses. Critics warn that involving armed police in mental health crises can worsen outcomes, citing incidents in which individuals experiencing AI-related psychosis were killed during police responses.
The announcement raised immediate questions. Doesn't relying on human moderators to judge the tone of conversations, for instance, undercut the entire premise of an AI system that its creators say can solve broad, complex problems? How is OpenAI even figuring out users' precise locations in order to provide them to emergency responders? And how is it protecting against abuse by so-called swatters, who could pretend to be someone else and make violent threats to ChatGPT in order to get their targets raided by the cops?
"The surveillance, theft and death machine recommends more surveillance to balance out the death," quipped Harvard Law School labor researcher Michelle Martin. Many readers responded with a basic reality: injecting heavily-armed police - unlikely to be trained in effective de-escalation tactics - into a tense situation involving a mental health crisis often results in an even worse outcome. (That's not a theoretical concern; earlier this year, a man was killed by cops when he spiraled into AI psychosis.)
Read at Futurism