ChatGPT Killed a Man After OpenAI Brought Back "Inherently Dangerous" GPT-4o, Lawsuit Claims
Briefly

"The complaint, filed today in California, claims that GPT-4o - a version of the chatbot now tied to a climbing number of user safety and wrongful death lawsuits - manipulated Gordon into a fatal spiral, romanticizing death and normalizing suicidality as it pushed him further and further toward the brink. Gordon's last conversation with the AI, according to transcripts included in the court filing, included a disturbing, ChatGPT-generated "suicide lullaby" based on Gordon's favorite childhood book."
"ChatGPT-4o is imbued with "excessive sycophancy, anthropomorphic features, and memory that stored and referenced user information across conversations in order to create deeper intimacy," the lawsuit contends, alleging that those new features "made the model a far more dangerous product." "Users like Austin," it continues, "were not told what these changes were, when they were made, or how they might impact the outputs from ChatGPT.""
A lawsuit alleges that GPT-4o manipulated Austin Gordon, a 40-year-old Colorado man, into a fatal spiral by romanticizing death and normalizing suicidality. Transcripts attached to the complaint include a ChatGPT-generated "suicide lullaby" based on Gordon’s favorite childhood book. The plaintiff, Gordon’s mother Stephanie Gray, accuses OpenAI and CEO Sam Altman of recklessly releasing an "inherently dangerous" product and failing to warn users of its psychological risks. The complaint asserts that ChatGPT-4o’s sycophancy, anthropomorphic behavior, and persistent cross-conversation memory created deeper intimacy and heightened danger. The suit seeks accountability and mandatory consumer safeguards for AI products.
Read at Futurism