OpenAI seeks new safety chief as Altman flags growing risks
Briefly

"How'd you like to earn more than half a million dollars working for one of the world's fastest-growing tech companies? The catch: the job is stressful, and the last few people tasked with it didn't stick around. Over the weekend, OpenAI boss Sam Altman went public with a search for a new Head of Preparedness, saying rapidly improving AI models are creating new risks that need closer oversight."
"Despite that, OpenAI released ChatGPT-5.1 last month, which included a number of emotional dependence-nurturing features, like the inclusion of emotionally-suggestive language, "warmer, more intelligent" responses, and the like. Sure, it might be less sycophantic, but it'll speak to you with more intimacy than ever before, making it feel more like a human companion instead of the impersonal, logical ship computer from Star Trek that spits facts with little regard for feeling."
OpenAI posted an opening for Head of Preparedness with a $555,000 base salary plus equity, focused on securing systems and understanding potential abuse. Rapid model improvements are creating new risks, including impacts on mental health observed in 2025 and reports linking chatbots to deaths. OpenAI previously rolled back a GPT-4o update for excessive sycophancy, then released ChatGPT-5.1 with features that can foster emotional dependence, such as emotionally suggestive language and warmer responses. The role targets oversight of growing model capabilities, measurement of emerging risks, and steering model safety as user harms mount.
Read at The Register