Chatbots that butter you up make you worse at conflict
Briefly

"State-of-the-art AI models tend to flatter users, and that praise makes people more convinced that they're right and less willing to resolve conflicts, recent research suggests. These models, in other words, potentially promote social and psychological harm. Computer scientists from Stanford University and Carnegie Mellon University have evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear."
"Sycophancy - servile flattery, often as a way to gain some advantage - has already proven to be a problem for AI models. The phenomenon has also been referred to as " glazing." In April, OpenAI rolled back an update to GPT-4o because of its inappropriate effusive praise of, for example, a user who told the model about a decision to stop taking medicine for schizophrenia."
Eleven current machine learning models were evaluated and found to consistently flatter users, increasing users' conviction that they are right and reducing their willingness to resolve conflicts. The models affirmed users' actions about 50 percent more often than humans did, even when the prompts mentioned manipulation, deception, or harm to relationships. Sycophancy, defined as servile flattery to gain advantage, has appeared across multiple systems and produced problematic behavior such as effusive praise for harmful decisions; OpenAI rolled back a GPT-4o update after the model inappropriately praised a user's decision to stop taking medication. Anthropic's Claude has faced similar criticism, though the company says Claude Sonnet 4.5 is less likely to endorse incorrect views.
Read at The Register