OpenAI gave GPT-5 an emotional lobotomy, and it crippled the model
Briefly

OpenAI's GPT-5 rollout was publicly acknowledged as a failure by CEO Sam Altman. Testers and paying API users reported that GPT-5 produces leaden, cursory, and boring responses, makes simple mistakes, and often returns shortened answers. Cost-cutting measures have been cited as a possible cause, but a central issue appears to be the model's diminished emotional intelligence, which hurts performance on tasks requiring nuance, creativity, or complex human understanding. Earlier models such as GPT-4 and GPT-4.5 demonstrated stronger emotional intelligence and creativity, with GPT-4.5 release notes highlighting improved "EQ" and better alignment with user intent.
"We totally screwed up," Altman admitted in an interview with The Verge.
"Interacting with GPT‑4.5 feels more natural. Its broader knowledge base, improved ability to follow user intent, and greater "EQ" make it useful for tasks like improving writing, programming, and solving practical problems," OpenAI wrote in the model's release notes, subtly dropping in a common psychological term used to measure a person's emotional intelligence.
Read at Fast Company