OpenAI introduces GPT-5.4 with more knowledge-work capability
Briefly

"Additionally, there are improvements to visual understanding; it can now more carefully analyze images up to 10.24 million pixels, or up to a 6,000-pixel maximum dimension. OpenAI also claims responses from this model are 18 percent less likely to contain factual errors than before."
"ChatGPT reportedly lost some users to competitor Anthropic in recent days, after OpenAI announced a deal with the Pentagon in the wake of a public feud between the Trump administration and Anthropic over limitations Anthropic wanted to impose on military applications of its models."
"OpenAI needs to compete in both capability and cost and token efficiency to maintain its relative popularity with users, and this update aims to support that objective."
OpenAI introduced GPT-5.4 with notable improvements to visual understanding: the model can now analyze images up to 10.24 million pixels, or up to a 6,000-pixel maximum dimension, and OpenAI claims its responses are 18 percent less likely to contain factual errors than previous versions. The release follows competitive pressure from Anthropic, which gained users after OpenAI's Pentagon deal announcement sparked controversy. Anthropic capitalized by offering memory features to free users and reported record sign-ups on March 2. OpenAI's update addresses its need to compete on capability, cost, and token efficiency. GPT-5.4 is available across the ChatGPT web and mobile apps, Codex, and the API, with specialized versions for Plus, Team, Pro, Edu, and Enterprise subscribers.
Read at Ars Technica