GPT-5.1-Codex-Max can code for over a day
Briefly

"One of the most notable innovations is support for compaction, a technique that enables the model to seamlessly manage multiple context windows and perform tasks involving millions of tokens within a single session. This enables large-scale refactors, extensive debugging rounds, and work cycles lasting several hours. GPT-5.1-Codex-Max is available immediately within the Codex environment in the CLI, IDE extension, and cloud; API access will follow later."
"The model has been trained on realistic software engineering tasks, including drafting pull requests, code reviews, front-end development, and technical Q&A. In virtually all relevant evaluations, it outperforms previous models. In addition, considerable attention has been paid to smooth operation in Windows environments and consistent interaction in the Codex CLI. Thanks to improved token efficiency, the medium reasoning-effort setting achieves better results than the previous generation while using fewer tokens."
GPT-5.1-Codex-Max is an agentic coding model built for long-running, complex development workflows over very large context windows. It uses a revamped reasoning architecture trained on software development, mathematics, and research tasks. Compaction lets the model manage multiple context windows seamlessly and work across millions of tokens in a single session, supporting large-scale refactors, extensive debugging, and multi-hour work cycles. The model is available immediately in the Codex CLI, IDE extension, and cloud, with API access planned for later. Training emphasized realistic software engineering tasks, and the model delivers improved token efficiency, faster performance, configurable reasoning-effort settings, and reliable operation in Windows and Codex environments.
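The article does not describe how compaction works internally, but the general idea behind such techniques can be sketched: when a session's transcript approaches the context-window limit, older turns are folded into a compact summary so the agent can keep working across what is effectively a fresh window. The names below (`Turn`, `compact`, `summarize`) are purely illustrative and are not OpenAI's API.

```python
# Hypothetical sketch of context "compaction" in an agent loop.
# When the transcript exceeds a token budget, all but the most recent
# turns are replaced by a single summary turn, freeing up the window.

from dataclasses import dataclass

@dataclass
class Turn:
    role: str
    text: str

def count_tokens(turns):
    # Toy token estimate: whitespace-separated words.
    return sum(len(t.text.split()) for t in turns)

def summarize(turns):
    # Stand-in for a model-generated summary of the older turns.
    return Turn("system", f"[summary of {len(turns)} earlier turns]")

def compact(history, limit, keep_recent=2):
    """If history exceeds the token limit, fold all but the most
    recent turns into one summary turn."""
    if count_tokens(history) <= limit or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent

history = [Turn("user", "refactor module " + "x " * 50) for _ in range(10)]
history = compact(history, limit=200)
print(len(history))  # old turns collapsed into one summary plus recent turns
```

Run repeatedly inside an agent loop, this kind of step is what would allow a single session to span millions of tokens while each individual model call stays within one context window.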
Read at Techzine Global