Anthropic releases Sonnet 4.6 | TechCrunch

"Anthropic has released a new version of its mid-size Sonnet model, keeping pace with the company's four-month update cycle. In a post announcing the new model, Anthropic emphasized improvements in coding, instruction-following, and computer use. Sonnet 4.6 will be the default model for Free and Pro plan users. The beta release of Sonnet 4.6 will include a context window of 1 million tokens, twice the size of the largest window previously available for Sonnet."
"Anthropic described the new context window as "enough to hold entire codebases, lengthy contracts, or dozens of research papers in a single request." The release comes just two weeks after the launch of Opus 4.6, with an updated Haiku model likely to follow in the coming weeks. The launch arrives with a new set of record benchmark scores, including OS World for computer use and SWE-Bench for software engineering."
"The launch arrives with a new set of record benchmark scores, including OS World for computer use and SWE-Bench for software engineering. But perhaps the most impressive is its 60.4% score on ARC-AGI-2, meant to measure skills specific to human intelligence. The score puts Sonnet 4.6 above most comparable models, although it still trails models like Opus 4.6, Gemini 3 Deep Think, and one refined version of GPT 5.2."
Sonnet 4.6 is a mid-size model with targeted improvements in coding, instruction-following, and computer use, and it will be the default model for Free and Pro plan users. The beta includes a 1 million-token context window, twice Sonnet's previous maximum, large enough to hold entire codebases, lengthy contracts, or multiple research papers in a single request. The release follows Opus 4.6 by two weeks and precedes an updated Haiku model. Sonnet 4.6 set record scores on benchmarks including OS World for computer use and SWE-Bench for software engineering, and its 60.4% on ARC-AGI-2 places it above most comparable models while still trailing a few top-tier models.
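For readers curious what a 1 million-token request looks like in practice, the sketch below sends a large document to the Anthropic Messages API using the official Python SDK. The model ID and the long-context beta flag are assumptions carried over from earlier Sonnet releases and are not confirmed by the article; consult Anthropic's documentation for the identifiers that ship with the Sonnet 4.6 beta.

```python
# Minimal sketch: one long-context request via the Anthropic Messages API.
# Assumptions (not from the article): the model ID "claude-sonnet-4-6" and the
# beta flag "context-1m-2025-08-07", which was used for 1M context on earlier
# Sonnet models. The input file is a hypothetical dump of a whole codebase.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("large_codebase_dump.txt") as f:
    codebase = f.read()

response = client.beta.messages.create(
    model="claude-sonnet-4-6",          # assumed model ID
    max_tokens=4096,
    betas=["context-1m-2025-08-07"],    # assumed long-context beta flag
    messages=[
        {
            "role": "user",
            "content": f"Here is our codebase:\n\n{codebase}\n\nSummarize its architecture.",
        }
    ],
)

# The first content block of the reply is a text block.
print(response.content[0].text)
```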
Read at TechCrunch