
"Artificial intelligence (AI) company Anthropic revealed that its latest large language model (LLM), Claude Opus 4.6, has found more than 500 previously unknown high-severity security flaws in open-source libraries, including Ghostscript, OpenSC, and CGIF. Claude Opus 4.6, which was launched on Thursday, comes with improved coding skills, including code review and debugging capabilities, along with enhancements to tasks like financial analyses, research, and document creation."
"Stating that the model is "notably better" at discovering high-severity vulnerabilities without requiring any task-specific tooling, custom scaffolding, or specialized prompting, Anthropic said it is putting it to use to find and help fix vulnerabilities in open-source software. "Opus 4.6 reads and reasons about code the way a human researcher would-looking at past fixes to find similar bugs that weren't addressed, spotting patterns that tend to cause problems, or understanding a piece of logic well enough to know exactly what input would break it," it added."
Claude Opus 4.6 identified more than 500 previously unknown high-severity security flaws across open-source libraries including Ghostscript, OpenSC, and CGIF. The model brings improved coding abilities, including code review and debugging, along with enhancements for tasks such as financial analysis, research, and document creation. Anthropic reports that Opus 4.6 is notably better at finding high-severity vulnerabilities without task-specific tooling, custom scaffolding, or specialized prompting. The company's Frontier Red Team tested the model in a virtualized environment equipped with debuggers and fuzzers to assess its out-of-the-box capabilities without task-specific instructions. Every flaw the model reported was validated to rule out hallucinated findings, and maintainers patched the flagged defects after prioritization.
Read at The Hacker News