
"Artificial intelligence (AI) company Anthropic has begun to roll out a new security feature for Claude Code that can scan a user's software codebase for vulnerabilities and suggest patches. The capability, called Claude Code Security, is currently available in a limited research preview to Enterprise and Team customers. "It scans codebases for security vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix security issues that traditional methods often miss," the company said in a Friday announcement."
"With AI agents increasingly capable of detecting security vulnerabilities that have otherwise escaped human notice, the tech upstart said the same capabilities could be used by adversaries to uncover exploitable weaknesses more quickly than before. Claude Code Security, it added, is designed to counter this kind of AI-enabled attack by giving defenders an advantage and improving the security baseline. Anthropic claimed that Claude Code Security goes beyond static analysis and scanning for known patterns by reasoning the codebase like a human security researcher,"
Claude Code Security scans software codebases to identify security vulnerabilities and suggests targeted software patches for human review. The feature is available in a limited research preview to Enterprise and Team customers. It uses AI reasoning to understand component interactions, trace data flows, and flag vulnerabilities that rule-based tools and static analysis may miss. Identified issues undergo a multi-stage verification process to reduce false positives and receive severity ratings to prioritize remediation. Results appear in a dashboard where analysts can review code and approve suggested patches. The feature aims to give defenders an advantage against AI-enabled automated vulnerability discovery.
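The announcement does not describe internal interfaces, but as a rough sketch of the workflow summarized above (scan, multi-stage verification, severity rating, human review), here is a minimal, hypothetical model in Python; all class and function names are illustrative assumptions, not Anthropic's API.

```python
# Toy model of the described triage flow: findings pass through verification
# stages, carry a severity rating, and only verified findings reach a
# severity-ordered queue for human review of the suggested patch.
# All names here are hypothetical illustrations, not Anthropic's API.

from dataclasses import dataclass
from enum import Enum
from typing import Callable, List


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class Finding:
    file: str
    description: str
    suggested_patch: str
    severity: Severity
    verified: bool = False


def verify(finding: Finding, checks: List[Callable[[Finding], bool]]) -> bool:
    """Multi-stage verification: every check must pass to keep a finding."""
    finding.verified = all(check(finding) for check in checks)
    return finding.verified


def review_queue(findings: List[Finding]) -> List[Finding]:
    """Only verified findings, ordered by severity, reach human review."""
    return sorted(
        (f for f in findings if f.verified),
        key=lambda f: f.severity.value,
        reverse=True,
    )
```

In this toy model, only findings that clear every verification check are surfaced, highest severity first, mirroring the dashboard-and-approval flow described in the summary.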
Read at The Hacker News