"Agentic AI systems are being weaponized." That's one of the first lines of Anthropic's new Threat Intelligence report, out today, which details the wide range of cases in which Claude - and likely many other leading AI agents and chatbots - are being abused. First up: "Vibe-hacking." One sophisticated cybercrime ring that Anthropic says it recently disrupted used Claude Code, Anthropic's AI coding agent, to extort data from at least 17 different organizations around the world within one month.
"The OpenAI rival spent 'many millions of dollars' buying used print books, which the company or its vendors then stripped of their bindings, cut the pages, and scanned into digital files."
The Advanced Research feature allows Claude to gather and analyze information from diverse sources, breaking a research question into sub-questions and compiling a detailed report with references.
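To make that workflow concrete, here is a minimal sketch of the decompose-then-compile pattern using Anthropic's public Messages API. It is not Anthropic's Advanced Research implementation: the model name, prompts, and report format are assumptions for illustration, and a real research agent would also consult external sources (for example, web search tools) rather than the model alone.

```python
# Minimal sketch of a decompose-then-compile research loop.
# Not Anthropic's Advanced Research feature; prompts, model name,
# and report format are assumptions for illustration only.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
MODEL = "claude-3-5-sonnet-20241022"  # assumed model; substitute any current Claude model


def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text


def research(question: str) -> str:
    # 1. Break the main question into narrower sub-questions.
    plan = ask(
        "Break this research question into 3-5 focused sub-questions, "
        f"one per line, with no extra text:\n{question}"
    )
    sub_questions = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. Answer each sub-question separately (a real agent would also
    #    gather evidence from external sources at this step).
    findings = [f"Q: {q}\nA: {ask(q)}" for q in sub_questions]

    # 3. Compile the findings into a single report that references them.
    return ask(
        "Using only the findings below, write a short report answering "
        f"'{question}'. Reference the findings you rely on.\n\n" + "\n\n".join(findings)
    )


if __name__ == "__main__":
    print(research("How are agentic AI systems being misused for cybercrime?"))
```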