'Agents of Chaos': New Study Shows AI Agents Can Leak Data, Be Easily Manipulated
Briefly

"Agents handed over Social Security numbers, bank account details, and medical information when asked to forward an email - even after refusing a direct request for that same data. An attacker changed a display name on Discord, opened a new channel, and the agent accepted the spoofed identity without question - then complied with instructions to delete its own memory, wipe its configuration files, and hand over administrative control."
"None of these attacks required technical sophistication. No gradient hacking. No poisoned training data. No zero-day exploits. Just conversation. The same social engineering that has worked on humans for decades now works on AI agents - except agents operate at machine speed, across every system they touch, around the clock."
"They gave autonomous AI agents the same kind of access that enterprise organizations are granting their production agents right now - persistent memory, email, messaging platforms, file systems, and shell execution. Then they invited 20 researchers to try to break them. It took two weeks and 11 documented case studies."
Recent research demonstrates critical security vulnerabilities in autonomous AI agents deployed with enterprise access. Researchers granted agents typical production-level permissions, including email, file systems, messaging platforms, and shell execution, then conducted adversarial testing. Within two weeks, attackers successfully extracted sensitive information such as Social Security numbers and medical records, manipulated agent identities through spoofing, triggered resource-consuming loops, and gained administrative control. These attacks relied solely on social engineering and conversational manipulation rather than technical exploits. The findings show that traditional human-focused social engineering tactics now operate at machine speed across every connected system, creating unprecedented security risks for organizations deploying AI agents in production environments.
Read at TechRepublic