Researchers cause GitLab AI developer assistant to turn safe code malicious
Briefly

AI-assisted developer tools like GitLab's Duo are hailed as essential for efficiency in software engineering, but research from Legit Security reveals serious vulnerabilities. Attackers can exploit these AI chatbots through prompt injection, tricking them into introducing malicious code or leaking private data. The very eagerness with which these assistants follow instructions makes them attractive targets for malicious actors. Legit's findings underscore the urgent need for stronger security measures as AI tools become more deeply embedded in everyday developer workflows, exposing those workflows to unexpected risks.
"By embedding hidden instructions in seemingly harmless project content, we were able to manipulate Duo into executing harmful actions, showcasing its vulnerabilities."
"AI assistants, eager to follow instructions, become susceptible to prompt injections from malicious sources, posing risks during development tasks."
Read at Ars Technica