GitLab Duo Vulnerability Enabled Attackers to Hijack AI Responses with Hidden Prompts
Briefly

Cybersecurity researchers identified an indirect prompt injection flaw in GitLab's AI assistant Duo. This vulnerability allows attackers to potentially steal source code from private repositories, alter code suggestions, and extract confidential information. Prompt injection is a common issue in AI systems where malicious instructions manipulate responses. Indirect prompt injections hide these commands within existing documents or web pages, making detection challenging. Such flaws not only jeopardize proprietary code but can also lead to sensitive data leakage, as stated by Trend Micro.
For organizations, this means that private information such as internal rules, functionalities, filtering criteria, permissions, and user roles can be leaked.
This vulnerability permits attackers to steal source code from private projects, manipulate code suggestions shown to other users, and even exfiltrate confidential, undisclosed zero-day vulnerabilities.
Read at The Hacker News
[
|
]