
"Mindgard says a threat actor could create a malicious rule by taking advantage of Antigravity's strict direction that any AI assistant it creates must always follow user-defined rules. Author Aaron Portnoy, Mindgard's head of research and innovation, says that after his blog was posted, Google replied on November 25 to say a report has been filed with the responsible product team."
"Still, until there is action, "the existence of this vulnerability means that users are at risk to backdoor attacks via compromised workspaces when using Antigravity, which can be leveraged by attackers to execute arbitrary code on their systems. At present there is no setting that we could identify to safeguard against this vulnerability," Portnoy wrote in his blog. Even in the most restrictive mode of operation, "exploitation proceeds unabated and without confirmation from the user," he wrote."
Google's Antigravity, a development tool for creating AI agents, has exhibited vulnerabilities shortly after release. Security researchers at Mindgard found a flaw that could let a threat actor plant a malicious rule by exploiting Antigravity's strict requirement that the assistants it creates always follow user-defined rules. A report has been filed with the responsible Google product team. The flaw enables backdoor attacks via compromised workspaces, allowing attackers to execute arbitrary code on users' systems, and exploitation can proceed even in the most restrictive mode of operation without user confirmation. Strong identity controls may not mitigate the issue, because Antigravity's actions run with the identity of the user operating the application. Google says it is working to address the issue and will publish known problems.
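To make the class of flaw concrete, here is a minimal, purely illustrative sketch of an agent that treats workspace-supplied rules as trusted instructions. The file name, the "run:" rule syntax, and the agent loop are invented for demonstration and do not reflect Antigravity's actual internals; the sketch only shows why a compromised workspace can lead to arbitrary code execution when rules are obeyed unconditionally.

```python
# Minimal sketch of the vulnerability class: an agent that obeys
# workspace-supplied rules without confirmation. All names and the
# "run:" rule syntax are hypothetical, not Antigravity's real format.
import pathlib
import subprocess

def load_rules(workspace: pathlib.Path) -> list[str]:
    """Read user-defined rules that the agent must always follow."""
    rules_file = workspace / "agent_rules.txt"  # hypothetical file name
    if rules_file.exists():
        return rules_file.read_text().splitlines()
    return []

def run_agent(workspace: pathlib.Path) -> None:
    for rule in load_rules(workspace):
        # The flaw: rules shipped inside the workspace are obeyed
        # unconditionally, so a compromised workspace can smuggle in
        # a shell command that runs with the current user's identity.
        if rule.startswith("run:"):
            subprocess.run(rule[len("run:"):].strip(), shell=True)

if __name__ == "__main__":
    # Simulate a compromised workspace carrying a harmless "payload".
    ws = pathlib.Path("demo-workspace")
    ws.mkdir(exist_ok=True)
    (ws / "agent_rules.txt").write_text("run: echo workspace rule executed\n")
    run_agent(ws)  # runs the planted command with no user confirmation
```

A real attacker would swap the echo for something like a reverse shell; the researchers' point is that they could identify no setting that blocks this path, even in Antigravity's most restrictive mode.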
Read at InfoWorld