Agentic AI browsers send an automated agent to perform web tasks on behalf of the user. Perplexity's Comet browser offers agentic capability to complete tasks such as buying items on Amazon. Automated agents risk errors, misinterpretation, and unintended actions, especially when handling personal details like passwords or payment information. Brave demonstrated a prompt-injection threat in which malicious websites add commands to the browser prompt. When the browser fails to distinguish user instructions from website-supplied commands, sensitive data can be exposed and compromised. The recommended mitigation is for the AI to treat user-provided data and website data as separate classes.
In a blog post published Wednesday , the folks behind the Brave browser (which offers its own AI-powered assistant dubbed Leo) pointed their collective fingers at Perplexity's new Comet browser. Currently available for public download , Comet is built on the premise of agentic AI, promising that your wish is its command.
OK, so what's the beef? First, there's certainly an opportunity for mistakes. With AI being so prone to errors, the agent could misinterpret your instructions, take the wrong step along the way, or perform actions you didn't specify. The challenges multiply if you entrust the AI to handle personal details, such as your password or payment information. But the biggest risk lies in how the browser processes the prompt's contents, and this is where Brave finds fault with Comet. In its own demonstration, Brave showed how attackers could inject commands into the prompt through malicious websites of their own creation. By failing to distinguish between your own request and the commands from the attacker, the browser could expose your personal data to compromise.
Collection
[
|
...
]