"Impersonations are to AI agents what hallucinations are to large language models, says Cohere's chief AI officer."
""One of the features of computer security in general is, often it's a bit of a cat-and-mouse game," said Joelle Pineau on an episode of the "20VC" podcast released on Monday. "There's a lot of ingenuity in terms of breaking into systems, and then you need a lot of ingenuity in terms of building defenses.""
""Whether it's infiltrating banking systems and so on, I do think we have to be quite lucid about this, develop standards, develop ways to test for that in a very rigorous way," she said."
""You run your agent completely cut off from the web. You're reducing your risk exposure significantly. But then you lose access to some information," she said. "So, depending on"
Impersonations are to AI agents what hallucinations are to large language models, according to Cohere chief AI officer Joelle Pineau. Companies are adopting AI agents to perform multi-step tasks on their own, aiming to move faster and cut costs, but those agents introduce new security risks: an agent may impersonate an entity it does not legitimately represent and take unauthorized actions on an organization's behalf, up to and including infiltrating banking systems. Mitigations exist, for example isolating an agent from the web sharply reduces its risk exposure, but that isolation also cuts off access to outside information. Cohere, founded in 2019, focuses on enterprise models and competes with OpenAI, Anthropic, and Mistral; its customers include Dell, SAP, and Salesforce.
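The trade-off Pineau describes can be made concrete with a tool allowlist: an agent deployment that is "cut off from the web" simply never exposes network-reaching tools to the model. The sketch below is a minimal, hypothetical illustration of that pattern; the `ToolGate` wrapper and the `web_search` / `local_docs` tools are illustrative assumptions, not Cohere's API or any specific agent framework.

```python
# Hypothetical sketch: gate an agent's tools behind an allowlist so an
# isolated deployment cannot reach the web, while a web-enabled deployment
# trades extra exposure for extra information.
from dataclasses import dataclass, field
from typing import Callable, Dict, Set


@dataclass
class ToolGate:
    """Exposes only the tools whose names appear in the allowlist."""
    tools: Dict[str, Callable[[str], str]]
    allowlist: Set[str] = field(default_factory=set)

    def call(self, name: str, arg: str) -> str:
        if name not in self.allowlist:
            # Fail closed: blocked tools never run, so nothing leaves the box.
            raise PermissionError(f"tool '{name}' is not permitted in this deployment")
        return self.tools[name](arg)


def web_search(query: str) -> str:
    return f"(pretend web results for: {query})"


def local_docs(query: str) -> str:
    return f"(pretend internal document hits for: {query})"


tools = {"web_search": web_search, "local_docs": local_docs}

# "Cut off from the web": smaller attack surface, less information.
isolated = ToolGate(tools, allowlist={"local_docs"})

# Web-enabled: more information, more exposure to impersonation and injection.
open_agent = ToolGate(tools, allowlist={"local_docs", "web_search"})

print(isolated.call("local_docs", "quarterly security policy"))
print(open_agent.call("web_search", "latest CVE reports"))
try:
    isolated.call("web_search", "latest CVE reports")
except PermissionError as err:
    print("blocked:", err)
```

In this toy setup, which tools an agent may call is a deployment decision rather than a model decision, which is the spirit of Pineau's point: how much web access to grant depends on the risk a given organization is willing to accept.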
Read at Business Insider