Gartner's view on AI security: A Computer Weekly Downtime Upload podcast
Briefly

Nader Henein emphasizes the importance of embedding access control in AI systems to mitigate security risks. Many concerns surrounding AI are not about the technology itself but about pre-existing problems with data access. Common data leakage scenarios include oversharing, where internal users receive information they should not see, and external leaks, where information surfaces outside the organization. To address these risks, AI systems need defined user access levels that dictate which datasets each user can view; insufficient control can lead to unintended data exposure and security breaches.
"What AI is and how it works are, in many instances, not really AI security issues. They're associated with AI because the AI function surfaces an existing problem."
"The problem is not an AI security problem. It's an access control problem when you give an AI system access to data that it shouldn't have access to and then it leaks information to people that shouldn't have access to this information."
"A lot of people call this oversharing. It's when you ask the model a question and it gives an internal user information that it shouldn't provide."
"Unless such access control is built into the AI engine, if an AI model is trained on all the data that it has access to within the business, there is a very real risk it will inadvertently reveal information to people who should not have access to it."
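The principle in the quotes above — filter data by the requesting user's entitlements before anything reaches the model — can be sketched in a few lines. This is a minimal illustration, not a real product: the corpus, role names, and `retrieve` function are all hypothetical, standing in for whatever document store and identity system an organization actually uses.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    text: str
    allowed_roles: frozenset  # roles entitled to see this document

# Hypothetical in-memory corpus; contents and roles are illustrative only.
CORPUS = [
    Document("Q3 revenue forecast", frozenset({"finance", "exec"})),
    Document("Employee handbook", frozenset({"finance", "exec", "staff"})),
    Document("Pending layoff plan", frozenset({"exec"})),
]

def retrieve(query: str, user_roles: set) -> list:
    """Return only matching documents the requesting user may see.

    The access check runs *before* any text is handed to the AI engine,
    so the model cannot overshare data the requester is not entitled to.
    """
    visible = [d for d in CORPUS if d.allowed_roles & user_roles]
    return [d.text for d in visible if query.lower() in d.text.lower()]
```

With this shape, a staff user asking about the "plan" gets nothing back, while an exec sees the restricted document — the difference is enforced by the retrieval layer, not by hoping the model withholds what it was trained on.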
Read at Computerweekly