AI is creating a security problem most companies aren't staffed to handle, says an AI researcher
"Companies may have cybersecurity teams in place, but many still aren't prepared for how AI systems actually fail, says an AI security researcher. Sander Schulhoff, who wrote one of the earliest prompt engineering guides and focuses on AI system vulnerabilities, said on an episode of "Lenny's Podcast" published Sunday that many organizations lack the talent needed to understand and fix AI security risks."
"Traditional cybersecurity teams are trained to patch bugs and address known vulnerabilities, but AI doesn't behave that way. "You can patch a bug, but you can't patch a brain," Schulhoff said, describing what he sees as a mismatch between how security teams think and how large language models fail. "There's this disconnect about how AI works compared to classical cybersecurity," he added."
Many organizations have cybersecurity teams but lack the talent to understand and fix AI security risks. Traditional cybersecurity focuses on patching bugs and addressing known vulnerabilities, while AI models fail in different, often unpredictable ways and can be manipulated through language and indirect instructions. Security reviews frequently miss scenarios where models are tricked into performing harmful actions. Professionals with combined AI security and cybersecurity expertise can mitigate risks by isolating outputs, running generated code in containers, and preventing system-wide impact. The convergence of AI security and traditional cybersecurity is creating new security roles and a wave of startups to address these vulnerabilities.
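One of the mitigations mentioned above, running model-generated code in an isolated container, can be sketched as follows. This is a minimal illustration, not anything from the article: the helper, image name, and resource limits are all assumptions. It builds a hardened `docker run` command that denies network access, mounts the generated script read-only, and caps memory, CPU, and process count so a malicious or buggy snippet cannot have system-wide impact.

```python
import shlex

def sandboxed_run_command(code_path: str, timeout_s: int = 10) -> list[str]:
    """Build a `docker run` command for executing untrusted, model-generated
    Python. Illustrative sketch: the image and limits are assumed values."""
    return [
        "docker", "run", "--rm",
        "--network", "none",        # no outbound network calls
        "--read-only",              # immutable root filesystem
        "--memory", "256m",         # cap RAM
        "--cpus", "0.5",            # cap CPU share
        "--pids-limit", "64",       # prevent fork bombs
        "--cap-drop", "ALL",        # drop all Linux capabilities
        "-v", f"{code_path}:/app/snippet.py:ro",  # script mounted read-only
        "python:3.12-slim",
        "timeout", str(timeout_s), "python", "/app/snippet.py",
    ]

cmd = sandboxed_run_command("/tmp/generated.py")
print(shlex.join(cmd))
```

In practice the same idea applies to any isolation layer (gVisor, Firecracker, seccomp profiles): the point is that the model's output is treated as untrusted input, so its blast radius is bounded regardless of what the code tries to do.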
Read at Business Insider