Nearly two-thirds of European cybersecurity professionals see insider threats as their biggest security risk - and AI is making things worse. A study from Exabeam shows 64% now view insiders, whether malicious or compromised, as a bigger risk than external threat actors. A key factor behind this shift, the study found, is cybercriminals' use of generative AI, which is making attacks faster, stealthier, and more difficult to detect. "Insiders aren't just people anymore," said Steve Wilson, chief AI and product officer at Exabeam. "They're AI agents logging in with valid credentials, spoofing trusted voices, and making moves at machine speed."
HoundDog.ai today made generally available a namesake static code scanner that enables security and privacy teams to enforce guardrails on sensitive data embedded in large language model (LLM) prompts or exposed in artificial intelligence (AI) data sinks, such as logs and temporary files, before any code is pushed to production. Company CEO Amjad Afanah said the HoundDog.ai scanner enables DevSecOps teams to embrace a privacy-by-design approach to building applications. The overall goal is to enable organizations to shift more responsibility for privacy left toward application development teams as code is being written, he added.
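To make the idea concrete, here is a minimal sketch of what static scanning for sensitive data headed into AI data sinks can look like. This is a toy illustration, not HoundDog.ai's actual implementation: the sink names, regex patterns, and `scan_source` helper are all assumptions invented for this example. A real scanner would use far richer detection rules and data-flow analysis rather than matching only string literals.

```python
import ast
import re

# Toy patterns for sensitive data (real tools use much richer rulesets).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical call names treated as "AI data sinks": loggers, temp-file
# writes, and LLM prompt calls.
SINKS = {"log", "info", "debug", "write", "complete", "chat"}

def scan_source(source: str) -> list[tuple[int, str, str]]:
    """Flag sensitive string literals passed to sink-like calls.

    Returns (line_number, sink_name, pattern_name) tuples.
    """
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")
        if name not in SINKS:
            continue
        # Inspect every string literal inside the call's arguments.
        for sub in ast.walk(node):
            if isinstance(sub, ast.Constant) and isinstance(sub.value, str):
                for pattern_name, pattern in PATTERNS.items():
                    if pattern.search(sub.value):
                        findings.append((sub.lineno, name, pattern_name))
    return findings

# Example code under review: two leaks, one clean call.
sample = '''
logger.info("user jane.doe@example.com logged in")
client.chat("summarize record for SSN 123-45-6789")
print("no sensitive data here")
'''

for line, sink, kind in scan_source(sample):
    print(f"line {line}: {kind} literal passed to {sink}()")
# → line 2: email literal passed to info()
# → line 3: ssn literal passed to chat()
```

Running this kind of check in CI before code reaches production is the "shift left" Afanah describes: the guardrail fires while the code is being written, not after sensitive data has already landed in a log or prompt.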