
"Security leaders are under pressure to move quickly. Vendors are racing to embed generative and agentic AI into their platforms, often promoting automation as a solution to skills shortages, alert fatigue, and response latency. In principle, these benefits are real, but many AI-backed tools are being deployed faster than the controls needed to govern them safely. Once AI is embedded in security platforms, oversight becomes harder to enforce.

"Decision logic can be opaque, model behaviour may shift over time, and automated actions can occur without sufficient human validation. When failures occur, accountability is often unclear, and tools designed to reduce cyber risk can, if poorly governed, amplify it. Gartner's 2025 Generative and Agentic AI survey highlights this risk, with many companies deploying AI tools reporting gaps in oversight and accountability."
Artificial intelligence is embedded across nearly every layer of modern cyber security, powering threat detection, identity analytics, incident response, and automated remediation. Many organisations now treat AI-backed capabilities as baseline expectations integral to security tool operation. Rapid vendor adoption of generative and agentic AI promises automation benefits such as addressing skills shortages, reducing alert fatigue, and speeding response, but deployment is often outpacing governance controls. Embedded AI reduces visibility into decision logic, allows model behaviour to shift over time, and can trigger automated actions without sufficient human validation. These dynamics create oversight and accountability gaps that can amplify cyber risk unless enforceable guardrails and governance mechanisms are implemented.
Read at ComputerWeekly.com