As software development teams embrace a growing number of automation tools that provide AI-driven (or at least AI-assisted) coding functions in their codebases, a Newtonian equal and opposite reaction is surfacing: governance controls and guardrails designed to keep AI-generated code in check as these technologies enter the software supply chain.
The people who most need to experiment with AI, those in routine cognitive roles, experience the highest psychological threat. They're being asked to enthusiastically adopt tools that might replace them, triggering what neuroscientists call a "threat state." Research by Amy Edmondson at Harvard Business School reveals that team learning requires psychological safety: the belief that it is safe to take interpersonal risks. But AI adoption adds an existential twist: the threat isn't just social embarrassment; it's professional survival.
In most cases, employees are driving adoption from the bottom up, often without oversight, while governance frameworks are still being defined from the top down. Even when they have enterprise-sanctioned tools, employees often eschew them in favor of newer tools better placed to improve their productivity. Unless security leaders understand this reality and uncover and govern this activity, they are exposing the business to significant risk.