Agentic AI is complex, not complicated
Briefly

"Writing Python code is complicated; managing Python programmers is complex. Editing a video is complicated; making a video go viral on YouTube is complex. Compiling a C program is complicated; doing a YOLO run when training a base model is complex. DNS lookups are complicated; running a registrar is complex. Registering CVEs is complicated; predicting how a hacker will use a CVE is complex."
"Think statistical: The outcomes in our lives feel deterministic, but they're not. When you back up and analyze human populations at scale, our decisions are statistical in nature, such as how a certain percentage of people will vote one way or another in an election."
"The process large language models (LLMs) use to drive agents is also statistical in nature, but the outcomes are less precise than they would be with humans, so you have to check the work-or, better yet, write another agent to check the work for you. (Yes, that can work.)"
Complex systems differ from complicated tasks: predictable procedures are complicated, while unpredictable, interacting agents create complexity. Autonomous AI agents that commit code introduce systemic risk that no purely technical redesign removes; managing them requires statistical thinking, factor-focused design, and layered agent roles. Treat model-driven outcomes as probabilistic and build verification agents or checks to validate results. Design for factors rather than individual events, assigning specialized agent roles (senior engineer, junior engineer, quality engineer, auditor) to distribute responsibility and detect collusion. Implement monitoring, auditing, and governance to manage emergent behaviors, reduce risk, and harness the benefits of agentic automation.
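The layered-roles idea can be sketched in plain Python. The sketch below is a hypothetical illustration, not the article's implementation: the junior_agent, quality_agent, and auditor functions are stand-ins for whatever model calls a real system would make, and the probabilities and sample rate are assumed values chosen only to make the statistical framing visible.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of layered agent roles: a "junior" agent drafts work,
# a "quality" agent reviews it, and an auditor samples decisions to look for
# suspiciously high agreement (possible rubber-stamping or shared blind spots).
# In a real system each function would wrap an LLM call; here they are random
# stand-ins, so outcomes are probabilistic rather than deterministic.

@dataclass
class Decision:
    task_id: int
    draft_ok: bool      # drafting agent's self-assessment
    review_ok: bool     # reviewing agent's verdict
    audited: bool = False

def junior_agent(task_id: int) -> bool:
    """Stand-in for a drafting agent: succeeds probabilistically."""
    return random.random() < 0.8

def quality_agent(task_id: int, draft_ok: bool) -> bool:
    """Stand-in for a reviewing agent: approves most good drafts, misses some bad ones."""
    if draft_ok:
        return random.random() < 0.95
    return random.random() < 0.15   # occasionally waves a bad draft through

def auditor(decisions: list[Decision], sample_rate: float = 0.2) -> float:
    """Sample decisions and report how often reviewer and drafter agree.

    Near-total agreement on a random sample is one cheap signal that the
    review layer is echoing the drafter rather than checking its work.
    """
    sample = [d for d in decisions if random.random() < sample_rate]
    for d in sample:
        d.audited = True
    if not sample:
        return 0.0
    agreements = sum(d.draft_ok == d.review_ok for d in sample)
    return agreements / len(sample)

if __name__ == "__main__":
    decisions = []
    for task_id in range(100):
        draft_ok = junior_agent(task_id)
        review_ok = quality_agent(task_id, draft_ok)
        decisions.append(Decision(task_id, draft_ok, review_ok))

    accepted = sum(d.review_ok for d in decisions)
    agreement = auditor(decisions)
    print(f"accepted {accepted}/100 tasks; audited agreement rate {agreement:.0%}")
```

Run repeatedly and the accepted count and agreement rate vary from run to run, which is the point: the work is checked statistically, not verified deterministically, and the auditing layer exists to catch the cases the review layer lets through.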
Read at InfoWorld