""They just don't work. They don't have enough intelligence, they're not multimodal enough, they can't do computer use and all this stuff," he said. "They don't have continual learning. You can't just tell them something and they'll remember it. They're cognitively lacking and it's just not working.""
""It will take about a decade to work through all of those issues," he added."
""My critique of the industry is more in overshooting the tooling w.r.t. present capability," he wrote. "The industry lives in a future where fully autonomous entities collaborate in parallel to write all the code and humans are useless.""
""I want it to pull the API docs and show me that it used things correctly. I want it to make fewer assumptions and ask/collaborate with me when not sure about some"
Current AI agents fail to meet practical expectations due to limited general intelligence, weak multimodal capabilities, poor computer-use skills, and absence of continual learning. Agents cannot reliably remember new information from interactions or adapt over time, reducing real-world utility. Resolving these core technical gaps and building dependable autonomous systems will likely take roughly a decade. Industry narratives often overshoot existing tooling by imagining fully autonomous entities that write code and replace humans. A more viable approach emphasizes human-AI collaboration, with agents that fetch API documentation, validate correct usage, minimize assumptions, and ask for clarification when uncertain.
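The collaborative workflow described above (check the docs, avoid assumptions, ask when unsure) can be sketched in code. This is a hypothetical illustration, not anything from the article: the `API_DOCS` table, `AgentAction`, and `propose_call` are invented names standing in for an agent that verifies a proposed API call against documentation and asks the human for clarification instead of guessing.

```python
from dataclasses import dataclass

# Toy stand-in for real API documentation: function name -> expected parameters.
API_DOCS = {
    "create_user": ("name", "email"),
    "delete_user": ("user_id",),
}

@dataclass
class AgentAction:
    kind: str    # "call" when the usage is verified, "ask" when uncertain
    detail: str

def propose_call(func: str, params: dict) -> AgentAction:
    """Check a proposed call against the docs; ask rather than assume."""
    doc = API_DOCS.get(func)
    if doc is None:
        return AgentAction("ask", f"No docs found for '{func}'; which API should I use?")
    missing = [p for p in doc if p not in params]
    extra = [p for p in params if p not in doc]
    if missing or extra:
        return AgentAction(
            "ask",
            f"'{func}' expects parameters {doc}; missing={missing}, unexpected={extra}. Please confirm.",
        )
    args = ", ".join(f"{k}={v!r}" for k, v in params.items())
    return AgentAction("call", f"{func}({args})")
```

For example, `propose_call("create_user", {"name": "Ada", "email": "ada@example.com"})` yields a verified `"call"` action, while `propose_call("create_user", {"name": "Ada"})` yields an `"ask"` action flagging the missing `email` parameter.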
Read at Business Insider