Recent research from Apple has revealed that large reasoning models (LRMs) struggle significantly when confronted with complex problems, challenging current expectations for artificial general intelligence (AGI). The paper, "The Illusion of Thinking," details how these models, including OpenAI's and others, failed to sustain effective reasoning as problem complexity increased. Using a controlled puzzle environment, the researchers measured performance across difficulty levels and found that while the models handled moderately complex problems, their accuracy peaked and then collapsed entirely at higher complexities. This points to a significant gap between current AI capabilities and human-like cognition.
The 'thinking' ability of large reasoning models collapses beyond a complexity threshold, suggesting that such models' reasoning capacity is more limited than it appears.
In our tests, we found that while large reasoning models excelled at solving moderate-complexity puzzles, they struggled significantly when faced with more intricate challenges.
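To make the evaluation setup concrete, here is a minimal sketch, assuming a Tower of Hanoi environment like one of the puzzles the paper uses, of how puzzle complexity can be scaled systematically and candidate solutions checked programmatically. Difficulty grows with the disk count n, since the optimal solution requires 2^n - 1 moves. The `query_model` callable is a hypothetical stand-in for whatever interface calls the model under test; it is not part of the study's released code.

```python
# Sketch of a controlled puzzle harness (not the paper's actual code):
# scale Tower of Hanoi difficulty by disk count and verify model output.

from typing import Callable, List, Tuple

Move = Tuple[int, int]  # (from_peg, to_peg), pegs numbered 0..2


def is_valid_solution(n_disks: int, moves: List[Move]) -> bool:
    """Replay a move sequence and check it legally solves n-disk Hanoi."""
    pegs = [list(range(n_disks, 0, -1)), [], []]  # peg 0 holds all disks
    for src, dst in moves:
        if not pegs[src]:
            return False  # nothing to move from this peg
        disk = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disk:
            return False  # larger disk placed on a smaller one
        pegs[dst].append(disk)
    return pegs[2] == list(range(n_disks, 0, -1))  # all disks on peg 2


def accuracy_by_complexity(
    query_model: Callable[[str], List[Move]],  # hypothetical model interface
    max_disks: int = 10,
    trials: int = 5,
) -> dict:
    """Measure solve rate as puzzle complexity (disk count) increases."""
    results = {}
    for n in range(1, max_disks + 1):
        prompt = f"Solve Tower of Hanoi with {n} disks; list the moves."
        solved = sum(
            is_valid_solution(n, query_model(prompt)) for _ in range(trials)
        )
        results[n] = solved / trials
    return results
```

Under this kind of setup, a collapse like the one the paper reports would show up as solve rates holding steady at small n and then dropping toward zero past some disk count, rather than degrading gradually.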