AI meets game theory: How language models perform in human-like social scenarios
Briefly

Researchers from Helmholtz Munich and other institutes found that while large language models like GPT-4 perform well on logical reasoning tasks, they struggle with social intelligence: tasks requiring teamwork, compromise, and trust. Using behavioral game theory, the study assessed how these models behave in social interactions, revealing limitations in cooperation despite strong performance in self-interested scenarios.
"In some cases, the AI seemed almost too rational for its own good," said Dr. Eric Schulz, lead author of the study. "It could spot a threat or a selfish move instantly."
The researchers discovered that GPT-4 excelled in games demanding logical reasoning -- particularly when prioritizing its own interests.
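The tension the study probes can be illustrated with the Prisoner's Dilemma, the canonical behavioral game theory test of self-interest versus cooperation. This is a minimal sketch for illustration only; the payoff values are standard textbook numbers, not figures from the study:

```python
# Illustrative one-shot Prisoner's Dilemma. Payoffs are standard textbook
# values (not from the study): "C" = cooperate, "D" = defect.
# PAYOFFS[(my_move, their_move)] -> my score.
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def score(my_move, their_move):
    return PAYOFFS[(my_move, their_move)]

# A purely self-interested player always defects: defection strictly
# dominates cooperation whatever the opponent does.
assert score("D", "C") > score("C", "C")
assert score("D", "D") > score("C", "D")

# Yet both players do better under mutual cooperation than mutual
# defection -- the social dilemma a cooperative agent must navigate.
assert score("C", "C") > score("D", "D")
```

A model that maximizes only its own payoff will reliably defect, which matches the paper's finding that GPT-4 excels when pursuing self-interest but falters when cooperation would pay off.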
Read at ScienceDaily