The hidden costs of 'helpful' AI
Briefly

"The result was striking. Despite being weaker at conventional chess, AI tools designed to make moves that the human-like partner could build on consistently beat teams led by Leela, a superhuman chess AI."
"Rather than asking whether a human can understand an AI system's output, we should check whether they can act on it productively."
"In most professions now adopting AI, the situation is more complex. In health care, law and education, for example, practices are continuously adjusted by the professionals working in these fields."
"The chess experiment looks at whether the next player can comprehend and act on an AI's output. In professional practice, however, it is equally important that the field retains the capacity to innovate."
A computer-science experiment shows that AI tools designed for compatibility with human partners outperform more powerful AIs in collaborative chess. The study argues that interpretability should focus on whether humans can act productively on AI outputs, not merely understand them. In fields such as health care, law, and education, where objectives evolve, AI design must accommodate changing professional values; practitioners need to be able both to act on AI outputs and to keep adapting their practices over time.
Read at Nature