The Most-Cited Computer Scientist Has a Plan to Make AI More Trustworthy
Briefly

Yoshua Bengio recently launched LawZero, a nonprofit dedicated to building safe AI without agency in order to head off the existential risks posed by advanced AI systems. He critiques the industry's current focus on artificial general intelligence (AGI), arguing that it could lead to uncontrollable outcomes. While companies such as OpenAI and Google build agentic AI to tackle major problems, Bengio warns that those very capabilities pose serious dangers. His initiative instead advocates a 'Scientist AI' that can perform statistical analysis and prediction without the risks of agentic systems, advancing scientific progress safely.
Bengio argues against developing agentic AI, contending that even if such systems deliver real benefits, the accompanying catastrophic risks make them not worth the peril.
Through LawZero, Bengio advocates a 'Scientist AI' that predicts and explains the world without the ability to take independent action, reducing existential risk.
Bengio has endorsed the statement that mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale threats such as pandemics and nuclear war.
Bengio's core concern is that powerful AI could come to operate beyond human control, with dire consequences; hence his push for a safer approach to AI development.
Read at time.com