Google DeepMind has published a comprehensive paper laying out its approach to the safety of artificial general intelligence (AGI), which it defines as AI capable of performing any task a human can. The paper, co-authored by DeepMind co-founder Shane Legg, warns that AGI could arrive by 2030 and poses significant risks, up to and including existential threats to humanity. Notably, DeepMind's approach to managing these risks differs from that of competitors such as Anthropic and OpenAI, favoring robust training and monitoring measures over the full automation of safety research.
"[We anticipate] the development of an Exceptional AGI before the end of the current decade," the authors wrote.