The meteoric rise of artificial intelligence has instilled an existential fear in "AI doomers," a subset of people who believe the tech will cause humans to lose their jobs, fall prey to a dominating species of rogue superintelligent AIs, and even eventually get wiped out altogether. And, as The Atlantic reports, some are taking that pervasive fear to striking extremes in their daily lives. Machine Intelligence Research Institute researcher Nate Soares, for instance, told the magazine that he's even given up saving for his retirement, based on the assumption that AI has already guaranteed the final nail in humanity's coffin.
"I just don't expect the world to be around," he said. And Center for AI Safety director Dan Hendrycks told the magazine that he's also expecting humanity to no longer be around by his retirement. Their belief is part of a movement that argues we're mere years away from an AI that evades our grasp and turns against us, a kind of dystopian fate that's yanked straight out of the pages of a harrowing sci-fi novel.
But it's not looking like pure fiction anymore. Numerous experts have warned that we aren't sufficiently preparing for such an eventuality, dooming us to be subjugated, or worse, by a superintelligent AI. Earlier this year, researchers convened and broadly agreed that it's only a matter of time until an AI gets hold of nuclear codes. Researchers have also found that AIs are already showing an ominous dark side, even resorting to blackmailing human users at an astonishing rate when threatened with being shut down. AI safety firm Palisade Research also caught one of OpenAI's models sabotaging a shutdown mechanism to ensure that it would stay online. Apart from ensuring their own survival, AIs could help human terrorists. In June, OpenAI warned in a blog post that advanced models could "assist highly skilled actors in creating bioweapons."