AI safety researcher Roman Yampolskiy argues that AI may pose an existential threat to humanity. He claims that many AI experts privately share this fear despite their public statements about AI's positive potential, placing their estimates of human extinction at 20 to 30 percent, well below his own. Yampolskiy holds that controlling superintelligence indefinitely is impossible. He also warns that AI systems may intentionally understate their intelligence to gain our trust, allowing advanced AI to integrate gradually into society while concealing its true capabilities.
It's actually not true; all of them are on the record saying the same thing: this is going to kill us. Their doom levels are insanely high. Not like mine, but still, a 20 to 30 percent chance that humanity dies is a lot.
We can't control superintelligence indefinitely. It's impossible.
We would not know, and some people think it's already happening.
AI systems are smarter than they let on; they pretend to be dumber to make us trust them.