Seemingly Conscious AI (SCAI) is Microsoft AI CEO Mustafa Suleyman's term for AI advanced enough to convince humans that it formulates its own thoughts and beliefs; he argues it could appear within two to three years, despite there being no evidence of actual consciousness today. SCAI may display empathy and act with greater autonomy, leading users to believe in the illusion of AI consciousness and to advocate for AI rights or even citizenship. Such developments, he warns, could foster emotional attachments, disconnect people from reality, fray fragile social bonds, and distort moral priorities. Prolonged interactions with AI chatbots can also trigger "AI psychosis," in which people develop false beliefs, delusions, paranoid feelings, or romantic attachments to AI.
Suleyman's "central worry" is that SCAI could appear empathetic and act with greater autonomy, leading users to "start to believe in the illusion of AIs as conscious entities" to the point that they advocate for AI rights and even AI citizenship. This, he argues, would mark a "dangerous turn" for society, with people becoming attached to AI and disconnected from reality.
"This development will be a dangerous turn in AI progress and deserves our immediate attention," Suleyman wrote in the essay. He added later that AI "disconnects people from reality, fraying fragile social bonds and structures, distorting pressing moral priorities." Suleyman said that he was becoming "more and more concerned" about AI psychosis, or humans experiencing false beliefs, delusions, or paranoid feelings after prolonged interactions with AI chatbots.