A letter signed by over 100 AI experts, including notable figures such as Sir Stephen Fry, warns of the risks of developing AI systems with feelings or consciousness. The signatories propose five key principles for responsible AI research, such as prioritising studies on AI consciousness and establishing constraints on such systems. The accompanying research paper highlights that creating conscious AI could result in systems deserving moral consideration, and stresses the necessity of guidelines even for companies not seeking to develop sentient AI, given the ambiguity surrounding the definition of consciousness in AI.
The principles include prioritising research on understanding and assessing consciousness in AI systems, in order to prevent mistreatment and suffering.
The paper argues that conscious AI systems, or at least systems that give the impression of being conscious, could be built in the near future.
If powerful AI systems were able to reproduce themselves, this could lead to the creation of large numbers of new beings deserving moral consideration.
Even companies not intending to create conscious systems will need guidelines in case of inadvertently creating conscious entities.