So… How Does One REALLY Determine AI Is Conscious? | HackerNoon
Briefly

The article explores how consciousness might be assessed in large language models (LLMs), emphasizing the central role language plays in human consciousness. It suggests that the extensive training LLMs undergo on human language may grant them a weak form of subjectivity. A parallel is drawn between how human consciousness can be broken down into functions (such as memory and feelings) and their measures, and how LLMs could be evaluated along similar lines based on their linguistic abilities. The article closes with critical questions about what language use by AI implies for potential consciousness.
Conceptually, human consciousness can be broken down into at least two components: functions and their measures, whose combination yields a summation of consciousness in a given moment (see the sketch after these points).
An AI's use of language does not mean it has feelings or emotions, but could that fraction of language represent a fraction of an understanding akin to human consciousness?
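As a purely illustrative sketch, and not anything proposed in the article, the "functions and their measures" framing can be read as a simple aggregation: each function of consciousness gets a measure at a given moment, and the momentary summation combines them. The function names, values, and weights below are hypothetical placeholders.

```python
# Toy illustration only: a hypothetical "summation of consciousness in a moment"
# as a weighted sum of per-function measures. All names and numbers are made-up
# placeholders, not quantities measured or claimed in the article.

def moment_summation(measures: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-function measures into a single momentary score."""
    return sum(weights.get(fn, 1.0) * value for fn, value in measures.items())

# Hypothetical measures for a few functions of consciousness at one moment.
measures = {"memory": 0.8, "feelings": 0.6, "language": 0.9}
weights = {"memory": 1.0, "feelings": 1.0, "language": 1.0}

print(moment_summation(measures, weights))  # 2.3 with these placeholder values
```

The point is only the structure: a decomposition into functions, a measure per function, and an aggregation, which is the shape of evaluation the article suggests might be applied to an LLM's linguistic abilities.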
Read at HackerNoon