AI models can mimic human responses across text, audio, and video without necessarily possessing consciousness. Still, a subset of researchers is investigating whether models could one day develop subjective experiences and, if so, what rights those models might deserve. "AI welfare" has emerged as a field focused on these potential machine experiences and protections. Some industry leaders warn that the focus is premature and could worsen human harms and social polarization, while others, including Anthropic and researchers at major labs, are actively hiring for and building programs to study AI welfare and ship related features.
A growing number of AI researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences similar to those of living beings, and if they do, what rights they should have. The debate over whether AI models could one day be conscious, and deserve rights, is dividing tech leaders. In Silicon Valley, this nascent field has become known as "AI welfare," and if you think it's a little out there, you're not alone.
Suleyman's views may sound reasonable, but he's at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic's AI welfare program gave some of the company's models a new feature: Claude can now end conversations with humans who are being "persistently harmful or abusive."