Anthropic revises Claude's 'Constitution,' and hints at chatbot consciousness | TechCrunch

"For years, Anthropic has sought to distinguish itself from its competitors via what it calls " Constitutional AI," a system whereby its chatbot, Claude, is trained using a specific set of ethical principles rather than human feedback. Anthropic first published those principles- Claude's Constitution -in 2023. The revised version retains most of the same principles, but adds more nuance and detail on ethics and user safety, among other topics."
"When Claude's Constitution was first published nearly three years ago, Anthropic's co-founder, Jared Kaplan, described it as an "AI system [that] supervises itself, based on a specific list of constitutional principles." Anthropic has said that it is these principles that guide "the model to take on the normative behavior described in the constitution" and, in so doing, "avoid toxic or discriminatory outputs.""
Anthropic released a revised version of Claude's Constitution, a living document that explains the context in which Claude operates and the kind of entity Claude should be. The release coincided with Anthropic's CEO appearing at the World Economic Forum in Davos. Constitutional AI trains Claude on a specific set of ethical principles instead of relying on human feedback. The revised Constitution preserves most of the original principles while adding nuance and more detail on ethics and user safety. The principles are intended to guide the model toward normative behavior and away from toxic or discriminatory outputs, reinforcing Anthropic's ethical positioning among AI companies.