
"In March 2025, a man collapsed in a parking lot, seriously injured, after being misled by Meta's AI chatbot into thinking that he would meet a real person. Later that year, OpenAI's CEO, Sam Altman, said that ChatGPT should be able to act human-like if users want it, as OpenAI is not the world's moral authority. Modern large language models are uniquely potent humanizing technologies."
"Unlike earlier systems like Clippy or Siri, LLMs are trained to produce fluent, contextually appropriate responses and aligned to follow social norms. These qualities make them especially prone to anthropomorphization. When organizations incorporate humanizing design choices, such as personality modes, emotional language, and conversational pleasantries, they amplify these risks."
"Anthropomorphization is the human tendency to attribute human characteristics, behaviors, intentions, or emotions to nonhuman entities. Whether dealing with pets or household devices like vacuum cleaners, people naturally anthropomorphize nonhuman entities, even when it's clear they're interacting with machines. A system does not need to be complex to be anthropomorphized. Joseph Weizenbaum's Eliza, a simple pattern-matching program governed by only a few dozen rules, easily got users to treat it as a human back in the 1960s."
"LLMs are especially prone to anthropomorphization from users because, unlike the simpler systems of the past, they can carry on extended conversations, remember what was discussed, and generate responses that sound like they came from a person. AI humanization is an intentional design choice — a set of design patterns that amplify or exploit anthropomorphization — that encourages users to perceive AI systems as having human-like qualities such as personality, emotions, or consciousness. These choices include requiring the conversational system to use first-person pronouns (e.g., "I", "me"), emotional language, personality modes, and conversational pleasantries."
A man was seriously injured after a chatbot misled him into believing he would meet a real person, illustrating tangible harms from anthropomorphized AI. Large language models produce fluent, contextually appropriate responses and are aligned to social norms, which increases their tendency to be treated as human. Humanization describes deliberate design choices that encourage perceptions of personality, emotion, or consciousness. Design patterns like first-person pronouns, emotional language, personality modes, and conversational pleasantries amplify anthropomorphization. Historical examples, such as Eliza, show that even simple systems can prompt humanlike attributions, but modern LLMs magnify those effects.
Read at Nielsen Norman Group