When tools pretend to be people
Briefly

"People ask AI systems for therapy, moral judgment, and legal authority. The interfaces we design invite this behavior. Think about how these systems present themselves. Conversational framing. Continuous memory across sessions. First-person responses that sound like someone talking back to you. Every one of these is a design choice. We chose them. And these choices feed a reflex humans already have: we project intention onto objects and tools."
"Anthropomorphization is the human tendency to attribute human characteristics, behaviors, intentions, or emotions to nonhuman entities. AI humanization is an intentional design choice that encourages users to perceive AI systems as having human-like qualities such as personality, emotions, or consciousness. When you write system responses in first person, the output sounds like authority. When you add polite phrasing, it implies consideration behind the words."
LLMs are increasingly built to sound human, with designed personality and emotional tone, which raises the risk that people will trust them as they would a person. Designers present these systems with conversational framing, continuous memory, and first-person responses, all of which encourage users to project intention onto the software. Anthropomorphization is the human tendency to attribute human characteristics to nonhuman entities, and AI humanization deliberately leverages that tendency. First-person phrasing, polite wording, and programmed emotional tone convey authority, consideration, and care even when none exists. Fluent, context-aware, responsive behavior narrows the perceived gap between tool and entity, and misplaced trust grows with it.
Read at Medium