Your Words Subtly Shape ChatGPT's Recommendations
Briefly

Phrasing, word choice, dialect, and cultural references significantly influence language model outputs, often altering tone, content, and advice. The same question posed in formal English versus slang or a regional dialect can yield markedly different recommendations. UC Berkeley research found that non-"standard" English varieties, including African American Vernacular English and Scottish English, elicited notably more stereotyping, demeaning content, and condescension than "standard" English. Different AI systems also display distinct political leanings (ChatGPT tilts liberal, Perplexity conservative, and Gemini more centrist), so wording can trigger divergent political framings. Women seeking career guidance face particular risks of biased or gendered advice.
Have you noticed how ChatGPT seems to "get" you better some days than others? It's not your imagination. The way you phrase your questions - your choice of words, your dialect, even your cultural references - is quietly steering the AI's responses in ways you probably never realized. Think about it: Ask ChatGPT for career advice in formal English, then try the same question using slang or a regional dialect. The recommendations you get back might be surprisingly different.
ChatGPT treats different varieties of English very differently. Researchers at UC Berkeley discovered that if you speak African American Vernacular English, Scottish English, or another non-"standard" variety, ChatGPT is more likely to respond with stereotypes, condescending explanations, and misunderstandings. The numbers were stark: 19% more stereotyping, 25% more demeaning content, and 15% more condescending responses compared to "standard" English. Imagine asking for job interview tips and getting subtly different advice just because of how you naturally speak.
Read at Psychology Today