The most obvious example is the adoption of the singular 'they' to replace clunky constructions like 'he or she' and 'he/she.' Language purists argue that this is ungrammatical, even though 'they' has been employed in just this way by authors as diverse as Chaucer, Shakespeare, Austen, Dickinson, and Shaw.
Computational linguistics is a two-way street: you're either using a computer to do things with human language, such as communicating, translating, or teaching a foreign language, or you're using computational techniques to learn something about human languages. Her work documenting and preserving endangered languages uses a little bit of both.
The term "conspiracy theory" calls to mind a variety of dubious claims and controversies, like rumors about Area 51, claims that the Earth is flat, and the movement known as QAnon. At first blush, these phenomena would seem to have little in common with bogus word origins. But there are a variety of false etymologies that spread virally and refuse to go away, in much the same way that stories about chemtrails, black helicopters, and UFOs refuse to die.
You know that sinking feeling when you realize you've been using a phrase that makes you sound less intelligent than you actually are? I had one of those moments a few years back during a pitch meeting for my startup. I was presenting to potential investors, and I kept saying "I think" before every point I made. "I think our user acquisition strategy will work."
After interviewing over 200 people for various articles, I've become hypersensitive to the subtle ways trust builds or breaks in conversation. And here's what I've discovered: we all use phrases that quietly erode trust, often multiple times a day, completely unaware of the damage we're doing to our relationships and credibility. The fascinating part? These aren't obvious lies or manipulative statements. They're everyday phrases that seem harmless but trigger our brain's ancient alarm systems, making people instinctively pull back from us.
For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively, deploying claims about the world, explanations, advice, encouragement, apologies, and promises, while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models, and have integrated these synthetic interlocutors into their personal and professional lives. An LLM's words shape our beliefs, decisions, and actions, yet no speaker stands behind them.
Last year, a talented programmer friend of mine decided to give vibe coding a try. Vibe coding is the practice of describing to an AI chatbot what kind of program you want and letting the AI write it for you. In a matter of minutes you can have new software in front of you and just start using it. At least, in theory. This is what LLMs (large language models) are supposed to be best at: generating usable software for professional developers.
By comparing how AI models and humans map these words to numerical percentages, we uncovered significant gaps between humans and large language models. While the models do tend to agree with humans on extremes like 'impossible,' they diverge sharply on hedge words like 'maybe.' For example, a model might use the word 'likely' to represent an 80% probability, while a human reader assumes it means closer to 65%.
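To make that comparison concrete, here is a minimal sketch of how such a gap could be measured. The word-to-percentage values below are hypothetical placeholders, not figures from the study, and the function name is mine.

```python
# Hypothetical sketch: compare how a model and human readers map hedge
# words to percentages. All numbers below are illustrative placeholders.

hedge_words = {
    # word: (assumed model-implied %, assumed human survey mean %)
    "impossible": (2, 3),
    "unlikely":   (25, 20),
    "maybe":      (55, 40),
    "likely":     (80, 65),
    "certain":    (98, 96),
}

def divergence(mapping):
    """Return per-word gaps and the mean absolute gap, in percentage points."""
    gaps = {w: abs(model - human) for w, (model, human) in mapping.items()}
    return gaps, sum(gaps.values()) / len(gaps)

per_word, mean_gap = divergence(hedge_words)
for word, gap in per_word.items():
    print(f"{word:>10}: {gap} point gap")
print(f"mean absolute gap: {mean_gap:.1f} points")
```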
Take the surprise some have expressed in recent years upon finding out that the expression to "picture" something in one's head isn't just a figure of speech. You mean that people "picturing an apple," say, haven't been just thinking about an apple, but actually seeing one in their heads? The inability to do that has a name: aphantasia, from the Greek word phantasia, "image," and the prefix a-, "without."
Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback). During "refinement," the model gravitates toward the center of the Gaussian distribution, discarding "tail" data (the rare, precise, and complex tokens) to maximize statistical probability. Developers have exacerbated this through aggressive "safety" and "helpfulness" tuning, which deliberately penalizes unconventional linguistic friction.
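As a rough illustration of the mechanism being described, here is a toy sketch of greedy decoding versus sampling over a made-up next-token distribution. The vocabulary and probabilities are invented for the example; this is not the internals of any particular model.

```python
import random

# Toy next-token distribution (invented for illustration).
# The "tail" holds the rarer, more specific words.
next_token_probs = {
    "said":      0.40,
    "stated":    0.25,
    "noted":     0.20,
    "remarked":  0.10,
    "intoned":   0.04,   # tail
    "averred":   0.01,   # tail
}

def greedy_pick(probs):
    """Greedy decoding: always take the single most probable token."""
    return max(probs, key=probs.get)

def sample_pick(probs):
    """Sampling: tail tokens still appear, in proportion to their probability."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print("greedy always yields:", greedy_pick(next_token_probs))
print("sampled draws:", [sample_pick(next_token_probs) for _ in range(10)])
```

Under greedy decoding the tail words never surface, which is the "erosion" the passage describes; sampling (or a higher temperature) keeps them in play.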
I've interviewed over 200 people for articles, from startup founders to burned-out middle managers, and I've discovered something fascinating: intellectual depth isn't about fancy degrees or knowing obscure facts. It shows up in how we communicate. When certain habits dominate someone's style, it reveals a concerning lack of curiosity and critical thinking that goes beyond just being annoying; it fundamentally limits their ability to engage with the world meaningfully.