
"In theory, the robots (by which I mean generative AI) talk just like us, given that their large language models (LLMs) have been trained on billions upon billions of statements you and I have made online. That Hacker News thread in which you waxed rhapsodic about JUST HOW WRONG someone is about Flash on the internet? It's now training data for someone's LLM-augmented doctoral dissertation. The LLMs have "learned" from all this online chatter to generate text that sounds like a human being."
"Now, Cantrill is known for having strong opinions, but he's not wrong when he argues this AI-generated writing is "stylistically grating." The biggest tell? "Em-dashes that some of us use naturally-but most don't (or shouldn't)." OpenAI founder Sam Altman just fixed this last annoyance, but not before many of us realized that in our attempts to make our lives easier through AI, we inadvertently made everyone else's lives worse."
Large language models have been trained on billions of online statements and can generate text that approximates human speech, but the results often lack a genuine individual voice. Training corpora sweep up personal posts, including Hacker News threads, so LLM output reflects aggregated online chatter rather than any one person's expression. Many readers find AI-generated prose stylistically grating, with tells such as unnatural em-dash usage, and even prominent engineers like Cantrill have criticized its quality. Targeted fixes, such as OpenAI's recent adjustment, remove particular annoyances but do not restore the distinctiveness of a human voice. Relying on LLMs to augment writing risks homogenizing communication and worsening readers' experience, so the emphasis should return to authentically expressive human writing.
Read at InfoWorld