
"First, have a large language model write it, then have another program humanize it. That's a curious trend I'm seeing today, and, at least to me, it's a concerning one. Not because it feels scandalous or new, but because it has become oddly normalized in the context of making computer-generated text seem 'more human.' These sites are widely available and widely promoted. Here's a quote from Humanize AI that sums up their promoted role."
"The purpose isn't subtle: it's to help users conceal the fact that a large language model was involved at all. In a way, it's not a humanizer but a dehumanizer. At first glance, this might seem like a superficial concern. Writing has always been edited, and content has always been influenced by a wide variety of sources, some human and some not."
AI fluency increasingly disguises the absence of independent thought by making machine-produced language sound convincingly human. Specialized tools take LLM output and rephrase it to appear authored by a person, with the explicit goal of hiding machine involvement. This practice optimizes for plausible authorship rather than clarity, originality, or intellectual struggle. Language's role as evidence of thinking shifts toward a performance of mind, erasing the signals that once revealed effortful reasoning. As emphasis moves from thinking well to sounding human, the ability to discern genuine thought and authorship erodes, and with it trust and accountability in communication.
Read at Psychology Today