Have you ever read an article or social post and thought, "This is terrible! I bet it was written by AI!"? Most people know bad AI writing when they see it. But unless you're a closet copyeditor, it's surprisingly hard to put your finger on exactly why AI writing sucks. Now, Wikipedia's editors have released what amounts to a master class in the clichés, strange tropes, obsequious tones of voice, and other assorted oddities of AI-generated prose.
As one of the internet's most trusted sources of information, Wikipedia is uniquely exposed to the risks of LLM-generated content. Large language models love to pontificate on random topics, even when they have very little actual knowledge. Wikipedia covers many of these random topics, from the ash content of Morbier cheese to the gory details of Justin Bieber's love life. And because Wikipedia famously crowdsources its information through a network of volunteer contributors and editors, it is especially vulnerable to a flood of low-quality, AI-generated edits.