
"Developer Siqi Chen says he created the tool, called Humanizer, by feeding Anthropic's Claude the list of tells that Wikipedia's volunteer editors put together as part of an initiative to combat "poorly written AI-generated content." Wikipedia's guide contains a list of signs that text may be AI-generated, including vague attributions, promotional language like describing something as "breathtaking," and collaborative phrases, such as "I hope this helps!""
"The GitHub page provides some examples on how Humanizer might help Claude detect some of these tells, including by changing a sentence that described a location as "nestled within the breathtaking region" to "a town in the Gonder region," as well as adjusting a vague attribution, like "Experts believe it plays a crucial role" to "according to a 2019 survey by..." Chen says the tool will "automatically push updates" when Wikipedia's AI-detecting guide is updated."
Humanizer is a custom Claude skill that strips indicators of AI-generated text, using the list of tells compiled by Wikipedia's volunteer editors. The tool targets vague attributions, promotional adjectives, and collaborative phrases, replacing them with more specific, neutral language. Example transformations include changing "nestled within the breathtaking region" to "a town in the Gonder region" and converting "Experts believe it plays a crucial role" into a cited phrasing such as "according to a 2019 survey by..." Humanizer's GitHub page states the tool will automatically push updates when Wikipedia's guide changes. AI companies will likely adjust their models to avoid such tells once they are widely publicized, as happened with em-dash usage.
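Conceptually, a skill like this amounts to a rewrite pass driven by instructions that enumerate the tells. Below is a minimal sketch of that idea using the Anthropic Python SDK; the model name, prompt wording, and `humanize` helper are illustrative assumptions, not code from the actual Humanizer repository.

```python
# Sketch of a Humanizer-style rewrite pass, assuming the `anthropic` package
# is installed and ANTHROPIC_API_KEY is set in the environment. The prompt
# text and model name are placeholders, not taken from the Humanizer repo.
import anthropic

# A few tells paraphrased from Wikipedia's signs-of-AI-writing guide.
TELLS = """\
- Vague attributions ("Experts believe...") -> cite a concrete source or cut the claim
- Promotional adjectives ("breathtaking", "nestled") -> neutral, factual phrasing
- Collaborative filler ("I hope this helps!") -> remove entirely
"""

def humanize(text: str, model: str = "claude-sonnet-4-20250514") -> str:
    """Ask Claude to rewrite `text`, stripping the listed AI-writing tells."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model=model,
        max_tokens=1024,
        system=(
            "Rewrite the user's text so it no longer shows these signs of "
            "AI-generated writing. Preserve the meaning; change only the style:\n"
            + TELLS
        ),
        messages=[{"role": "user", "content": text}],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(humanize("Nestled within the breathtaking Gonder region, this town..."))
```

The actual skill packages its instructions for Claude to load directly rather than wrapping an API call, but the effect is the same: the tell list acts as a style rubric the model rewrites against, which is also why auto-updating it from Wikipedia's guide keeps the tool current.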
Read at The Verge