A Wikipedia Group Made a Guide to Detect AI Writing. Now a Plug-In Uses It to 'Humanize' Chatbots
Briefly

"On Saturday, tech entrepreneur Siqi Chen released an open source plug-in for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called Humanizer, the simple prompt plug-in feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plug-in on GitHub, where it has picked up more than 1,600 stars as of Monday."
"Chen's tool is a " skill file" for Claude Code, Anthropic's terminal-based coding assistant, which involves a Markdown-formatted file that adds a list of written instructions (you can see them here) appended to the prompt fed into the large language model that powers the assistant. Unlike a normal system prompt, for example, the skill information is formatted in a standardized way that Claude models are fine-tuned to interpret with more precision than a plain system prompt."
Humanizer is an open-source prompt plug-in for Anthropic's Claude Code that applies 24 language and formatting constraints drawn from Wikipedia editors' list of chatbot giveaways. The constraints ship as a standardized "skill file" that Claude Code appends to the prompt as explicit written instructions, which the model interprets more precisely than a plain system prompt. The GitHub release drew more than 1,600 stars within two days. In limited testing, the resulting output sounded less precise and more casual. The tool does not address factual accuracy, and suppressing the listed patterns can degrade results on tasks where that structure or phrasing is actually useful.
Read at WIRED