It's starting to look like we'll never come up with a good way to tell what was written by AI and what was written by humans | Fortune
Briefly

"People and institutions are grappling with the consequences of AI-written text. Teachers want to know whether students' work reflects their own understanding; consumers want to know whether an advertisement was written by a human or a machine. Writing rules to govern the use of AI-generated content is relatively easy. Enforcing them depends on something much harder: reliably detecting whether a piece of text was generated by artificial intelligence."
"Some studies have investigated whether humans can detect AI-generated text. For example, people who themselves use AI writing tools heavily have been shown to accurately detect AI-written text. A panel of human evaluators can even outperform automated tools in a controlled setting. However, such expertise is not widespread, and individual judgment can be inconsistent. Institutions that need consistency at a large scale therefore turn to automated AI text detectors."
The problem of AI text detection
"The basic workflow behind AI text detection is easy to describe. Start with a piece of text whose origin you want to determine. Then apply a detection tool, often an AI system itself, that analyzes the text and produces a score, usually expressed as a probability, indicating how likely the text is to have been AI-generated. Use the score to inform downstream decisions, such as whether to impose a penalty for violating a rule."
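The quoted workflow (score a text, then act on a threshold) can be sketched in a few lines of Python. This is a minimal illustration, not a real detection method: `detect_ai_probability` and `flag_for_review` are hypothetical names, and the scoring heuristic inside is an arbitrary placeholder standing in for a trained classifier.

```python
def detect_ai_probability(text: str) -> float:
    """Placeholder detector: return a score in [0, 1].

    A real system would run a trained classifier over the text;
    this toy heuristic (lower sentence-length variance -> higher
    score) exists only to make the workflow runnable.
    """
    sentences = [s for s in text.split(".") if s.strip()]
    if not sentences:
        return 0.0
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    # Clamp to [0, 1] so callers can treat the score as a probability.
    return max(0.0, min(1.0, 1.0 / (1.0 + variance)))


def flag_for_review(text: str, threshold: float = 0.8) -> bool:
    """Downstream decision: flag text whose score crosses a chosen threshold."""
    return detect_ai_probability(text) >= threshold
```

The key design point the article raises sits in that last function: the score itself decides nothing; an institution must still pick a threshold and decide what a flag triggers, which is where enforcement becomes hard.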
AI-written text creates practical challenges for teachers, consumers, and institutions that need to verify authorship. Writing rules for AI-generated content is straightforward; enforcing those rules depends on reliably detecting whether text was machine-generated. Human evaluators can sometimes detect AI-written text accurately, especially experienced AI-tool users, and panels can outperform automated tools in controlled settings, but such expertise is not widespread and individual judgments are inconsistent. Institutions that need consistency at scale therefore rely on automated detectors, which score a text for the likelihood that it was machine-generated; using those scores responsibly still requires answering background questions about which tools are used, who has access to them, and how much text is available to analyze.