#llm-detection

from Nature
1 week ago

'A serious problem': peer reviews created using AI can avoid detection

A research team based in China used the Claude 2.0 large language model (LLM), created by Anthropic, an AI company in San Francisco, California, to generate peer-review reports and other types of documentation for 20 published cancer-biology papers from the journal eLife. The journal's publisher makes papers freely available online as 'reviewed preprints', and publishes them alongside their referee reports and the original unedited manuscripts.
Artificial intelligence
from Fortune
1 month ago

MIT researcher gives advice on how to tame, harness AI 'workslop' | Fortune

Workslop (AI-generated, low-value output) reduces productivity, incurs substantial costs, and will drive organizational efforts to detect, filter, and govern AI-generated slop.
from Wikipedia
3 months ago

Wikipedia:Signs of AI writing - Wikipedia

This is a list of writing and formatting conventions typical of AI chatbots such as ChatGPT, with real examples taken from Wikipedia articles and drafts. It is meant to act as a field guide to help detect undisclosed AI-generated content on Wikipedia. This list is descriptive, not prescriptive; it consists of observations, not rules. Advice about formatting or language to avoid in Wikipedia articles can be found in the policies and guidelines and the Manual of Style, but does not belong on this page.
Artificial intelligence