
"GPTZero, a detector of AI output, has found yet again that scientists are undermining their credibility by relying on unreliable AI assistance. The New York-based biz has identified 100 hallucinations in more than 51 papers accepted by the Conference on Neural Information Processing Systems (NeurIPS). This finding follows the company's prior discovery of 50 hallucinated citations in papers under review by the International Conference on Learning Representations (ICLR)."
""Between 2020 and 2025, submissions to NeurIPS increased more than 220 percent from 9,467 to 21,575," they observe. "In response, organizers have had to recruit ever greater numbers of reviewers, resulting in issues of oversight, expertise alignment, negligence, and even fraud." These hallucinations consist largely of authors and sources invented by generative AI models, and of purported AI-authored text. The legal community has been dealing with similar issues."
GPTZero identified 100 hallucinations in more than 51 NeurIPS papers and previously found 50 hallucinated citations in ICLR submissions. The availability of generative AI has contributed to fabricated authors, sources, and purported AI-authored text appearing in academic work. A surge in submissions (NeurIPS grew over 220 percent from 2020 to 2025) forced recruitment of many more reviewers, creating oversight and expertise alignment problems and instances of negligence and fraud. The legal field has similarly flagged hundreds of errant AI-attributed citations. The AI submission surge has coincided with rises in substantive errors such as incorrect formulas and miscalculations.
Read at The Register