From The Register
OpenAI says models trained to make up answers
The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate" and penned by three OpenAI researchers together with Santosh Vempala, a distinguished professor of computer science at the Georgia Institute of Technology. It concludes that "the majority of mainstream evaluations reward hallucinatory behavior." Language models are primarily evaluated with exams that penalize uncertainty: a wrong guess and an honest "I don't know" both score zero, so guessing never costs anything and sometimes pays off. The fundamental problem, the authors argue, is that training and evaluation reward confident guesswork rather than honest admissions of uncertainty.
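The incentive is easy to see in a toy calculation. The sketch below is ours, not the paper's: under accuracy-only grading, a model that assigns probability p to its best candidate answer earns an expected score of p by guessing and exactly zero by abstaining, so guessing dominates whenever p is above zero.

```python
# Toy illustration (not the paper's code) of why binary, accuracy-only
# grading rewards guessing over abstaining.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score under binary grading: 1 for a correct answer,
    0 for a wrong answer, and 0 for answering "I don't know"."""
    if abstain:
        return 0.0       # honesty about uncertainty earns nothing
    return p_correct     # guessing earns p on average, never less than 0

for p in (0.1, 0.3, 0.5):
    print(f"p={p}: guess={expected_score(p, abstain=False):.2f}, "
          f"abstain={expected_score(p, abstain=True):.2f}")

# Guessing strictly beats abstaining for any p > 0, so a model
# optimized against such a benchmark learns to always produce an answer.
```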