
"A new international study coordinated by the European Broadcasting Union (EBU) and led by the BBC shows that AI assistants distort news content nearly half of the time. The research included 22 public service broadcasters in 18 countries and 14 languages. In the study, more than 3,000 AI responses from Chat GPT, Copilot, Gemini and Perplexity were analyzed by professional journalists."
"In the study, more than 3,000 AI responses from Chat GPT, Copilot, Gemini and Perplexity were analyzed by professional journalists. The results showed that 45% of all responses contained at least one serious error, 31% had inadequate or misleading source citations, and 20% contained major factual errors such as fabricated details or outdated information. Google's Gemini was the worst performer with problems in 76% of its responses - mainly due to a lack of source attribution."
An international study coordinated by the European Broadcasting Union (EBU) and led by the BBC found that AI assistants distort news content nearly half the time. The research covered 22 public service broadcasters across 18 countries and 14 languages. More than 3,000 responses from ChatGPT, Copilot, Gemini and Perplexity were evaluated by professional journalists. Overall, 45% of responses contained at least one serious error, 31% presented inadequate or misleading source citations, and 20% included major factual errors such as fabricated details or outdated information. Google's Gemini performed worst, with problems in 76% of its responses, mainly due to missing source attribution.
Read at Computerworld