BBC News finds that AI tools "distort" its journalism into "a confused cocktail" with many errors
Briefly

The BBC evaluated four generative AI tools (ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity) on news-related queries and found substantial inaccuracies in their responses. Over half of the AI-generated answers contained significant issues, including altered quotes and misrepresented facts drawn from the BBC's own articles, with outright factual errors in 19% of answers. Specific mistakes included incorrect statements about NHS vaping guidance, a misrepresented timeline of a crime victim's discovery, and factual inaccuracies about notable individuals. The findings underscore the challenges of relying on AI tools for accurate news dissemination.
Read at Nieman Lab