New research indicates that popular AI assistants frequently misrepresent news content, raising concerns about public trust and the integrity of information. A large international study found that nearly half of responses to news queries contained significant issues, with some AI models performing particularly poorly on sourcing and accuracy.
Key Takeaways
Nearly half of AI assistant responses to news-related queries contained significant issues.
About a third of responses contained serious sourcing errors, including incorrect or missing attribution.
Accuracy problems, such as outdated information and factual inaccuracies, appeared in one in five responses.
The study highlights potential risks to public trust and democratic participation.
The Scope of the Problem
A joint study by the European Broadcasting Union (EBU) and the BBC, involving 22 public service media organisations across 18 countries, analysed 3,000 responses from leading AI assistants like ChatGPT, Copilot, Gemini, and Perplexity. The research assessed accuracy, sourcing, and the ability to distinguish between fact and opinion across 14 languages.
Overall, 45% of the AI responses exhibited at least one significant issue, with 81% showing some form of problem. A third of the responses were found to have serious sourcing errors, such as missing or incorrect attribution. Accuracy issues, including outdated information, were present in 20% of all responses.
Specific Examples of Inaccuracies
The study cited several examples of AI misrepresentations. Gemini, for instance, reportedly made incorrect statements about changes to laws concerning disposable vapes. ChatGPT was found to have reported Pope Francis as the current Pope several months after his death. Other instances included AI assistants misidentifying current political leaders or misrepresenting details of legal cases and health advice.
Gemini, Google's AI assistant, showed a particularly high rate of sourcing issues, with 72% of its responses flagged, compared to less than 25% for other assistants. While some AI companies acknowledge the issue of "hallucinations" – the generation of incorrect or misleading information – and state they are working to resolve it, the research suggests these problems are systemic.
Impact on Public Trust
With AI assistants increasingly used as alternatives to traditional search engines for news, the EBU warns that public trust could be significantly undermined. Jean Philip De Tender, EBU Media Director, stated, "When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation." The findings underscore the need for AI companies to be held accountable and to improve the reliability of their news-related responses.
Calls for Accountability and Improvement
The research urges AI companies to take greater responsibility for how their products handle and redistribute news. The EBU and other media groups are calling for governments to enforce existing laws on information integrity and for independent monitoring of AI assistants. A campaign titled "Facts In: Facts Out" has been launched, demanding that AI tools ensure factual accuracy and proper attribution when using news content.
