Recent research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants frequently misrepresent news content, regardless of language, country, or platform. The study, involving 22 public service media organisations across 18 countries and 14 languages, evaluated over 3,000 responses from leading AI tools including ChatGPT, Copilot, Gemini, and Perplexity.

The findings are stark: 45 per cent of AI-generated answers had at least one significant issue, with 31 per cent showing serious sourcing problems and 20 per cent containing major accuracy errors such as hallucinated details or outdated information. Gemini performed worst, with significant issues in 76 per cent of its responses, mainly due to poor sourcing.

The report warns that as more people turn to AI assistants for news – now 7 per cent of all online news consumers, and 15 per cent of under-25s – these systemic errors threaten public trust and could deter democratic participation. The EBU and BBC are calling for improved AI responses, better media literacy, and stronger enforcement of information integrity laws.

A new toolkit has been released to help address these concerns, and the research team urges ongoing independent monitoring as AI technology evolves.

This article was drafted by an AI assistant before being edited by a human.

Written by

Jacob Granger
Jacob Granger is the community editor of JournalismUK.
