Audiences still strongly value human oversight, transparency and responsible use of AI, finds a new report by the Reuters Institute. Newsrooms that prioritise clear editorial standards and human-led reporting will be best positioned to weather the AI-driven media landscape.
The Generative AI and News Report 2025 looks into how people are using - and scrutinising - AI in journalism across six markets. It finds that the public’s use of generative AI for news is rising, but trust in AI-generated journalism remains low. The public thinks other sectors - like search engines, science, healthcare - are adopting the tech more widely, but expects use for news to grow. Here's what it means for your newsroom:
Audiences are becoming more AI literate
Key finding: One third (34 per cent) of people use AI tools weekly, double the figure from last year. Nearly two thirds (61 per cent) have used an AI system at some point, up 21 percentage points.
So what? As public understanding grows, robust editorial standards around AI will be essential to maintain trust. The same is true for innovative use cases, as AI-literate audiences will be quicker to call out poor practices or low-quality content.
Significant ‘comfort gap’ between AI and human-led news
Key finding: Just 12 per cent of people are comfortable with fully AI-generated news. That figure rises as human involvement increases, reaching 62 per cent for entirely human-made content.
So what? Acceptance of AI rises with human oversight: audiences are most comfortable when humans lead the work and AI assists.
Most people do not regularly see AI features or labelling in news
Key finding: Three in five (60 per cent) say they have not seen audience-facing AI features, and only about one in five (19 per cent) see AI labelling on a daily basis.
So what? There is a disconnect between AI implementation and public visibility. Newsrooms have an opportunity to make AI features and labelling more visible and meaningful, fostering transparency and trust.
AI use is much higher among younger audiences
Key finding: More than half (59 per cent) of 18-24s use AI weekly, nearly three times the rate among over-55s (20 per cent). The popularity of ChatGPT explains much of the trend.
So what? Younger audiences are leading in AI adoption and may be more open to innovative use cases. On the other hand, older audiences may need more support and reassurance about editorial standards.
Information-seeking has overtaken media creation
Key finding: More people now use AI to get information (24 per cent) than to create media (21 per cent), a reversal of last year's picture. Researching topics is the most popular activity.
So what? There is an opportunity for newsrooms to position themselves as trusted sources within AI-driven information-seeking habits, for example by building their own AI-powered search tools.
Sceptical Brits
Key finding: People in the UK are amongst the most exposed to AI-generated answers (64 per cent, the second highest), but are the least trusting of those responses (40 per cent trust).
So what? Demonstrating responsible, ethical use of AI - and communicating this openly - will be crucial to winning over a more cautious and discerning UK audience.
The public thinks news is already AI-ready
Key finding: People believe AI usage is more prevalent in news (51 per cent) than in the average sector (41 per cent), though this perception is even stronger for social media and search engine companies (67 and 68 per cent respectively).
So what? The optimists outweigh the pessimists when it comes to the potential benefit of AI for individual lives. Since there is a widespread perception that AI is already embedded in news production, newsrooms should be proactive in communicating how and why they use AI.
This article was drafted with the help of an AI assistant before it was edited by a human