Credit: Created using Magic Media, a generative AI app by Canva

Generative AI has brought with it as many problems as opportunities.

Naysayers point to the dangers it poses for news and information, the safety of journalists, and the free press. Three experts tell us what impact the technology will have in 2024.

Tread carefully in an election year: Fiona O’Brien, UK bureau director, Reporters Without Borders (RSF)

RSF’s 2023 World Press Freedom Index painted an alarming picture of the state of media freedom, with journalists in only three out of ten countries worldwide able to operate with a satisfactory degree of freedom. Concerns over the proliferation of disinformation – and the speed at which it travels – dominated discussion, as authoritarian leaders harnessed technology to extend their power.

That trend is likely to continue in 2024, a high-stakes year which will see dozens of countries heading to the polls. In a world where information is delivered in real time, societies are deeply divided, and algorithms tend to favour the loudest – rather than the most reliable – voices, news organisations will have a vital role to play in trying to ensure objective, fact-checked, independent journalism cuts through.

Here in the UK, where elections are also expected to interrupt the political calendar, the main challenges for press freedom will remain legal threats to journalists – in particular SLAPPs (Strategic Lawsuits Against Public Participation) – economic pressures, and the alarming rise in online harassment of journalists, especially women.

At RSF, we will be working with partner organisations to engage police, social media platforms and government in tackling this chilling problem – one that is likely to be particularly acute in an election year.

Journalists at risk of being impersonated: Rebecca Whittington, online safety editor, Reach plc

This has been a year of upheaval for Meta and X/Twitter. We have seen a further retreat in journalistic safety support on those platforms – an irony, given that this year also saw the long-awaited Online Safety Act pass into law in the UK. And while we have not had a general election, all the signs point to an imminent contest in the new year.

So, how does this translate into 2024? Well, another of this year’s top trends has been AI. I anticipate that developments in this area will continue apace, bringing with them both opportunities and challenges for the safety of journalists. Elections, ongoing international conflict and bad actors seeking to capitalise on the currency of online attention will drive an increase in the impersonation of journalists and their platforms in online spaces, and AI is likely to assist in this activity. We will also see continued creative use of AI to verify journalists’ identities, and new rules and guidance designed to protect journalistic output and identity.

In 2024, we will start to see what the impact of the Online Safety Act looks like. While much of the bill was watered down, and its impact on journalists reduced as a result, a couple of additions to the act offer new or strengthened protections. These include offences covering false communications and threatening communications, both of which might give the police new powers to gather evidence on, and arrest, those who harass, intimidate and threaten journalists and media organisations.

However, the challenge online violence poses to free speech and a free press has not reduced and continues to adapt as technology changes.

In 2024, I worry we may see pernicious attacks on individual journalists designed to threaten and intimidate, through impersonation, harassment, stalking and legal threats. The noise around online harm against journalists therefore needs to continue. We need to align and keep pushing for this problem to be taken seriously, and for a consistent and proactive response from our lawmakers and police, alongside industry.

I feel there is real hope in the work being done by numerous organisations and individuals. Our challenge next year will be to keep this subject in the spotlight when we face competition for attention posed by a general election, new technological developments and continued disruption to publishing and news.

Publisher-tech company partnerships are a way forward: Gordon Crovitz, co-CEO of NewsGuard and former publisher of The Wall Street Journal

When ChatGPT was unleashed on the world, it came with the warning that it should not be trusted on factual matters. My optimistic prediction for 2024 is that generative AI models will make good progress in reducing their propensity to spread false information.

When my colleagues at NewsGuard red-teamed ChatGPT and Google’s Bard, they found that these AI models, when prompted on topics in the news, responded with false claims between 80 and 100 per cent of the time. This included repeating false claims about the supposed dangers of Covid-19 vaccines and the lie that school shootings in the US were staged with child actors. We have also caught Russian, Chinese and Iranian disinformation operations using AI models to create new false claims, including ones targeting the US and Israel.

The AI models have real incentives to reduce their "hallucinations" (made-up content), especially on news topics, where the risk of spreading misinformation is greatest. The revenue model for AI companies is to license their tools to large companies and governments – the kinds of customers with limited tolerance for falsehoods.

The issue comes down to whether the humans in charge of providing data to the machines will go beyond simply feeding the AI models with whatever has been published on the internet. The first generation of "training data" for AI was the unfiltered internet, with all its healthcare hoaxes, conspiracy theories and propaganda. 

The good news is that the machines have proven themselves smart enough to become far more accurate once they get access to trusted data, such as data identifying which claims in the news are false, and which sources of news are generally trustworthy and which are not.

Also encouraging, ChatGPT’s owner, OpenAI, has just set a much-anticipated precedent showing that AI companies know they need access to trustworthy news – and are willing to pay for it. OpenAI agreed to pay Axel Springer for the rights to use articles from the group’s European and American news brands as training data for ChatGPT. The AI model will also start to include links to Axel Springer’s news articles in its responses on topics in the news, giving users access to the full articles and reassuring them that the model relies on trusted sources of news.

The AI models got off to a rough start, spreading misinformation widely and persuasively. But if given the chance, they can learn.
