
Across the many fields to which artificial intelligence (AI) can be applied, from journalism and translation to self-driving cars, AI is often seen as either a great solution or a terrible problem. However, the truth lies somewhere in the middle, said Lisa-Maria Neudert, D.Phil. candidate at the Oxford Internet Institute and a researcher on the Computational Propaganda Project.

At the International Journalism Festival in Italy today (14 April), Neudert, who has been researching propaganda in the digital age, spoke on a panel that highlighted some of the ways in which AI is being used to both spread and tackle misinformation.

"Propaganda now is automated, data-driven and it is also often user-generated and very easy to launch over social media, so it has become more impactful and more targeted," she said.

"The spread of misinformation through social media is often very much like an oil spill: it will impact the entire information ecosystem and make it a difficult environment for people who are trying to navigate it."

AI can be used in three main ways to create and spread propaganda online, Neudert explained:

  • for content creation – to disseminate misinformation through automated bots, or through the production of 'deepfakes', an AI-based technique that combines and superimposes existing images and videos to fake what a person is doing or saying;
  • for targeting, through AI tools such as Lookalike Audiences for Facebook ads – typically used to reach people with relevant posts or sponsored content based on their previous interactions on social media, these types of tools can also serve to spread propaganda messages at scale to users who are "susceptible to a specific message";
  • for distribution of propaganda through automated accounts – closely linked to targeting but focused more on the algorithms promoting misinformation, for example by making false news trend on Twitter or Facebook, which happens because "AIs are strangely like humans and will respond to what people find interesting."

"Now the money is going into conversational AI to make it smart and human-like, but the problem is those technologies are often developed without anyone thinking of the social implications.

"Amazon's Alexa can do great things, but that technology can also be used for propaganda, so I think then we start to question if we want this technology readily available or if we want to think about how to handle it as a society."

So how can AI be applied to tackle misinformation and help solve this issue?

Firstly, artificial intelligence can be used to identify problematic content, through systems that analyse large amounts of data and flag material as potential misinformation so that it can be debunked accordingly.

"Think about when you see a post from a friend on Facebook and you're asking yourself whether it's serious or genuine – it's the same problem AI is having too. Identification is highly contextual and very sensitive to meaning and human interpretation."

AI can also help with verification and corrections, Neudert said, for example fact-checking claims made by political figures on TV, something Full Fact has been experimenting with in the UK. The independent charity has been building two tools for automated fact-checking, which track and record repetitions of false claims, and identify both unverified and already verified claims in TV subtitles to instantly create fact-checks in response.

However, this can be challenging because of the context of the data involved and the need for structured databases, she added. "It's not as easy as matching the claim with a piece of debunking information; you need databases to train your AI to be able to do those tasks."
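As a rough illustration of why that matching step is harder than it sounds, the sketch below compares a new claim against a small, hypothetical database of already fact-checked claims using surface-level text similarity. The database entries, function name and threshold are invented for the example; real systems like Full Fact's have to cope with paraphrase, negation and context that a simple similarity score misses.

```python
# Sketch of matching a new claim against a database of already
# fact-checked claims. Entries and threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical database of claims that have already been fact-checked
fact_checked = [
    ("Crime has doubled in the last five years",
     "False: official figures show a small fall"),
    ("The health budget was cut by 10% this year",
     "Misleading: spending rose in cash terms"),
]

def match_claim(new_claim, database, threshold=0.4):
    """Return the stored verdict for the most similar known claim,
    if surface similarity clears a (hypothetical) threshold."""
    claims = [claim for claim, _ in database]
    vectors = TfidfVectorizer().fit_transform(claims + [new_claim])
    scores = cosine_similarity(vectors[-1], vectors[:-1])[0]
    best = scores.argmax()
    if scores[best] >= threshold:
        return database[best]
    return None  # no confident match: route to a human fact-checker

# A repeated version of a known false claim, slightly reworded
print(match_claim("Crime levels have doubled over five years", fact_checked))
```

A reworded repetition of a known claim may still match on shared vocabulary, but a full paraphrase would not – which is why, as Neudert says, structured databases and trained models are needed rather than simple matching.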

"With AI, I feel like we are experiencing something called 'tech solutionism', where the narrative is that we can already get it right but we don't have the data, either because it's not being given to us or because it's not structured.

"But even if we get that data right it still doesn't mean we will train the AI in the right way, there are still questions around bias, context, scale and about talent. [AI] is definitely going to be put to the test to fight against misinformation, but it's not going to be as simple as we think."
