Credit: ApolitikNow via Flickr

Deepfakes are an increasing concern for newsrooms: their ability to manipulate video and audio using AI can turn footage into something the original never intended to show.

There are creative uses for the technology (like animating your ancestors or making celebrities sing Numa Numa). At the same time, the disturbing potential to create non-consensual pornography is one of many reasons why the technology is on newsrooms' radars.

At the European Journalism Centre’s (EJC) News Impact Summit on democratising data, leading tech experts and data journalists discussed what to do about this particular issue.

Deepfakes causing headaches

Sam Gregory is a leading deepfake expert from Witness, a project that trains and supports activists around the world to use video safely, ethically, and effectively to expose human rights abuse.

He explains that deepfakes generally fall into two extremes: light-hearted memes, and those with malicious intent.

"It's important to understand both," says Gregory. "The animated single image is the directionality of travel on a lot of deepfakes, which is needing less and less data to fake an image, and you can do it in easy-to-access apps like Impressions or Reface."

Somewhere between those two poles, deepfake technology can remove objects from video. That is problematic for open-source investigations, which have historically relied on background cues to debunk stories.

Lifelike impressions of Tom Cruise playing golf on TikTok sit at the other end of the scale: something like that requires thousands of images, a talented deepfake artist, a very convincing actor and extensive post-production effects.

But news organisations are not always well equipped to deal with the technology. Take, for instance, a video from Myanmar which appeared to show Yangon's former Chief Minister, Phyo Min Thein, implicating a minister in corruption.

Despite widespread belief that the video was fabricated, that was hard to prove, and detection software could not conclusively say it was a deepfake. The technology is not that reliable: even Facebook's best algorithm, created specifically to spot deepfakes, is only 65 per cent accurate.

An alternative comes through 'authenticity provenance infrastructure', which essentially involves tracking at scale how mainstream media has been uploaded, replicated and edited.

The Content Authenticity Initiative is being spearheaded by Adobe, The New York Times and Twitter to "track the attribution, provenance and manipulation of audiovisual media for creative purposes, journalism and activism." Witness co-authored its initial whitepaper.

"They're trying to create an infrastructure where people can choose to add in signals where media was created and how it was edited, so you can see over time what has happened to pieces of media, and provide those signals to a journalist or consumer," Gregory explains.

The problem is the technical burden that this places on journalists in different parts of the world. There are also safety concerns, for example in repressive regimes where citizen journalists do not want to be identified.

The trouble with text

Videos are not the only medium at risk of being manipulated by AI. Text can also be generated using little more than a few basic words and an algorithm.

"This is one of the areas I'm quite concerned about," says Glyn Mottershead, senior lecturer in data journalism from City, University of London

He spotted a blog post on Hacker News, a social news website focusing on computer science and entrepreneurship, which was produced entirely by a text algorithm.

Curious, he punched some bullet points into the tool copy.ai, and it spat out a plausible - but untrue - description of his course.
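copy.ai's own system is proprietary, but the underlying mechanic can be sketched with an open model: hand a large language model a few bullet points as a prompt and let it continue the text. A rough illustration using the Hugging Face transformers library and the small, openly available GPT-2 model (not the model commercial tools use, and the course notes below are invented):

```python
# pip install transformers torch
from transformers import pipeline

# A small open model; commercial writing tools use far larger ones.
generator = pipeline("text-generation", model="gpt2")

# A few bullet points stand in for the notes a user might type in.
prompt = (
    "Course: MA Data Journalism\n"
    "- taught in London\n"
    "- covers scraping, statistics and visualisation\n"
    "Course description: "
)

result = generator(prompt, max_new_tokens=80, do_sample=True)
print(result[0]["generated_text"])
```

The output reads fluently, but nothing in the pipeline checks whether any of it is true - which is exactly the concern.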

Automated text has its upsides: news publishers are looking at "robot journalism" to take on menial, labour-intensive and time-consuming tasks. Left unchecked or in the wrong hands, though, it could become an unwieldy and dangerous source of misinformation.

"If people are able to just generate text really quickly and easily in a black box system - so we don't know what's going on behind it - that's very concerning," he continues.

Are tools the answer?

As the saying goes, sometimes you have to fight fire with fire. Artificial intelligence itself is thought to be able to spot discrepancies between real and fake assets. But not only are the current tools not reliable enough, media companies often lack the in-house expertise to use what is available - and even then, journalism lags behind other industries in terms of what is on offer.
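Many detection approaches follow roughly the same pattern: sample frames from a video, run each through a trained classifier, and aggregate the scores. The sketch below only shows that skeleton - score_frame is a placeholder for a real model, and the whole thing is an illustrative assumption rather than any particular vendor's tool.

```python
# pip install opencv-python
import cv2

def score_frame(frame) -> float:
    """Placeholder: a real detector would run a trained classifier here,
    e.g. a network that outputs the probability a face was synthesised."""
    raise NotImplementedError

def score_video(path: str, sample_every: int = 30) -> float:
    """Sample frames from a video and average the per-frame 'fake' scores."""
    cap = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every == 0:
            scores.append(score_frame(frame))
        index += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```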

That is according to Maria Amelie, data journalist and co-founder of Factiverse. She has seen that the worlds of PR and marketing have powerful monitoring tools - a bit like a super-powered search machine - to track how media moves on the internet.

Factiverse is working on a model powered by natural language processing: a user inputs (presumably dubious) text and it generates a range of credible sources which back up what is being claimed. It also gives an indication of reliability on a 'supported versus unsupported' scale.
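Factiverse has not published its model, but one common way to score whether a retrieved source supports a claim is natural language inference: a pretrained model judges whether a passage of evidence entails, is neutral towards, or contradicts the claim. Below is a rough sketch using an off-the-shelf NLI model (roberta-large-mnli), not Factiverse's system; mapping entailment to 'supported' and contradiction to 'unsupported' is an assumption, and the claim and evidence are invented examples.

```python
# pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# An openly available NLI model; Factiverse's own model is not public.
MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def support_score(evidence: str, claim: str) -> dict:
    """Return the model's probabilities that the evidence entails,
    is neutral towards, or contradicts the claim."""
    inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    return {model.config.id2label[i]: round(float(p), 3)
            for i, p in enumerate(probs)}

claim = "The minister accepted a bribe in 2019."
evidence = "Court records show the minister was cleared of all bribery charges."
print(support_score(evidence, claim))  # high contradiction -> 'unsupported'
```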

As Mottershead says, though, tools cannot be relied on: they can disappear if they run out of funding, or become unviable if prices go up. Young journalists must be able to adapt to new technology as it becomes available, meaning universities must build in a basic working knowledge of these tools.

"It's about removing fear and helping people who have a love of words, communications and getting to the bottom of problems, to make that crossover into technology," he concludes.
