The volume of information posted to social media is growing every day, but newsroom resources to source and verify eyewitness media are not growing at the same rate.
Verifying eyewitness media is often a laborious, time-consuming process. But what if it could be automated, if not entirely then at least in part, to make journalists' jobs easier?
Automating verification was the focus of a panel at the International Journalism Festival in Italy yesterday (7 April), where experts discussed the initiatives already in place as well as the potential of automated fact-checking in the future.
“Regardless of how much we talk about verification... every time there is a major event, scandal, natural disaster, we’re seeing the same content being shared,” said Sam Dubberley, co-founder of Eyewitness Media Hub.
So wouldn’t it be great to have a program that would flag up fakes at once and avoid embarrassing retractions from your organisation?
There are already some experiments happening around the world. Dubberley has been working with Sourcefabric to build Verified Pixel, a platform that automates much of the work that goes into image verification.
Verified Pixel brings many helpful verification tools, such as TinEye, Google reverse image search and EXIF data analysis, into one place. “Any time [an] image comes into a newsroom, these tests are run in a standardised manner.”
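The "standardised battery of tests" idea can be imagined as a simple pipeline that runs every incoming image through the same checks in the same order. The sketch below is purely illustrative, not Verified Pixel's actual code: the check functions are hypothetical stand-ins for calls to services like TinEye or an EXIF parser.

```python
import hashlib

def check_known_fakes(image_bytes, known_fake_hashes):
    """Flag images whose hash matches a database of known fakes.
    (A real system would use reverse image search, not exact hashes.)"""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return ("not-a-known-fake", digest not in known_fake_hashes)

def check_minimum_size(image_bytes, min_bytes=1024):
    """Very small files are often thumbnails scraped from old posts."""
    return ("plausible-size", len(image_bytes) >= min_bytes)

def run_standard_checks(image_bytes, known_fake_hashes):
    """Run every check in a standardised order and collect the results."""
    results = {}
    for check in (lambda b: check_known_fakes(b, known_fake_hashes),
                  check_minimum_size):
        name, passed = check(image_bytes)
        results[name] = passed
    return results
```

The point of the design is that no journalist has to remember which tests to run: every image gets the full battery, and the newsroom sees one consolidated report.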
The Reveal Project is a European initiative, co-funded by the European Commission, to make social media verification more efficient.
Jochen Spangenberg, innovation manager for Reveal and Deutsche Welle, explained the Reveal platform is looking to automate parts of the process, and has its sights set particularly on two aspects: fostering collaboration, and coming up with algorithms that can “detect fakes and manipulation in images using a variety of technologies”.
Other promising technologies are being developed elsewhere, from increasingly accurate face recognition software to machine learning initiatives such as TensorFlow.
“As you feed more and more images in, it learns from every time you add a new image,” explained Douglas Arellanes, co-founder of Sourcefabric, who works on Verified Pixel.
“Where I think this can be applied is in the area of traumatic images.” Journalists working with graphic images on social media have identified surprise, or being unprepared to view graphic images, as one of the things that would influence how traumatic the experience is.
“[We could] train a system to say 'here are gory images': warn me before you show me this.”
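The "warn me first" idea Arellanes describes could sit as a thin wrapper around any image classifier's output. The sketch below is hypothetical: it assumes a `gore_probability` score has already been produced by a trained model, and only shows the warning step itself.

```python
def view_with_warning(image_id, gore_probability, threshold=0.8,
                      confirm=input):
    """Hypothetical wrapper: if a classifier has flagged the image as
    likely graphic, ask the journalist to confirm before displaying it."""
    if gore_probability >= threshold:
        answer = confirm(
            f"Image {image_id} was flagged as graphic. View anyway? [y/N] ")
        if answer.strip().lower() != "y":
            return None  # journalist declined; do not display the image
    return f"displaying {image_id}"
```

Passing the confirmation prompt in as a parameter keeps the wrapper testable and lets a newsroom tool swap in its own UI dialog.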
He also highlighted the Truthy project as an important step forward - one of the tools built as part of it is able to identify whether a certain account on Twitter is run by an actual human being or is in fact a bot. This could come in handy when trying to identify the source of an image or a video online.
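Truthy's bot-detection tool combines many account features with machine-learned models; the toy heuristic below only illustrates the kind of surface signals such a system might weigh. All field names and thresholds here are invented for illustration.

```python
def bot_signals(account):
    """Count simple red flags on a hypothetical account record.
    Real systems such as Truthy's classifier use machine learning over
    many more features; these thresholds are invented."""
    signals = 0
    if account.get("tweets_per_day", 0) > 100:   # inhuman posting rate
        signals += 1
    if account.get("default_avatar", False):     # profile never personalised
        signals += 1
    followers = account.get("followers", 0)
    following = account.get("following", 0)
    if followers and following / followers > 20: # follows far more than followed
        signals += 1
    return signals
```

A journalist sourcing an image could use a score like this to decide how much extra scrutiny the uploading account deserves before reaching out.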
Advances are also being made by fact-checking teams, whose work is notably meticulous and time-consuming.
'Rather be right than first'
A project called ClaimBuster helps fight the tidal wave of information out there by “finding checkable claims, verifiable factual statements,” explained Mark Stencel, co-director of the Duke Reporters' Lab.
“ClaimBuster analyses text and this algorithm, this process, has been trained well enough to figure [out] what actually sounds like a factual claim.” This means fact-checkers can move on to the verification stage more quickly.
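ClaimBuster itself is a trained classifier; the toy heuristic below only illustrates the underlying idea of scoring sentences by surface cues, such as numbers and statistical vocabulary, that often mark checkable factual claims. The cue list and scoring weights are invented for illustration.

```python
import re

NUMBER = re.compile(r"\d")
CUES = re.compile(
    r"\b(percent|million|billion|doubled|increased|decreased|highest|lowest)\b",
    re.IGNORECASE)

def claim_score(sentence):
    """Rough 0.0-1.0 score of how much a sentence 'sounds like' a claim."""
    score = 0.0
    if NUMBER.search(sentence):  # contains a figure
        score += 0.5
    if CUES.search(sentence):    # contains statistical vocabulary
        score += 0.5
    return score

def triage(sentences, threshold=0.5):
    """Return check-worthy sentences, highest-scoring first."""
    scored = sorted(((claim_score(s), s) for s in sentences), reverse=True)
    return [s for sc, s in scored if sc >= threshold]
```

Run over a debate transcript, a scorer like this surfaces the sentences worth a fact-checker's attention first, which is exactly the triage step Stencel describes.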
“The challenge for fact-checkers is that it’s a competition between the need for accuracy and speed.
“One of the reasons that so many of the journalists in the fact-checking community are interested in automating... is to accelerate the process of identifying and verifying claims to give us a leg up. But we would always rather be right than first.”
So what could the future look like? Perhaps there will be a fact-checking element to posting on social media, where an alert could pop up if someone was about to share a fake image or a factually incorrect claim.
But Spangenberg believes the process should not be entirely automated, at least not for the foreseeable future. “There will always be a human making the decision. Someone has to be responsible.”