Open-source platform Check has developed an automated workflow for fact-checking teams to make responding to mass verification requests easier during election campaigns.
What makes Check different from fact-checking projects run by other collaborative teams and news organisations around the world is that once a tip is verified, that status is logged in a database developed by journalism technology non-profit Meedan.
Machine learning and natural language processing are then used to respond automatically to similar or duplicate tips that Check recognises, allowing fact-checking teams to concentrate on fresh leads.
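Neither Check's models nor its matching thresholds are public, so the following is only a minimal Python sketch of the idea: an incoming tip is compared against a log of already-verified claims, and a close match returns the stored verdict instead of queueing a new human review. The `jaccard` token-overlap measure and the 0.6 threshold are deliberately simple stand-ins for the actual NLP.

```python
import re

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def jaccard(a, b):
    """Token-set overlap between two claims (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def find_duplicate(new_claim, verified, threshold=0.6):
    """Return the stored verdict for a near-duplicate claim, if any."""
    for claim, verdict in verified.items():
        if jaccard(new_claim, claim) >= threshold:
            return verdict
    return None  # fresh lead: route to a human fact-checker

# One previously verified claim and its logged status
verified = {"the voting deadline has been extended to friday": "FALSE"}
print(find_duplicate("voting deadline extended to friday?", verified))  # FALSE
print(find_duplicate("who won the cricket match", verified))            # None
```

A production system would use embeddings or a trained similarity model rather than raw token overlap, but the routing logic is the same: match, reply automatically; no match, escalate to a journalist.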
Tom Trewinnard, director of programs at Meedan, said that this was a huge problem during the Verificado project, a collaborative initiative during the 2018 Mexican election involving Animal Político, AJ+ Español, and Pop-Up Newsroom.
"We found it to be very manual, we had many requests for the same thing to be checked and it was very labour-intensive for a journalist to keep responding to every request," he said.
"We were copy-pasting every fact-check we had done into individual users and that was limiting the amount of claims we could check because we had to dedicate people to simply responding."
During this year's Indian elections, Check piloted its automated responses and integrated Smooch, an 'omnichannel' messaging tool, into the workflow.
This meant fact-checking teams could receive public tips not just from WhatsApp but from other messaging platforms, such as Telegram, Twitter DM, Facebook Messenger and WeChat, as well as Slack for manual messaging.
The public were able to send in any media they wanted verified - more than 70,000 links, claims and memes were submitted in total over the eight-week period.
Here is how it works: every user who sends in a near-identical request is shown how the team determined whether the claim is true, false or misleading. This explainer comes with an image card showing the confirmed status, designed to be shared on social media.
"In the same way that images go viral on private messaging apps, the goal is to create image assets that can be shared and forwarded, containing the fact-checked information," Trewinnard said.
But what if fact-checkers get it wrong? Automated processes could end up compounding misinformation if false verifications are sent out to the public and cannot be clawed back.
This is a real possibility for both manual and automated response teams. The upside of an automated system is that it is quicker and easier to correct: as soon as a correction is made, every user who initially requested the verification can be sent a notification with a link to the new status and image assets.
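The correction broadcast described above could be structured along these lines; this is a hypothetical sketch, not Check's actual API. Each fact-check keeps a record of every user it was sent to, so updating the verdict immediately yields the full list of people to notify.

```python
from collections import defaultdict

class FactCheckLog:
    """Hypothetical model: track which users asked about each claim,
    so a correction can be pushed to all of them at once."""

    def __init__(self):
        self.status = {}                     # claim_id -> current verdict
        self.subscribers = defaultdict(set)  # claim_id -> user ids

    def respond(self, claim_id, user_id, verdict):
        """Record that a verdict was sent to a user for this claim."""
        self.status[claim_id] = verdict
        self.subscribers[claim_id].add(user_id)

    def correct(self, claim_id, new_verdict):
        """Update a verdict and return every user to notify."""
        self.status[claim_id] = new_verdict
        return sorted(self.subscribers[claim_id])

log = FactCheckLog()
log.respond("claim-42", "user-a", "TRUE")
log.respond("claim-42", "user-b", "TRUE")
print(log.correct("claim-42", "FALSE"))  # ['user-a', 'user-b']
```

Doing the same thing manually over the WhatsApp Business App would mean reconstructing that subscriber list by hand, which is the tracking problem Trewinnard describes below.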
"If a team was doing this work on WhatsApp without Check and made a mistake, they would have to manually issue a correction to every user who they had sent a fact-check to," he explained.
"This would likely be a massive challenge because using the WhatsApp Business App (as we did in Mexico) it's very hard to track even basics like who has been sent what. It's all completely manual."
Memes are taken seriously too, as they present a real threat as a form of misinformation. This was demonstrated by the memes surrounding Ricardo Anaya, a former presidential candidate in Mexico, during the 2018 Verificado project.
"As a general principle, memes contain a simple message overlaid on some visual image that might be miscontextualised, or the text of the meme might contain misinformation - and that's a common occurrence," Trewinnard said.
The next challenge is the 2020 US election. Although the US is not a big WhatsApp market, he said audiences should be especially vigilant about visual content they might come across on the platform and elsewhere on social media.
"A lot of what we see are recycled video and images from past events and other countries that are used to claim to show something else, and even politicians do this.
"Even Donald Trump made a campaign around US immigration using a video from Morocco that had nothing to do with the US. Visual content in particular can be stripped of its original context very easily," he concluded.
Join our workshop on the latest techniques used to produce fake news material and discuss best practices for news verification at Newsrewired on 27 November at Reuters in London. Head to newsrewired.com for the full agenda and tickets.