The three-year project, named Pheme, is a European Union-funded collaboration between an international group of researchers led by the University of Sheffield.
Lead researcher Dr Kalina Bontcheva told Journalism.co.uk that the idea for the project had come about following the circulation of rumours within tweets during the London riots in 2011, such as the false claim that animals had been set free from London Zoo.
"The problem with [verification] is that it can take quite a lot of people's time and effort, which isn't always tenable when we're talking about responding to events unfolding in real-time, which is what normally happens in a newsroom," she explained.
The idea of Pheme, she said, is to automate certain verification processes to make it easier, and faster, for journalists to use the social web effectively in a breaking news situation, when such platforms are often flooded with information.
Pheme will attempt to sort online rumours into four categories: speculation, controversy, misinformation (where false information is circulated unknowingly) and disinformation (where something is spread maliciously).
To do this, it will automatically assess the authority of sources such as news outlets, individual reporters, potential eyewitnesses and automated 'bots'.
It will also look at the text of the tweet itself. As Bontcheva explained: "Is it emotionally loaded with swearwords or shouting - words in all caps? What kind of verbs are used? Is there any critical language? What are the emotions - are they angry?"
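The surface-level text signals Bontcheva describes — shouting in all caps, swearwords, emotionally loaded language — could be sketched roughly as follows. Pheme's actual features, lexicons and models are not detailed in the article, so the word lists and scoring here are purely illustrative assumptions:

```python
import re

# Placeholder lexicons -- Pheme's real word lists are not public here
SWEARWORDS = {"damn", "hell"}
ANGER_WORDS = {"furious", "outrage", "angry"}

def text_signals(tweet: str) -> dict:
    """Extract crude emotional-loading signals from a tweet's text."""
    words = re.findall(r"[A-Za-z']+", tweet)
    # "Shouting": any word of 3+ letters written entirely in capitals
    caps = [w for w in words if len(w) > 2 and w.isupper()]
    lowered = [w.lower() for w in words]
    return {
        "shouting": len(caps) > 0,
        "swearing": any(w in SWEARWORDS for w in lowered),
        "anger": any(w in ANGER_WORDS for w in lowered),
    }

signals = text_signals("ANIMALS set free from the zoo?! This is outrage, damn it")
print(signals)
```

In a real system these boolean flags would feed a trained classifier alongside source-authority and propagation features, rather than being used directly.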
Pheme will also analyse what Bontcheva called the "propagation factor" to examine the "conversations and dialogues" around the tweet to identify any suspicion that the information is controversial or untrue.
The results will be displayed in a visual dashboard which shows the dynamics of a developing rumour and makes it easier for a reporter to "sift through" information.
The Swiss Broadcasting Corporation, swissinfo.ch, will test the initial platform when it becomes available "within the next 18 months", although the project is still in the very early stages of development.
However, while the platform aims to enable speedier verification of content from social media, there are some elements, said Bontcheva, that remain "a human job".
For example, the platform will only attempt to verify text, not images, which Bontcheva said will "only be analysed in the context of the text that they appear in".
"With the verification process not everything can be done by a machine, still a lot of the work has to be done by a human."