At Newsrewired next month, we will hear from two industry experts about the latest techniques used to produce misleading and false content, the tools and advice needed to tackle misinformation, and best practice for verifying user-generated content online.
Joining us for that panel is Hazel Baker, global head of user-generated content newsgathering at Reuters, who directs a team of reporters dedicated to sourcing, verifying and clearing UGC for distribution to clients as quickly as possible.
As part of her research into the phenomenon of ‘deepfakes’ - AI-based technology used to alter video content - Baker and her colleagues created their own deepfake to better understand what goes into their production, and to use it as an example of what red flags may appear in the verification process.
Baker spoke to Journalism.co.uk about content verification, user-generated content and the threat that deepfakes pose to quality journalism in the years ahead.
Q You are speaking on a Newsrewired panel discussing how quality journalism can thrive in an age of disinformation. For those who are unfamiliar with your work, what is your connection to this topic?
I lead a team at Reuters that verifies third-party material. It is massively important to our storytelling because we have to be extremely careful with anything that is not filmed by a trusted journalist.
We spend a lot of time verifying pieces of video and imagery that come to our desk and through this work, we do encounter material that is not authentic.
Sometimes it is disinformation and sometimes it is misinformation - to be honest, when we encounter it we do not always know whether it is being deliberately shared to mislead or whether it is accidental. Other times we do not have time to look into that aspect of it, but we are definitely aware of it on a daily basis.
Q What can delegates at the conference look forward to hearing about from you?
The key takeaway I am hoping to deliver at the conference is that the best single way that I have found to arm yourself against disinformation in this environment is to learn everything you can about your enemy.
It is really about understanding the types of misleading visuals. At one end of the scale are deepfakes, which is why Reuters created its own deepfake video.
I will be talking a bit about that experiment and why we did it, and I will also show how, although we have seen a lot of fearmongering headlines about deepfakes, they are actually just the latest iteration of fake video that we have encountered before.
Q What sort of challenges does user-generated content pose for journalists when reporting breaking news events?
Very often we see dramatic pictures circulating quickly on social media, but they are often divorced from their source. They are scraped and copied, and without speaking directly to the source, we cannot establish their authenticity.
So, although we can see some really important footage at times, we do not distribute it on the Reuters wire until we manage to trace it back to the source and ask the questions we need to, so that we feel happy it is an authentic portrayal of events as they occurred.
Other difficulties are that the volume of material can be quite high, particularly for globally interesting events. Sifting through that to find original sources and original material is challenging and time-consuming. The social networks are not always the easiest to search and going back chronologically can be tricky.
We need to find the first people to share material on social media, which can be easier said than done. Then, once we find the person who filmed it, we have to get hold of them in a timely manner and win their trust, which is not a quick process. That can be a challenge, but one that we think we are well placed to overcome.
Disinformation is so easy to generate and spread that it's important for every journalist to understand basic verification skills. Read our new guide out this week: https://t.co/EBieYpcT34 pic.twitter.com/DM8uHL33z2— First Draft (@firstdraftnews) October 17, 2019
Q Are the problems raised by disinformation only going to get worse with the rise of deepfakes?
I think that the challenge of misinformation is only going to get more significant and I think that is probably true for every newsroom in the world. The means by which people can share information are increasing and definitely deepfakes have the potential to generate content which is even harder to detect.
That means we have got to invest properly in this area and make sure that we are well equipped to face that threat.
I started to research deepfakes a year ago and I had very few examples to work from, which is one of the key reasons why we made our own. Fast-forward nearly twelve months and there are certainly way more examples out there. The latest report from DeepTrace suggests there are 14,600 deepfakes in the wild. It is going to be an issue in the mainstream consciousness very soon.
Most deepfakes weren’t created to mess with elections. https://t.co/F0dNgUaL5R— MIT Technology Review (@techreview) October 16, 2019
The other aspect to note is, even though we have not had a properly explosive story that has developed from deepfakes, we are starting to see members of the public question whether authentic video could be deepfaked. If we get to that point, where people are worried whether real footage is deepfaked, it has got to be a topic we treat extremely seriously.
Q Is there any light at the end of the tunnel for journalists trying to fight against misinformation?
The light at the end of the tunnel is that more and more newsrooms are committing resources to verifying material and certainly, at Reuters, we have grown our UGC team over the last few years. We work very collaboratively with our colleagues at different bureaus because there is value in having a good verification process and understanding where different kinds of knowledge fit in.
Collaborative working is the key to all of this: if you can speak to local experts and people who understand the context of the event a video is showing, then you are in a better position to verify it. We are also seeing researchers in academic labs looking into how technology can help with deepfake detection.
Now, technology is not going to be the golden answer. But I think the combination of working with those in the AI industry and using our newsroom expertise is probably going to generate a strong force to tackle this problem.
Totally agree with this point 👉— Hazel Baker (@HazelBakerNews) June 24, 2019
“This constant appeal to a near-future of perfectly streamlined technological solutions distracts and deflects from the grim realities we presently face.”https://t.co/7OAfxwuTP7
Q Which other panel are you most looking forward to at Newsrewired?
I am particularly looking forward to Yusuf Omar’s talk. The future of participatory journalism, especially in the age of 5G, is a topic which I find exciting and full of possibility.
Check out Hazel Baker’s panel and much more at our Newsrewired conference, which takes place on 27 November at Reuters, London. Head to newsrewired.com for the full agenda and tickets.