Screenshot: the BBC news mood extension. Image: Courtesy BBC

Would you filter out bad news if you could?

This is the question that Alicia Grandjean, software engineer at the BBC, and Tim Cowlishaw, senior software engineer at BBC R&D, wanted the audience to answer.

The developers got the idea when a colleague organised an office mental health day to get the team thinking about the impact technology has on well-being.

"We knew from other research that many young people are turning away from news because it was affecting their mental health," said Grandjean.

So the duo came up with a simple-sounding idea: if specific words, such as 'knife crime' or 'murder', trigger anxiety in readers, they can apply a filter that blurs out sensitive content on the BBC homepage. A trigger warning then informs the reader that the article contains keywords they have marked as sensitive.

Using an experimental algorithm that is not yet available to the public, the team decided to blur out the headline, text and any pictures rather than removing the article altogether.

"It’s not about removing news that people don’t want to see,” said Cowlishaw. “It is about putting control in users’ hands so they can decide for themselves."

The web browser extension remembers all trigger words added by a user and blurs out sensitive content every time they visit the homepage. Readers have the option to unblur an article and read it at any time.
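In outline, such an extension amounts to a content script that checks each headline card against the saved trigger words and applies a CSS blur. Here is a minimal sketch in TypeScript, assuming a Chrome-style extension; the storage key, CSS class and card selector are illustrative assumptions, not the BBC's actual code:

```typescript
// Minimal content-script sketch for a trigger-word blurring extension.
// The "triggerWords" storage key, the CSS class and the card selector
// are illustrative assumptions, not the BBC's actual implementation.

const BLUR_CLASS = "sensitive-blur";

// Load the trigger words the user has previously saved.
async function loadTriggerWords(): Promise<string[]> {
  const stored = await chrome.storage.sync.get("triggerWords");
  return (stored.triggerWords as string[] | undefined) ?? [];
}

// Blur any headline card whose text mentions a trigger word.
function blurSensitiveCards(triggerWords: string[]): void {
  const cards = document.querySelectorAll<HTMLElement>(".headline-card");
  for (const card of cards) {
    const text = card.innerText.toLowerCase();
    if (triggerWords.some((word) => text.includes(word.toLowerCase()))) {
      card.classList.add(BLUR_CLASS);
      // Let the reader unblur and read the story at any time.
      card.addEventListener("click", () => card.classList.remove(BLUR_CLASS), {
        once: true,
      });
    }
  }
}

loadTriggerWords().then(blurSensitiveCards);
```

A stylesheet rule along the lines of .sensitive-blur { filter: blur(8px); } would supply the blur itself, so the story stays on the page and is recoverable with a click.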

The developers see the limitations of such an approach. For example, editorial judgement is still needed to assess how a topic is reported.

Constructive and solutions-focused journalism can improve news audiences' mental health. An article detailing a significant fall in knife crime in this style, for example, could potentially help readers tackle anxiety around that topic.

As for images, human editors enter keywords describing any visual that needs to be blurred out.

Grandjean also tested a 'mood filter' that would help readers filter out articles that do not match their mood on a particular day because they are, for example, difficult or negative.

To do this, she first needed to gauge whether an article is negative, mid-negative or positive. She singled out words such as 'killing' or 'assault' to mark as negative, and went on to weight other words accordingly.
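A word-list scorer of this kind can be sketched in a few lines; the word weights and thresholds below are illustrative assumptions, not the values Grandjean actually used:

```typescript
// Sketch of a word-list sentiment scorer along the lines described above.
// The weights and thresholds are illustrative assumptions only.

const WORD_WEIGHTS: Record<string, number> = {
  killing: -3,
  assault: -3,
  crisis: -2,
  recovery: 2,
  improvement: 2,
};

type Mood = "negative" | "mid-negative" | "positive";

function scoreArticle(text: string): Mood {
  // Split the article into lowercase words and sum their weights;
  // words not in the list contribute nothing.
  const words = text.toLowerCase().match(/[a-z]+/g) ?? [];
  const score = words.reduce(
    (total, word) => total + (WORD_WEIGHTS[word] ?? 0),
    0
  );
  if (score <= -3) return "negative";
  if (score < 0) return "mid-negative";
  return "positive";
}

console.log(scoreArticle("Assault reported amid knife crime crisis"));
// -> "negative"
```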

Human journalists can help the algorithm by entering metadata directly into the article to account for language similarities, too.

For example, if a potential trigger word turns out to be someone's surname, or an article mentions, say, killer whales rather than murderers, the authors themselves would have the opportunity to add a layer of clarity.
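One way such an override could work is for the keyword match to defer to author-supplied metadata. The Article shape and the 'safeTopics' field in this sketch are hypothetical, not part of any published BBC schema:

```typescript
// Sketch of how editorial metadata might override a naive keyword match.
// The Article interface and "safeTopics" field are hypothetical.

interface Article {
  headline: string;
  body: string;
  // Keywords the author has explicitly flagged as non-sensitive in this
  // context, e.g. "killer" in a piece about killer whales.
  safeTopics?: string[];
}

function matchesTrigger(article: Article, trigger: string): boolean {
  const t = trigger.toLowerCase();
  // Editorial metadata wins over the raw text match.
  if (article.safeTopics?.some((topic) => topic.toLowerCase() === t)) {
    return false;
  }
  return (article.headline + " " + article.body).toLowerCase().includes(t);
}
```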

"We try not to add to much cleverness in the system," said Cowlishaw. “Content is everything and a lot depends on how language is used. We try not to change journalistic process or impose anything on reporters."

Both prototypes were used for testing only and are not going to be added to the BBC News website anytime soon.

"We use it to understand the relationship between news and people’s perception of stories," he added.

The developers tested the tool with young students who were already avid news readers but did not want to see potentially triggering stories. This narrow test group is one of the key limitations of the experiment.

"We were very unsure about the idea," said Cowlishaw.

"We have a strong hunch against too much personalisation and this was a bit of a provocation to test our hypothesis.

"It is not a finished product. We wanted to create a tool that will allow us to start a conversation about news and anxiety and it’s working very well."
