Credit: Jacek Dylag on Unsplash

When talking about artificial intelligence in the newsroom, there is too much focus on the technology and not enough on what it actually does. We want to help journalists, whether technophiles or technophobes, to explore this topic in an accessible way. So we are launching a new series that brings you stories from your peers who work with editorial robots.

For journalists at Kronen Zeitung, one of Austria's largest news publishers, the audience is very much part of the newsroom. Readers post some 500,000 comments on the website each month - on top of the discussions on social media - providing an endless supply of user-generated content (UGC) and pushing reporters toward stories that matter to the community.

The "Krone", as it is commonly known, was also one of the first papers in the region to publish reader letters and opinions, giving the audience a place to have their say.

But this is not just a story about community journalism and audience engagement. Half a million comments a month is way too much for a team of human moderators to sift through. With spam, abuse, graphic content and all forms of online hatred, the toll the UGC was taking on the team's mental health was putting a question mark over whether it was all worth it.

And so began the experiment with artificial intelligence-powered tools, brought into the newsroom to help clean up the comment section. The aim was to make readers feel safer in the online space while exposing moderators to as little virtual vitriol as possible.

Yesterday, one word was fine and today it’s an insult.

Peter Zeilinger, head of community at the Krone, said that the first attempts were not particularly successful. The software could not handle dozens of Austrian dialects and the ever-changing meaning of words.

"Yesterday, one word was fine and today it's an insult," says Zeilinger, adding that the moderation team constantly works with users to create or adjust rules around what is and is not allowed in the comment section.

So, about six months ago, the team ditched the first tool and started working with the Finnish company Utopia Analytics. To get the new robot moderator up and running, they fed it more than four million past user comments, along with the moderators' decisions on them, to learn from. After about two weeks of training, the robot was ready to start working on the comment section in real-time.

Today, it filters about three quarters of UGC, flagging up words or expressions that may go against community guidelines. This is a huge time-saver for human moderators, who can then review the flagged content individually and decide whether it should be allowed or deleted.
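The workflow described above - learn from past moderator decisions, then route new comments either straight to publication, to a human reviewer, or to the bin - can be sketched in a few lines. This is a hypothetical toy illustration, not Utopia Analytics' actual model: a real system uses trained language models, while here a simple word-risk score learned from labelled examples stands in for one.

```python
# Toy human-in-the-loop moderation sketch (hypothetical; not the Krone's
# or Utopia Analytics' real system). Learns word risk from past decisions.
from collections import Counter

def train(labeled_comments):
    """labeled_comments: list of (text, allowed: bool) moderator decisions."""
    bad, good = Counter(), Counter()
    for text, allowed in labeled_comments:
        (good if allowed else bad).update(text.lower().split())
    # Score each word by how often it appeared in removed comments.
    return {w: bad[w] / (bad[w] + good[w]) for w in bad}

def moderate(comment, scores, block=0.8, review=0.4):
    """Return 'publish', 'review' (send to a human), or 'block'."""
    risk = max((scores.get(w, 0.0) for w in comment.lower().split()),
               default=0.0)
    if risk >= block:
        return "block"
    return "review" if risk >= review else "publish"

# A handful of past moderator decisions stands in for millions of them.
history = [
    ("great article thanks", True),
    ("totally agree with this", True),
    ("you are an idiot", False),
    ("idiot take as usual", False),
]
scores = train(history)
print(moderate("thanks for this", scores))  # low risk -> published instantly
print(moderate("what an idiot", scores))    # learned term -> blocked
```

The key design point matches the Krone's setup: only comments in the middle band go to a human, which is what cuts moderators' exposure while keeping them in charge of the judgment calls.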

"There has to be a human who understands the different meanings of words," says Zeilinger.

"We would never change our community management team for robots, that wouldn’t be helpful."

It is not just individual words that can be problematic but also the context in which the comment was made. For instance, an innocent "this is the best thing that ever happened" will have a very different meaning next to an article about the national football team's victory than next to one about a terrorist attack. Here, the robot can help too, minimising the margin of human error.

Another positive effect is that the comments pre-sifted by the robot can now be published in real-time instead of sitting in the back end for hours waiting for a human moderator to review them. To the publisher's surprise, this increased engagement by around 25 per cent.

Although the software is not cheap, Zeilinger said it was worth the investment. The journalists now have more interesting discussions with the readers, produce more content and are more proactive instead of just ticking off the UGC that is, or is not, allowed.

"We now get so much more feedback that helps journalists improve content for the readers and understand what topics are important for people. This was particularly valuable during the pandemic."

Another advantage of the software, he added, is that it runs constantly, no matter the number of comments. This comes in handy during breaking news, especially when it involves sensitive topics like race, gender or religion.

"AI saves us a lot of exposure to hatred," he says. "[Moderation] used to be really cruel for the team, we had to vent a lot in the newsroom to get through the day."

But, like the humans who create them, robots make mistakes. The team monitors errors closely and allows users to complain about 'mis-moderation', which helps them train the robot to make better decisions in the future.

"The tool is as good as you feed it," says Zeilinger, adding that working with an AI-powered moderator is a process of constant monitoring and training, and not a one-off event.

And did the robot take any jobs away at the Krone?

"Quite the opposite," he laughs, adding that the increase in audience engagement led the publisher to hire more journalists who now cover the stories that the community is asking for.

This series is supported by United Robots and Utopia Analytics. Neither of them is involved in the editorial process at any stage.

Utopia Analytics is a Finnish company that enables automated moderation of reader comments and cuts down the publishing delay. Inappropriate behaviour, bullying, hate speech, discrimination, sexual harassment and spam are filtered out 24/7 so teams can focus on moderation policy management.

United Robots AB is a Swedish technology company working in automated editorial content. The company leverages structured data to provide publishers with automatically generated content about sports, real estate, traffic, weather, local businesses and the stock market.
