Machine learning, algorithms and natural language processing are increasingly part of the conversation about how we report, produce and distribute the news.
Although artificial intelligence (AI) can be trained to recognise faces and objects, understand languages, solve problems and produce thousands of articles from different data sets, can robots really do the job of a journalist?
Lisa Gibbs, business editor for The Associated Press, explained how the publisher has been using artificial intelligence over the past four years.
"We are still at the very early stages of figuring out how to apply this technology to what journalists do," she said, speaking at the Google News Initiative last week (7 December).
"At the Associated Press, we have been experimenting with automation and AI to eliminate routine work, like video transcription, so that our journalists can focus on doing the creative and curious work."
By the end of next year, the publisher aims to have created 40,000 stories using automated templates, primarily in business news and sports, and is now looking at the potential of image recognition software for the newsroom.
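Template-driven stories of this kind are generated by slotting structured data into pre-written sentence frames. A minimal sketch of the idea, with invented field names and figures (the AP's actual templates and data feeds are not public):

```python
# Toy template-based story generator. The template text, field names and
# sample data below are all hypothetical illustrations, not AP's own.

TEMPLATE = (
    "{company} reported {direction} quarterly earnings on {date}, "
    "posting a profit of ${profit}m on revenue of ${revenue}m."
)

def generate_story(record):
    """Fill the sentence template from one row of structured data."""
    direction = "higher" if record["profit"] > record["prior_profit"] else "lower"
    return TEMPLATE.format(
        company=record["company"],
        direction=direction,
        date=record["date"],
        profit=record["profit"],
        revenue=record["revenue"],
    )

story = generate_story({
    "company": "Example Corp",
    "date": "7 December",
    "profit": 120,
    "prior_profit": 95,
    "revenue": 900,
})
print(story)
```

The point of the design is that each incoming data record yields a publishable sentence with no human in the loop, which is why the quality of the output depends entirely on the quality of the feed.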
"We want to see how it can help us filter out graphic content from our image feeds, or help us identify athletes in sports photos," she said.
Indeed, the potential of this technology is vast, and other news organisations have also been experimenting in this space.
For example, Japan's national public broadcaster NHK has created anime newsreaders that can sign the news for viewers with hearing impairments, or co-anchor the 11 o'clock news. Finnish news agency STT is translating news into English and Swedish in a matter of seconds.
"Increasingly, journalists are using algorithms to find and break news faster, investigative reporters can sift through data much faster and AI-based systems are helping us sift through the thousands of tweets and posts on social media," she said.
"But in the last four years of working with these technologies, we have learned a few lessons.
"These technologies are powerful, but they require new skills, workflows and new ways of thinking in our newsrooms. They are only as good as the data that goes into them."
For example, QuakeBot from The LA Times published a breaking news article and tweet in 2017 about an earthquake that had happened in 1925. The algorithm, which uses data from the US Geological Survey to publish articles on earthquakes in real time, was confused by updates to a database of historical earthquakes.
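The failure mode here is that a record's *update* time is not the same as the *event* time. A hedged sketch of the kind of guard that prevents it, checking how recently the quake actually occurred before publishing (the function name and threshold are invented for illustration, not taken from QuakeBot's code):

```python
# Hypothetical guard: only publish earthquakes that occurred recently,
# regardless of when the record was added or updated in the data feed.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(minutes=10)  # illustrative threshold

def should_publish(event_time, now=None):
    """True only for events that happened within MAX_AGE of now."""
    if now is None:
        now = datetime.now(timezone.utc)
    return timedelta(0) <= now - event_time <= MAX_AGE

now = datetime(2017, 6, 21, 12, 0, tzinfo=timezone.utc)
# A quake five minutes ago should be published...
assert should_publish(datetime(2017, 6, 21, 11, 55, tzinfo=timezone.utc), now)
# ...but a freshly updated record of a 1925 quake should not.
assert not should_publish(datetime(1925, 6, 29, tzinfo=timezone.utc), now)
```

Checking the event timestamp rather than trusting the feed's ordering is exactly the sort of editorial judgment that has to be encoded explicitly when the "reporter" is a script.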
"Bias is inherent in data sets sometimes, so maybe racial or gender bias, for example, can create new kinds of mistakes and magnify them a thousand-fold," she said.
The publisher is currently using a grant from Google to produce Verify, a cloud-based newsroom tool that will combine artificial intelligence with editorial expertise to source and verify user-generated content.
Verify can break a video down into frames, then search for them across the internet to establish when, where and why they have been published previously, or whether, in other iterations, the video shows something different.
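One common way frame-matching systems of this general kind work is perceptual hashing: each frame is reduced to a short fingerprint, and frames whose fingerprints nearly match are flagged as probable copies. Verify's actual pipeline is not public; the following is a toy average-hash sketch on a grayscale frame represented as a 2D list of pixel values:

```python
# Toy perceptual-hash comparison. Real systems decode video with libraries
# such as OpenCV and index millions of hashes; everything here is a
# simplified illustration.

def average_hash(frame):
    """Bit string: '1' where a pixel is brighter than the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(a, b):
    """Number of positions where two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def likely_same(frame_a, frame_b, threshold=2):
    """Frames whose hashes differ in only a few bits are probable duplicates."""
    return hamming(average_hash(frame_a), average_hash(frame_b)) <= threshold

original = [[10, 200], [30, 220]]
reupload = [[12, 198], [28, 225]]    # slightly re-encoded copy
unrelated = [[200, 10], [220, 30]]   # different content
assert likely_same(original, reupload)
assert not likely_same(original, unrelated)
```

The hash survives small re-encoding changes but not a genuinely different image, which is what lets such a tool spot old footage being recirculated under a new claim.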
"Using this tool will help us call out misinformation much faster as well as make sure we're disseminating more trustworthy, verified content," Gibbs explained.
"Bad actors are using AI to create and spread misinformation, so journalists can and must arm themselves with the same technology to combat this."
She stressed the importance of applying the same editorial standards to AI technology as to anything else, and said the industry must work together to create a set of best practices.
"Robots are not the journalists of the future – they are a journalist's assistant, a very good one," she said, noting that AI will augment, never replace, the work of the world's journalists.
"We are not blind to the challenges as these technologies make it easier to trick machines and humans. Newsrooms must act now to understand this new era of computing. Trust in journalism is at stake."