The Guardian updates its AI policies around training, trust and in-house tools
"Lived experience is our unique contribution, and an authentic response is what our readers, supporters and staff expect and deserve"
The Guardian has updated its approach to generative AI, reflecting the technology’s growing influence on journalism and society. The newsroom now requires all staff to complete mandatory AI training, designed to help them understand how AI works and how to use it responsibly. This training will evolve as the technology develops.
In addition, The Guardian is developing in-house AI tools that align with its editorial standards. These include features to help write image descriptions, search archives, analyse documents, and transcribe audio, always with built-in guardrails to protect authenticity and values.
Transparency is a key part of the update: The Guardian will clearly signal any significant use of generative AI in its journalism, such as for illustrative images or data analysis, using a footnote. The editorial code and AI principles have also been revised to support these changes.
This article was drafted by an AI assistant before it was edited by a human.