Public service media put security first when choosing AI tools
New research reveals public service broadcasters are walking a tightrope between adopting AI technology and protecting themselves from cyberattacks and political interference
Public service media (PSM) are prioritising data privacy and national security when deciding which artificial intelligence tools to use in their newsrooms, according to a new industry report.
The research, led by Professor Kate Wright from the University of Edinburgh, involved interviews with staff at 13 public media organisations across five continents throughout 2024 and 2025. It examined how broadcasters navigate the challenges of buying and using AI responsibly.
Security concerns dominate
The findings reveal that public service media face unique pressures when adopting AI technology. Many are at heightened risk of cyberattacks from criminal gangs, terrorist groups and hostile states — particularly Russia and China.
Those designated as "critical infrastructure" during emergencies have particularly strong security concerns, though some worry their governments could exploit these concerns to undermine their editorial independence.
The American AI dominance dilemma
More than half of the AI tools used by public service media come from providers based in the United States. This reliance creates potential problems, especially following the Trump administration's removal of risk-based AI regulation, which could leave US tech companies vulnerable to political interference.
A reluctance to share
Despite facing similar challenges, public service media are hesitant to openly discuss their experiences with specific AI tools, fearing that doing so could expose them to further security risks. When conversations do happen informally, lower-income organisations facing threats from authoritarian states often get left out.
The money matters
Wealthier broadcasters tend to build their own AI tools or pay premium prices to large technology companies to minimise risks. In contrast, organisations with smaller budgets often turn to local start-ups, both to save money and support domestic innovation.
Testing and piloting new AI tools proves expensive and time-consuming, particularly for organisations with limited resources. There's growing interest in creating a shared database of AI tools with guidance relevant to public service media.
The report recommends that public service media prioritise AI providers based in full democracies, regularly audit privacy policies and data storage locations, and remain vigilant that security concerns could be used to compromise their independence.
