This article was migrated from an old version of our website in 2025. As a result, it might have some low-quality images or non-functioning links - if there are any issues you'd like to see fixed, get in touch with us at info@journalism.co.uk.

A report lands in your inbox. It's got survey data from 10,000 respondents, compelling case studies, expert quotes, and clear editorial angles. It looks publication-ready. That should be your first warning sign.

This week, we nearly published exactly that kind of story. It had everything: authoritative research, tragic cases, quotable experts. It was only during final edit that the red flags became impossible to ignore, and we killed it. What looked like research was actually sophisticated advocacy, and it almost worked.

Sophisticated advocacy organisations have evolved far beyond crude press releases. They now produce research that mimics independent analysis, package it with emotional hooks, and deliver it in formats designed to slip seamlessly into editorial workflows.

The new advocacy playbook

In our case, it was a recent report from Seismic Foundation - "On the Razor's Edge" - about public attitudes towards AI. At first glance, it has everything: a 10,000-person survey across the US and Europe, high-profile lawsuit cases involving teen suicides and AI chatbots, and quotable experts warning that public mood "teeters on a razor's edge."

The report claims that "public attitudes towards artificial intelligence are more polarised and fragile than ever" and that "it could only take one big AI moment for public mood to shift one way or the other." It features heartbreaking cases like that of Sewell Setzer III, a 14-year-old whose mother filed a wrongful death lawsuit against Character.AI, alleging the chatbot contributed to her son's suicide. It identifies five distinct groups "primed to shape the AI debate." It concludes by telling newsrooms they "must adopt a more cautious and considered approach within their reporting."

Compelling stuff. Except Seismic openly describes itself as using "media campaigns to raise public awareness around AI security and rally people behind the cause." Their website explains they create "highly engaging campaigns" using "carefully tested messaging" and "cut-through creative" to "drive action." The report quotes their own "strategy director" as an expert source. Those "five groups primed to shape the AI debate"? That's not sociology but audience segmentation for activist recruitment.

Yet reports like this routinely get covered as if they were independent analysis. The chances of coverage increase when a report presses the right buttons - advocating for something close to your heart, or playing to your own fears and insecurities. So here's what to watch for and how to push back.

Red flag #1: Research as Trojan horse

What it looks like: Professional surveys with large sample sizes, data visualisations, downloadable reports. All the trappings of academic research.

What to ask:

  • Who commissioned this research and what's their stated mission?
  • Is the methodology publicly available? Can you see the actual survey questions?
  • Has this been peer-reviewed or validated by independent researchers?
  • Who designed the survey questions? (Framing matters enormously)
  • What's the margin of error and confidence level? (See the quick sanity check after this list)
  • Were there any questions that didn't support the organisation's position? Why aren't those highlighted?
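
A minimal sketch of that margin-of-error sanity check, assuming a simple random sample and a 95 per cent confidence level (z = 1.96). Real surveys use weighting and clustering that widen the intervals, so treat these as best-case figures; all numbers here are illustrative, not drawn from any particular report.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case (p = 0.5) margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 10,000-respondent headline sample: roughly +/-1 percentage point.
print(f"n = 10,000: +/-{margin_of_error(10_000):.1%}")

# A single-country or single-segment subgroup a tenth of that size
# carries roughly triple the uncertainty - worth asking about.
print(f"n = 1,000:  +/-{margin_of_error(1_000):.1%}")
```

The point isn't the exact figures: a large headline sample can conceal small, noisy subgroups, which is exactly where claims about "distinct groups" tend to live.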

Warning signs: Organisations whose mission statements include phrases like "raise awareness," "drive action," "rally people," or "create urgency." If they describe using "tested messaging" or "storytelling," you're looking at strategic communications, not research.

Red flag #2: Emotional case studies positioned as evidence

What it looks like: Tragic, real-world examples that anchor the narrative, often involving vulnerable populations like children. These cases are typically newsworthy on their own, which is precisely why they're effective in advocacy campaigns.

What to ask:

  • Are these isolated incidents or representative of a broader pattern?
  • What's the base rate? (If millions use a service safely, how much weight do rare tragic cases carry? See the toy calculation after this list)
  • Are you presenting allegations from active lawsuits as established facts?
  • Have you included responses from the organisations being accused?
  • Are there cases that cut against the narrative? Why aren't those included?
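
For the base-rate question, a toy calculation with entirely made-up numbers shows why the denominator matters as much as the numerator:

```python
# Toy base-rate check. Both figures are hypothetical, chosen only
# to illustrate the arithmetic - they are not real data.
users = 50_000_000   # hypothetical people using a service
incidents = 25       # hypothetical documented tragic cases

rate = incidents / users
print(f"Incident rate: {rate:.6%}")                  # 0.000050%
print(f"Roughly 1 in {users // incidents:,} users")  # 1 in 2,000,000
```

None of this makes any individual tragedy less serious; it tells you whether a report is describing a pattern or a handful of cases.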

Warning signs: When the emotional case is placed before any data or context. When counter-examples are absent. When company responses are missing or buried.

Red flag #3: Experts who aren't independent

What it looks like: Authoritative-sounding titles: "Strategy Director," "Research Lead," "Senior Fellow." They provide quotable analysis that reinforces the report's conclusions.

What to ask:

  • Does this person work for the organisation releasing the research?
  • What's their actual role? (Is "Strategy Director" a euphemism for "person who develops our media campaigns"?)
  • Are they being paid by organisations with a position on this issue?
  • Can you find genuinely independent experts who aren't affiliated with advocacy groups on either side?

Warning signs: All quotes come from people affiliated with the sponsoring organisation. No critical perspectives included. Sources are framed as neutral experts rather than advocates.

Red flag #4: Prescriptive conclusions

What it looks like: The research doesn't just report findings; it tells you what should happen next. Policymakers "must act." Companies "need to" do something. Journalists "should" cover issues differently.

What to ask:

  • Is this reporting findings or advocating for outcomes?
  • Does the research actually support these prescriptive conclusions?
  • Are there alternative interpretations of the data that would lead to different recommendations?
  • Who benefits if these recommendations are adopted?

Warning signs: Language like "must," "should," "need to" in the report itself. Recommendations that align perfectly with the organisation's stated advocacy goals. Research that tells your profession how to do its job.

Red flag #5: Activist framing disguised as analysis

What it looks like: The research creates categories, typologies, or segments that sound analytical but are actually designed to mobilise supporters. "Five groups primed for action." "Key audiences ready to engage."

What to ask:

  • Is this describing existing social groups or constructing identities to facilitate organising?
  • Does this read like sociology or segmentation strategy?
  • Are these categories falsifiable? Could data contradict them?

Warning signs: Categories with catchy names designed for easy recall. Language about groups being "primed" or "ready" to take action. Frameworks that conveniently identify exactly who the organisation wants to mobilise.

How to cover advocacy research responsibly

None of these red flags automatically disqualify a story. Advocacy organisations sometimes fund solid research. The issues they highlight are often legitimate. But when you're working with advocacy-commissioned material, here's what responsible coverage requires:

Transparent disclosure: State clearly in your opening paragraphs that the research comes from an advocacy organisation. Explain what they advocate for. Don't bury it in paragraph 12.

Independent validation: Find researchers who study this topic but aren't affiliated with advocacy groups on either side. Ask them to assess the methodology and conclusions.

Competing perspectives: Include responses from the organisations or industries being criticised. Get alternative interpretations of the data. Present the advocacy position as one view amongst several.

Methodology scrutiny: Ask to see the actual survey questions, sampling methodology, and statistical analysis. If they won't share it, that tells you something important.

Clear attribution: Don't present advocacy positions as neutral expertise. Make it explicit: "According to the organisation, which campaigns for stricter AI regulation..." Not: "Experts say..."

Reframe prescriptive language: Change "Newsrooms must adopt..." to "The organisation argues newsrooms should..." You're reporting their position, not endorsing it.

Why this matters now

Advocacy organisations have become dramatically more sophisticated in how they target media. The solution isn't to ignore them or refuse to cover important issues. It's to maintain the same scepticism and rigour you'd apply to any source with an agenda - which, let's be honest, is most sources.

The bottom line

Your credibility is your only real asset - even more so in the age of slop, when producing persuasive campaigns costs pennies. Once readers suspect you're laundering advocacy through your editorial authority, you've lost something that may be impossible to recover.

Sometimes the hardest editorial decision is the simplest one: this doesn't meet our standards. Better to publish nothing than something that compromises the trust your audience places in you.

The advocacy groups will survive without your megaphone. Your credibility might not.

What tactics have you encountered from advocacy organisations trying to shape coverage? We're interested in hearing from journalists and editors about the PR operations targeting media outlets. Let us know at marcela@journalism.co.uk
