UN Press Releases: 13% AI Adoption Signal Explained (2025)

Research covering data through late 2024 finds that more than 13% of UN press releases were flagged as AI-assisted. What does this mean for institutional transparency, and how should leaders respond?


Updated on: 2025-10-03

Institutional AI writing is moving from curiosity to norm. A widely discussed estimate suggests that more than 13% of UN English-language press releases were AI-assisted by late 2024. Crucially, that figure comes from external academic analysis using population-level detection—not an official United Nations statistic. This nuance matters: the signal is real and meaningful, but it’s not a verdict on any single document, nor a claim issued by the UN itself.

Below, we unpack what the research actually measured, how to interpret it responsibly, and what communications leaders in IGOs, NGOs, and the public sector should do now to ensure transparency, quality, and trust.

What the new research actually measured

A cross-sector preprint, “The Widespread Adoption of Large Language Model-Assisted Writing Across Society” (2025), analyzes indicators of LLM-assisted drafting across domains, including the UN’s press releases. The authors report that UN releases show a steady rise from roughly 3% in early 2023 to nearly 14% by late 2024—again, an external estimate, not official UN data. See the study’s abstract and methods in the arXiv preprint by Liang, Zou, and colleagues (2025).

Science newswires have echoed this trend and broadened the cross-sector context, noting that corporate press releases may approach a quarter of content being LLM-attributable, while the UN subset crossed the ~13% mark by late 2024. For a concise overview with author quotes, see the EurekAlert press summary (2025).

Two important clarifications about the study’s approach:

  • Population-level detection: The researchers estimate prevalence at corpus scale. The method is designed to gauge overall adoption trends rather than make definitive judgments about individual documents.
  • Conservative framing: The authors describe limitations and caution against item-level certainty, acknowledging that evolving models and writing practices complicate detection.

How to interpret detection signals without overreach

Leaders should treat the “>13%” signal as directional evidence of mainstream adoption—not proof that a specific release was AI-generated. Academic work underscores why: under paraphrasing or light editing, detectors become brittle, and even the best approaches can struggle to reliably discriminate human versus AI text at the level of individual items. See the cautionary findings in Sadasivan et al., “Can AI-Generated Text be Reliably Detected?” (2023).

Implications for policy and practice:

  • Don’t police individuals with detectors. Use them, if at all, to study patterns at scale and to inform process improvements, not for punitive item-level decisions.
  • Keep humans accountable. Detection is not a substitute for editorial oversight, fact-checking, or ethical review.
  • Expect drift. As tools and writing norms evolve, today’s signals may weaken; your policies should focus on process quality, not on catching AI per se.

Why this matters for institutional communications

Public trust, transparency, and compliance sit at the core of institutional communications. While there is no single, UN-wide public rule that specifically addresses disclosure for AI-assisted drafting of press releases (as of this writing), broader UN governance materials emphasize transparency and accountability as key principles in digital transformation and AI.

  • The 2024 UN DESA E‑Government Survey provides system-level context on digital public services and the importance of transparency in service delivery; see the UN DESA E‑Government Survey 2024 (PDF).
  • The UN High-Level Advisory Body’s report, “Governing AI for Humanity” (2024), frames transparency, accountability, and human rights as foundational for AI governance across the UN system.

Taken together, the research signal and UN governance framing point to the same operational mandate: make AI use transparent where material, keep rigorous human oversight in place, and embed quality controls that stand up to public and journalistic scrutiny.

A practical workflow to operationalize responsible AI writing

Below is a practitioner-tested, governance-forward workflow for press releases and official statements. It assumes AI tools may assist with drafting, editing, summarization, or translation, but that human editors own truth, tone, and accountability.

  1. Define materiality and intent
  • Decide what counts as “AI-assisted” in your context: ideation, outline, first draft, copy edits, translation.
  • Establish thresholds for disclosure (e.g., any AI-generated drafting beyond light grammar correction triggers a short page notice).
  2. Draft with auditability in mind
  • Keep prompts and draft versions in a versioned workspace.
  • Bind claims to sources as you write; maintain a citations log.
  3. Human editorial review
  • Senior editor (or accountable officer) signs off on accuracy, tone, risk, and alignment with policy.
  • Perform bias, human rights, and sensitive-language checks when relevant.
  4. Quality and compliance gates
  • Verify every fact and quote against primary sources.
  • Run accessibility checks (readability, alt text, headers) and ensure authorship attribution.
  5. SEO and distribution hygiene
  • Ensure descriptive titles, meta descriptions, and clean internal/external linking to authoritative sources.
  • Maintain an “Updated on” line and a transparent change-log for substantive edits.
  6. Publish and monitor
  • Track engagement, press pickup, and SERP performance. Adjust templates and guidance based on outcomes.
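The six steps above can be sketched as a set of publish gates. This is a minimal illustration only: the `Draft` record, its field names, and the gate functions are hypothetical, not taken from any system mentioned in this article.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record for a draft press release; field names are illustrative.
@dataclass
class Draft:
    text: str
    ai_assisted: bool = False        # step 1: materiality and intent
    sources_logged: bool = False     # step 2: auditability (prompt/citation logs)
    editor_signed_off: bool = False  # step 3: human editorial review
    facts_verified: bool = False     # step 4: quality and compliance gates
    seo_checked: bool = False        # step 5: SEO and distribution hygiene

def disclosure_notice(draft: Draft) -> Optional[str]:
    """Step 1: material AI assistance triggers a short page notice."""
    if draft.ai_assisted:
        return ("This communication was drafted with assistance from AI tools "
                "and reviewed by human editors. Facts and sources were verified.")
    return None

def ready_to_publish(draft: Draft) -> bool:
    """Step 6 gate: publication requires every earlier gate to have passed."""
    return all([draft.sources_logged, draft.editor_signed_off,
                draft.facts_verified, draft.seo_checked])

draft = Draft(text="...", ai_assisted=True)
assert not ready_to_publish(draft)  # no gates cleared yet
draft.sources_logged = draft.editor_signed_off = True
draft.facts_verified = draft.seo_checked = True
assert ready_to_publish(draft)
```

The point of the sketch is that publication is a conjunction of independently auditable checks, so a skipped gate is visible in the record rather than lost in email threads.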

Example tool-supported implementation: Teams can structure this workflow inside QuickCreator—from AI-assisted draft to human edit, citation binding, SEO checks, and one-click WordPress publishing—which supports collaboration and version tracking. Disclosure: QuickCreator is our product.

For a hands-on walkthrough of task sequencing and guardrails, see this internal guide on a stepwise setup: Step-by-step guide to using QuickCreator for AI content.

Sample disclosure language you can adapt:

  • Short page notice: “This communication was drafted with assistance from AI tools and reviewed by human editors. Facts and sources were verified.”
  • Central policy page: “Our communications may use AI for drafting, summarization, and translation. We disclose material AI assistance on content pages and maintain human editorial oversight throughout.”

Governance and disclosure playbook (checklist)

Use this checklist to build durable policy without hampering productivity.

  • Definitions and scope
    • Define “AI-assisted writing” and “material assistance.” Include drafting, editing, summarization, and translation.
  • Tiered disclosure
    • Short notice on pages where AI materially shaped the text; comprehensive policy on a central governance page.
  • Editorial accountability
    • Require senior editor sign-off; maintain version histories and prompt logs for auditable records.
  • Detector policy
    • Prohibit using item-level detectors as evidence of misconduct; reserve for aggregate monitoring and process improvement.
  • Measurable QA standards
    • Set targets: citations per 1,000 words, error-rate thresholds, and turnaround SLAs for corrections.
  • Risk management
    • Flag sensitive topics (health, conflict, elections) for additional review and legal/ethics checks.
  • Training and culture
    • Regularly train staff on prompt design, bias awareness, and disclosure norms. Refresh quarterly.
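One of the checklist's measurable QA targets, citations per 1,000 words, is straightforward to automate. The sketch below assumes markdown-style inline links are the citation marker; the regex and sample text are illustrative, and the pattern should be adapted to your house citation style.

```python
import re

def citations_per_1000_words(text: str) -> float:
    """Count inline citation markers (here, markdown links) per 1,000 words.

    The markdown-link pattern is an assumption; swap in whatever marker
    your style guide uses (footnotes, bracketed numbers, etc.).
    """
    words = len(text.split())
    citations = len(re.findall(r"\[[^\]]+\]\([^)]+\)", text))
    return 0.0 if words == 0 else citations * 1000 / words

# Illustrative sample: 2 citations in 107 words.
sample = ("The UN released figures [source](https://example.org). "
          "Adoption rose steadily [study](https://example.org/paper)."
          + " filler" * 98)
rate = citations_per_1000_words(sample)  # ≈ 18.7 citations per 1,000 words
```

A metric like this belongs in aggregate monitoring dashboards, consistent with the detector-policy item above: it flags thinly sourced drafts for review rather than judging any individual author.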

SEO and E‑E‑A‑T hygiene for institutional content

Search visibility is increasingly tied to transparency and demonstrated expertise. Even for press releases, applying E‑E‑A‑T-style hygiene helps credibility and discoverability:

  • Clear authorship and accountability: name the department or official spokesperson; provide contact details.
  • Transparent sourcing: cite primary documents and data pages; avoid thin, unsubstantiated claims.
  • Update discipline: maintain visible update stamps and change-logs, especially for evolving stories.
  • Media assets and accessibility: use descriptive alt text, transcripts for audio/video, and structured headings.

These practices dovetail with the governance checklist above and reduce reputational risk while improving user experience.

Mini change-log and what to watch next

Because this topic is evolving, keep a simple change-log on your policy page and refresh it regularly:

  • Study status: The LLM-assisted writing study is currently available as an arXiv preprint (Oct 2025). Monitor if and when a peer‑reviewed journal version is published. Reference: arXiv preprint (2025).
  • Methodology critiques: Track credible methodological critiques, especially around detector robustness and domain representativeness.
  • Institutional clarifications: Watch for any UN system-wide or agency-level statements clarifying disclosure or editorial policies around AI-assisted drafting.

What leaders should do next

  • Formalize your definitions and thresholds for disclosure; publish a central policy and add short page notices where material.
  • Stand up an editorial QA process with named accountability, version control, and auditable records.
  • Treat detectors as directional tools only; focus policy on human oversight and measurable quality standards.
  • Implement the workflow above across communications teams; start with a small pilot, then scale.
  • Build an update cadence: review policies monthly, and refresh guidance as tools and norms evolve.

If you’re evaluating tooling for structured, collaborative workflows that emphasize quality and transparency, explore the capabilities of our AI Blog Writer to support AI-assisted drafting with human-in-the-loop editing, SEO checks, and WordPress publishing.

