Newsroom Policies for AI in Journalism

The third briefing from the AI and Journalism Research Working Group finds that organizational AI policies tend to prioritize principles and values over practical guidance.

AI-generated image created using Adobe Firefly

TL;DR

Neither journalism nor journalistic values stay exactly the same over time; technological changes have always raised new questions.

Newsroom and professional guidance about the use of AI is not yet ubiquitous.

Research finds that newsrooms with AI policies prioritize transparency about the use of AI, human supervision of AI tools and human verification of outputs.

Research also finds that newsroom AI policies are ill-equipped to address subtle biases that may be built into third-party tools.

Introduction

AI governance is a complex ecosystem, incorporating policy instruments ranging from global compacts and legally binding domestic regulation to best practice standards and industry guidelines.1 In December 2025, CNTI published a review of 188 governmental policy instruments and their impact on journalism, with a primary focus on legally binding legislation in various states of approval.2

In parallel, CNTI’s AI and Journalism Research Working Group reviewed the state of research on AI governance within newsrooms, including research on ethical implications and newsroom policy development for other emerging technologies. This briefing synthesizes 30 recent research papers.

About

This is the third in a series of reports from the AI and Journalism Research Working Group convened by the Center for News, Technology & Innovation (CNTI). The working group currently consists of 18 cross-industry members from around the world, bringing research, journalism and technology expertise to the discussions. 

The goal of the working group is to offer succinct summaries of global research in specific topics at the intersection of journalism and AI. Each quarter, the working group synthesizes the state of research across two to three topics for journalism practitioners, researchers and industry leaders around the world, focusing on actionable recommendations for journalism — not other fields that are concerned with AI.

In each report, we lay out the general findings of the research to date, considerations and/or actions for practitioners and areas where more or new research is needed. This report was prepared by the research and professional staff of CNTI in partnership with several external contributors who collectively authored this briefing. If you have ideas or research findings that are important for CNTI and the working group to include, please email them to info@cnti.org.

What do we mean by “AI”?

This report uses the OECD definition: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

Wherever possible, we try to use specific terms rather than “AI” to avoid conflation or confusion. Journalism has been adopting forms of automation for more than 50 years,3 but widespread use of the term “AI” is more recent — and may include both newer technologies and those that have been in use for quite some time.

Newsroom Policies Impacting AI in Journalism

Several studies show that while AI is being used in newsrooms, formal codification within newsrooms and professional societies is not yet universal, and there are still barriers to implementing AI policies.4 The working group reviewed 30 research articles addressing AI governance impacts on journalism, including policies developed by newsrooms and press associations, technology companies and governments. 

Research moves more slowly than policy development, which in turn lags behind technological development: newsrooms may well have updated, added or advanced internal policies since these papers were published. Nonetheless, the takeaways and cautions from this research remain valuable, especially as journalists continue to develop policies.

Findings

  • Neither journalism nor journalistic values stay exactly the same over time; technological changes have always raised new questions.
  • Newsroom policies on new technologies tend to emphasize principles and values but do not often offer practical guidance. It would be valuable for policies to include more detail on algorithms and systems in addition to outputs, and to lay out considerations for working with third-party tools.
    • In particular, guidelines for procurement are rarely addressed, even though the tools’ underlying algorithms may subtly influence media organizations’ editorial decisions.5 
  • When developing guidelines for the use of new technologies, it is essential to include people with different personal and professional backgrounds to ensure guidelines address a broad range of use cases and impacts.

Newsroom and professional guidance about the use of AI is not yet ubiquitous. As of late 2024, about 80% of the 221 Global South journalists surveyed by Thomson Reuters Foundation said their newsrooms had no AI policy.6 This number has almost certainly changed since then. What remains relevant, however, are the barriers to developing and implementing policies identified in that survey and in additional studies from around the world — including Germany, Greece, the Netherlands and Kenya.7 Barriers include a lack of access to technical expertise, difficulty in getting input and buy-in across organizations, the speed of technological change, the absence of regulatory frameworks in some places and the difficulty of complying with existing regulation in others. All the same, across contexts, journalists express the desire for guidelines and oversight. 

The newsrooms that do have AI policies share a similar approach, prioritizing transparency about the use of AI, human supervision of AI tools and human verification of outputs.8 However, few of these guidelines operationalize those priorities concretely or include clear oversight mechanisms. For example, some guidelines reference “proper” or “appropriate” uses without defining these terms. To date, three peer-reviewed papers explore newsroom policies, stylebooks and standards on the use of AI tools in journalism.9 Between them, these papers included 97 distinct policies from the European Union, Latin America as a region and 22 individual countries. 

These researchers also find that newsroom AI policies are ill-equipped to address subtle biases that may be built into third-party tools. The guidelines focus more on AI outputs than on the systems themselves,10 are more concerned with generative than analytic AI11 and rarely, if ever, provide practical guidance for working with third-party technologies.12 For example, as outlined in the working group’s Transcription and Translation briefing, AI translation tools can introduce biases that may be difficult for non-experts to detect — like assuming doctors are men and nurses are women. These types of subtle biases exist beyond translation. By not addressing these concerns, newsroom AI policies fail to recognize that AI tools can harm journalistic integrity — and potentially journalistic independence — in ways that are difficult to detect.13

In particular, few policies clearly articulate when relying on third-party tools is appropriate and when it is not. In theory, these concerns can be addressed through organizational procurement policies and guidelines that clearly identify the risks of different uses, particularly regarding data privacy and confidentiality. One concern is that technology companies could become indirectly involved in the newsroom, specifically in the development of content.14 The major AI developers are primarily platform companies — including Google, Microsoft and Amazon15 — and newsrooms have long been at least somewhat dependent on them for distribution. Interviews with newsworkers suggest AI adoption is increasing dependence on platform companies, especially on the news production side.16 These concerns may be exacerbated in the Global South: nearly all of the early newsroom AI policies come from the Global North, and later policies borrow from them without necessarily addressing context-specific concerns,17 such as transcription quality issues. 

Procurement is an area where there has been little research to date. A 2025 study, which examined 16 AI tools’ terms of service alongside interviews with newsroom decision-makers, identified an ongoing challenge: most contracts granted developers the right to change terms and conditions without notice.18 The study also highlighted asymmetries between news organizations and AI developers as a barrier to managing risk contractually, an asymmetry that individual journalists may not even be aware of. While proprietary and locally built tools may carry lower (or at least more customizable) risks than off-the-shelf ones, only a small number of the largest and wealthiest news organizations can practically build their own. (One promising recent conference paper explored how collaboratively governed and built LLMs could support the journalism field and address precisely this problem — but much more work is needed in this area.19)

Global perspectives

Working group member Claudia Báez shares her perspective:

“In my experience working with AI in Latin American newsrooms, there is a clear gap between having an AI policy ‘on paper’ and making it widely accessible for everyone or democratizing it. While large legacy and digital organizations often create formal frameworks or transparency statements, these documents are rarely integrated into the newsroom’s daily workflow. As a result, journalists use AI frequently without oversight. They work with the organization’s information using personal AI tools, sometimes free versions that offer no meaningful data protection and could accidentally make sensitive company information public. Then the ‘human-in-the-loop’ is retained as a concept rather than as a practical safety measure. 

“The recent crisis at El Espectador in Colombia, where AI-generated misinformation went unnoticed for months, underscores the risks of this oversight gap. These examples speak to the importance of ensuring that policy development does not only live on paper but includes active connections to and evaluation of practices. One promising example comes from La Silla Rota, a Mexican legacy media organization, which has created an internal AI policy tool for its team. This simple custom GPT is shared with the newsroom to answer journalists’ questions about when to use AI and when not to.”

Earlier guidelines addressing other new technologies in the newsroom — such as photo editing and social media — provide some useful parallels. Like AI policies, these earlier guidelines typically start by articulating what journalism is and should be before highlighting how technology can support it and what uses are unacceptable. And like AI policies, photo policies did not always operationalize their values clearly; journalists and editors might disagree about what constitutes “excessive” retouching.20

It is also common for social media policies to emphasize that journalists’ social media use must be consistent with existing journalistic values, ethics and procedures — including transparency and verification.21 Several researchers have also found that social media policies often protect news organizations — sometimes at the expense of individual journalists.22 Studies of social media policies have likewise found that differences in lived experience between the editors developing the policies and the reporters following them likely contributed to gaps in those policies. In general, research analyses of various newsroom policies have concluded that including more stakeholders with varying job responsibilities and life experiences contributes to stronger policies.23 A study that analyzed journalists’ tweets found that even when social media policies restrict their speech, journalists generally follow them.24 Both the value of including more stakeholders and journalists’ general willingness to follow organizational policies are likely to apply to AI policies as well.

Where More Research Would Be Helpful

  • More research is needed on newsroom policies outside the EU. Some research exists, but much of it relies on data collected before the public release of ChatGPT and thus before the widespread use of generative AI tools.
  • There is also very little research on procurement, platform dependency, or relationships between newsrooms and technology companies outside the European context. 
  • As technology and its use matures in news organizations, there is a need for more empirical and descriptive research, in addition to the early theoretical work.
  • There is a lack of specific guidance for journalists and media organizations, especially regarding the use of third-party tools that may not be transparent. This is a particularly important gap since the research shows these tools may impact editorial decisions inconspicuously. 
  • Given the distinct scopes, contexts and resources of different newsrooms, it is also important to examine how AI guidelines are being operationalized in daily workflows, as well as who participates in policy creation within the newsroom.

Current working group members

A list of current working group members and their affiliations is shown here:

Akintunde Babatunde
Executive Director, Centre for Journalism Innovation and Development

Claudia Báez 
Associate Consultant, Fathm

Jay Barchas-Lichtenstein
Senior Research Manager, Center for News, Technology & Innovation

Madhav Chinnappa
Independent Media Consultant

Utsav Gandhi
PhD Student, University of Illinois Chicago

K.V. Kurmanath
Senior Journalist and Academic

Amy Mitchell
Executive Director, Center for News, Technology & Innovation

Chris Moran 
Head of Editorial Innovation, Guardian News & Media

Sophie Morosoli
Postdoctoral Researcher at the AI, Media & Democracy Lab, University of Amsterdam

Gary Mundy
Director Research, Policy and Impact, Thomson Foundation

Oluwapelumi Oginni
Project Manager, AI Initiatives, Centre for Journalism Innovation and Development

Joshua Olufemi
Executive Director, Dataphyte Foundation

Oluseyi Olufemi
Nigeria Country Director, Dataphyte

Esteban Ponce de León
Resident Fellow, Digital Forensic Research Lab (DFRLab) at the Atlantic Council

Amy Ross Arguedas
Research Fellow at the Reuters Institute for the Study of Journalism

Zara Schroeder
Researcher, Research ICT Africa

Felix M. Simon
Research Fellow in AI and News, Reuters Institute for the Study of Journalism & Research Associate, Oxford Internet Institute, University of Oxford

Scott Timcke
Senior Research Associate, Research ICT Africa

Jaemark Tordecilla
Independent Media Advisor, Philippines

References

Becker, K. B., Simon, F. M., & Crum, C. (2025). Policies in Parallel? A Comparative Study of Journalistic AI Policies in 52 Global News Organisations. Digital Journalism, 13(9), 1578-1598. https://doi.org/10.1080/21670811.2024.2431519

Cools, H., & Diakopoulos, N. (2023, July 10). Towards Guidelines for Guidelines on the Use of Generative AI in Newsrooms. Generative AI in the Newsroom. https://generative-ai-newsroom.com/towards-guidelines-for-guidelines-on-the-use-of-generative-ai-in-newsrooms-55b0c2c1d960

de-Lima-Santos, M.-F., Yeung, W. N., & Dodds, T. (2024). Guiding the way: A comprehensive examination of AI guidelines in global media. AI & SOCIETY. https://doi.org/10.1007/s00146-024-01973-5

Dodds, T., Vandendaele, A., Simon, F. M., Helberger, N., Resendez, V., & Yeung, W. N. (2025). Knowledge Silos as a Barrier to Responsible AI Practices in Journalism? Exploratory Evidence from Four Dutch News Organisations. Journalism Studies, 26(6), 740–758. https://doi.org/10.1080/1461670X.2025.2463589

Duffy, A., & Knight, M. (2019). Don’t be Stupid: The role of social media policies in journalistic boundary-setting. Journalism Studies, 20(7), 932–951. https://doi.org/10.1080/1461670X.2018.1467782

Goodson, K., Barchas-Lichtenstein, J., Jens, S., Wright, E., & Gandhi, U. (2025). Journalism’s New Frontier: An Analysis of Global AI Policy Proposals and Their Impacts on Journalism. Center for News, Technology & Innovation. https://cnti.org/reports/journalisms-new-frontier-an-analysis-of-global-ai-policy-proposals-and-their-impacts-on-journalism/

Harlow, S. (2023). Protecting News Companies and Their Readers: Exploring Social Media Policies in Latin American Newsrooms. In Digital Journalism in Latin America. Routledge.

Herrera-Damas, S. (2014). Recurring topics in the social media policies of mainstream media. Journal of Applied Journalism & Media Studies, 3(2), 155–173. https://doi.org/10.1386/ajms.3.2.155_1

Hofeditz, L., Jung, A.-K., Mirbabaie, M., & Stieglitz, S. (2025). Ethical Guidelines for the Application of Generative AI in German Journalism. Digital Society, 4(1), 4. https://doi.org/10.1007/s44206-024-00151-w

Ifayemi, S., Tabassi, E., & Deckard, A. C. (2025, July 28). Decoding AI Governance: A Toolkit for Navigating Evolving Norms, Standards, and Rules. Partnership on AI. https://partnershiponai.org/resource/decoding-ai-governance/

Kalfeli, P., & Angeli, C. (2025). The Intersection of AI, Ethics, and Journalism: Greek Journalists’ and Academics’ Perspectives. Societies, 15(2). https://doi.org/10.3390/soc15020022

Lefèvre, B., Errando, A., Afilipoaie, A., Ranaivoson, H., & Wiart, L. (2025). Exploring ethical and regulatory challenges of AI integration in European Union Newsrooms. Media Studies, 16(31), 31–55.

Lu, S., Wei, L., & Liang, H. (2025). Social Media Policies as Social Control in the Newsroom: A Case Study of the New York Times on Twitter. Journalism Studies, 26(5), 568–586. https://doi.org/10.1080/1461670X.2025.2452265

Mari, W. (2024). The Pre-History of News-Industry Discourse Around Artificial Intelligence. Emerging Media, 2(3), 499–522. https://doi.org/10.1177/27523543241279577

Mbaabu, N. M. (2025). Examining the status of AI use guidelines in editorial policies of Kenyan digital media houses and challenges in their formulation and implementation in newsrooms [Master of Arts in Digital Journalism]. Aga Khan University.

Miller, K. C., & Nelson, J. L. (2022). “Dark Participation” Without Representation: A Structural Approach to Journalism’s Social Media Crisis. Social Media + Society, 8(4), 20563051221129156. https://doi.org/10.1177/20563051221129156

Molyneux, L., & Nelson, J. L. (2024). “Let’s Not Tank the Reputation of This Organization.” How Newsroom Social Media Policies Exacerbate Journalism’s Labor Crisis. Journalism Studies, 25(9), 931–950. https://doi.org/10.1080/1461670X.2023.2263797

Peck, G. A. (2023). The First Draft of AI Policy. Editor & Publisher, 156(9), 32–34.

Piasecki, S., & Helberger, N. (2025). A nightmare to control: Legal and organizational challenges around the procurement of journalistic AI from external technology providers. The Information Society, 41(3), 173–194. https://doi.org/10.1080/01972243.2025.2473398

Porlezza, C. (2024). The datafication of digital journalism: A history of everlasting challenges between ethical issues and regulation. Journalism, 25(5), 1167–1185. https://doi.org/10.1177/14648849231190232

Radcliffe, D. (2025). Journalism in the AI Era: Opportunities and Challenges in the Global South. https://doi.org/10.13140/RG.2.2.16814.63043

Sacco, V., & Bossio, D. (2017). Don’t Tweet This!: How journalists and media organizations negotiate tensions emerging from the implementation of social media policy in newsrooms. Digital Journalism, 5(2), 177–193. https://doi.org/10.1080/21670811.2016.1155967

Sánchez-García, P., Diez-Gracia, A., Mayorga, I. R., & Jerónimo, P. (2025). Media Self-Regulation in the Use of AI: Limitation of Multimodal Generative Content and Ethical Commitments to Transparency and Verification. Journalism and Media, 6(1), Article 1. https://doi.org/10.3390/journalmedia6010029

Seipp, T. J., Helberger, N., De Vreese, C., & Ausloos, J. (2024). Between the cracks: Blind spots in regulating media concentration and platform dependence in the EU. Internet Policy Review, 13(4). https://doi.org/10.14763/2024.4.1813

Simon, F. M. (2022). Uneasy Bedfellows: AI in the News, Platform Companies and the Issue of Journalistic Autonomy. Digital Journalism, 10(10), 1832–1854. https://doi.org/10.1080/21670811.2022.2063150

Simon, F. M. (2024). Escape Me If You Can: How AI Reshapes News Organisations’ Dependency on Platform Companies. Digital Journalism, 12(2), 149–170. https://doi.org/10.1080/21670811.2023.2287464

Simon, F. M. (2025). Rationalisation of the news: How AI reshapes and retools the gatekeeping processes of news organisations in the United Kingdom, United States and Germany. New Media & Society, 14614448251336423. https://doi.org/10.1177/14614448251336423

Solaroli, M. (2017). News Photography and the Digital (R)evolution: Continuity and Change in the Practices, Styles, Norms and Values of Photojournalism. In J. Tong & S.-H. Lo (Eds.), Digital Technology and Journalism: An International Comparative Perspective (pp. 47–70). Springer International Publishing. https://doi.org/10.1007/978-3-319-55026-8_3

Timcke, S., & Schroeder, Z. (2025, July 10). M20 Policy Brief 4: Power, Politics, and Economics – AI, Africa and the G20. Media20. https://media20.org/2025/07/10/m20-policy-brief-4-power-politics-and-economics-ai-africa-and-the-g20/

Tseng, E., Young, M., Le Quéré, M. A., Rinehart, A., & Suresh, H. (2025). “Ownership, Not Just Happy Talk”: Co-Designing a Participatory Large Language Model for Journalism. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 3119–3130. https://doi.org/10.1145/3715275.3732198

Umejei, E., Ayisi, A., Phiri, M., & Tallam, E. (2025). Artificial Intelligence and Journalism in Four African Countries: Optimists, Pessimists, and Pragmatists. Journalism Practice, 19(10), 2249–2265. https://doi.org/10.1080/17512786.2025.2489590

van Drunen, M. Z. (2025). Safeguarding media freedom from infrastructural reliance on AI companies: The role of EU law. Telecommunications Policy, 102990. https://doi.org/10.1016/j.telpol.2025.102990

Appendix

Papers referenced in this briefing

Paper | Scope & Methods
Becker et al., 2025 | AI policies: 52 news organizations in 12 countries
Cools & Diakopoulos, 2023 | AI policies: 21 guidelines
de-Lima-Santos et al., 2024 | AI policies: 37 guidelines (from news organizations & coalitions) in 17 countries
Dodds et al., 2025 | Responsible AI: 14 semi-structured interviews with Dutch editors, managers and journalists
Duffy & Knight, 2019 | Social media policies: 17 news organizations in 4 countries
Goodson et al., 2025 | AI policies: 188 legal frameworks, laws and bills
Harlow, 2023 | Social media policies: survey of 1,094 Latin American journalists
Herrera-Damas, 2014 | Social media policies: 22 newsrooms
Hofeditz et al., 2025 | AI policies: 18 in-depth interviews with German AI and journalism experts and a review of existing guidelines
Kalfeli & Angeli, 2025 | AI policies: 28 semi-structured interviews with Greek journalists and academics
Lefèvre et al., 2025 | Political economy analysis of the use of AI tools in newsrooms: 30 key documents and 41 interviews with media professionals and regulatory experts in Belgium, France and Spain
Lu et al., 2025 | Social media policies: 185,969 tweets from 549 news workers
Mbaabu, 2025 | AI policies: semi-structured interviews with 14 editors and reporters from 4 Kenyan outlets
Miller & Nelson, 2022 | Social media policies: discourse analysis and interviews with 37 U.S. journalists
Molyneux & Nelson, 2024 | Social media policies: discourse analysis and interviews with 37 U.S. journalists
Peck, 2023 | AI use: survey of one newsroom to understand uses to inform newsroom policy
Piasecki & Helberger, 2025 | AI procurement: 12 semi-structured interviews with newsroom decision-makers and review of 16 terms and conditions documents
Porlezza, 2024 | Ethics codes: 15 publicly accessible codes of ethics in English, French, German or Italian
Radcliffe, 2025 | Broad survey of 221 journalists in Global South newsrooms
Sacco & Bossio, 2017 | Social media policies: interviews with 25 editors and reporters at major Australian media companies
Sánchez-García et al., 2025 | AI policies: 26 news outlets & 18 international entities
Seipp et al., 2024 | Legal research methods
Simon, 2022 | Broad research agenda for relationships between newsrooms and AI companies
Simon, 2024 | AI use: 121 interviews with news workers in the US, UK and Germany as well as 31 expert interviews
Simon, 2025 | AI use: 143 interviews with news workers in the US, UK and Germany
Solaroli, 2017 | Photo policies: archive content, interviews, ethnography
Timcke & Schroeder, 2025 | Policy analysis
Tseng et al., 2025 | LLM design for journalism: co-design and 20 interviews with journalists
Umejei et al., 2025 | AI use: semi-structured interviews with 32 full-time and freelance journalists & editors in four African countries
van Drunen, 2025 | EU Media Freedom Act: legal analysis


Footnotes

  1. Ifayemi et al., 2025 ↩︎
  2. Goodson et al., 2025 ↩︎
  3. Mari, 2024 ↩︎
  4. Porlezza 2024; Lefèvre et al., 2025 ↩︎
  5. Lefèvre et al., 2025; van Drunen, 2025 ↩︎
  6. Radcliffe, 2025 ↩︎
  7. Dodds et al., 2025; Hofeditz et al., 2025; Kalfeli & Angeli, 2025; Mbaabu, 2025; Umejei et al., 2025 ↩︎
  8. Becker et al., 2025; de-Lima-Santos et al., 2024; Sánchez-García et al., 2025 ↩︎
  9. Becker et al., 2025; de-Lima-Santos et al., 2024; Sánchez-García et al., 2025. See also Cools & Diakopoulos, 2023 and Peck, 2023 for shorter pieces that have not been peer-reviewed. ↩︎
  10. Becker et al., 2025; Porlezza, 2024 ↩︎
  11. Sánchez-García et al., 2025 ↩︎
  12. de-Lima-Santos et al., 2024; see also van Drunen, 2025 ↩︎
  13. Becker et al., 2025; de-Lima-Santos et al., 2024; Sánchez-García et al., 2025 ↩︎
  14. Simon, 2024; Simon, 2025; van Drunen, 2025. See also Timcke & Schroeder, 2025. ↩︎
  15. Simon, 2022 ↩︎
  16. Simon, 2024; see Seipp et al., 2024 for a policy perspective ↩︎
  17. de-Lima-Santos et al., 2024 ↩︎
  18. Piasecki & Helberger, 2025 ↩︎
  19. Tseng et al., 2025 ↩︎
  20. Solaroli, 2017 ↩︎
  21. Duffy & Knight, 2019; Herrera-Damas, 2014; Harlow, 2023; Sacco & Bossio, 2017 ↩︎
  22. Harlow, 2023; Miller & Nelson, 2022; Molyneux & Nelson, 2024 ↩︎
  23. Miller & Nelson, 2022; Molyneux & Nelson, 2024 ↩︎
  24. Lu et al., 2025 ↩︎