Introduction
This is the first in a series of reports from the AI and Journalism Research Working Group convened by the Center for News, Technology & Innovation (CNTI). The working group currently consists of more than 15 cross-industry members from around the world, bringing research, journalism and technology experience to the discussions. Each quarter, we’ll synthesize the state of global research across two or three questions or topics at the intersection of journalism and AI.
The goal of the working group is to summarize research for an audience of journalism practitioners and researchers around the world. That means we focus on what’s actionable for journalism rather than for other fields concerned with AI.
What do we mean by “AI”?
This report uses the OECD definition: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
Wherever possible, we try to use specific terms rather than “AI” to avoid conflation or confusion. Journalism has been adopting forms of automation for more than 50 years,1 but widespread use of the term “AI” is more recent — and may include both newer technologies and those that have been in use for quite some time.
The Focus of This Report
This briefing focuses broadly on journalistic communication about AI. We first summarize research and frameworks that address the following questions about societal AI literacy:
- What does the public currently know about AI, and what does the public need to know about AI?
We then move on to a synthesis of current research on journalists’ communication about AI in two different contexts:
- How should journalists and news organizations communicate about journalistic uses of AI?
- How do journalists report on AI and automation more broadly, and what are the consequences for their audiences?
In each section, we lay out the general findings of the research to date, any suggested considerations or actions for practitioners that emerge and areas where more or new research is needed. A clear takeaway across the board is that journalists and news organizations need to communicate more clearly about journalism itself — not only about AI.
This report was prepared by the research and professional staff of CNTI in partnership with several external contributors who collectively authored this briefing.
If you have ideas or research findings that are important for CNTI and the working group to include, please email them to info@cnti.org.
What the public knows, and needs to know, about AI
There is increasing consensus across fields that the public requires more knowledge about AI — including about different types of systems, what these systems can and can’t do and how to critically assess their output.2 Despite the publication of several systematic reviews, there is not yet a consensus on exactly what so-called “AI literacy” should include, other than that it should cover both computational and ethical dimensions.3 Nor is it clear how to include “AI literacy” in broader conceptions of media and information literacy.
While the field of journalism can’t be expected to address AI literacy alone, news reports are one of the primary ways that adults learn about new technologies.4 As a consensus begins to emerge about what people should know, the role of journalism in AI literacy may also become clearer.
Global Perspectives
The way AI development, use and literacy are playing out in the Global North and the Global South varies widely. What journalists know and need to know — before we even consider the broader public — simply does not look the same in different political and economic contexts with variable infrastructure and educational systems.
Working group member Oluwapelumi Oginni shares her perspective about AI and journalism in Africa:
“While AI adoption in journalism is becoming a well-explored topic in the Global North, the conversation in the Global South still requires more exploration, especially within the context of the real on-the-ground barriers around skills and resources. The discussion is not just about whether journalists want to use AI, but whether they can. Effective use of AI in newsrooms depends heavily on two things: first, journalists and editors need to understand what these technologies can do (AI literacy), and second, they need the financial capacity to implement them. In many African countries, and indeed, in the Global South, that second part is a serious roadblock…5
[W]hile the idea of AI in journalism is gaining ground, the infrastructure, funding, and training needed to make it work at scale are still very much missing.6 What this means, ultimately, is that journalists and media houses in the Global South, particularly in Africa, are operating at a double disadvantage. Though they are expected to keep pace with global innovation cycles, many lack the financial means and technical training needed to truly compete or drive innovation. This makes it all the more important for research to explore, in depth, the uneven realities of media systems operating in low-resource environments.”
The education field offers a popular framework7 that breaks down AI literacy into four components:
- Literacy practices (the ability to understand and evaluate AI, which people can demonstrate directly)
- Core values (which support learners to use the tools safely and effectively)
- Modes of engagement with the tools (understanding, evaluating and using AI tools)
- Types of use (the purposes for which learners use AI tools)
Under this framework, journalism may be well suited to help adults build literacy practices (rather than core values or specific types of use). Literacy practices include holistic and fundamental knowledge that crosses social and technical domains. However, it is unlikely that journalism can — or should — directly teach adults how to use AI tools.
Within the realm of literacy practices, some concepts are more fundamental than others. A 2024 synthesis8 highlights three key facets that differentiate AI literacy practices from other technological literacies:
- First, AI is (relatively) autonomous. It can act without human intervention and sometimes have material impacts on the world without humans knowing it was used.
- Second, AI can learn. Its input data can enable automated improvement.
- Third, AI is “inscrutable” – deeply challenging to understand or interpret. AI inference is not analogous to human reasoning. For the most part, its models and decision-making processes are opaque or “black-boxed.” They may only be understandable to a point, or only to people with advanced technical knowledge. That, in turn, can also make it hard to interpret outputs.
The authors of this synthesis also note that AI presents two additional qualities that violate long-held assumptions of human-technology interaction:
- AI has inconsistent outputs. Because models are increasingly probabilistic rather than deterministic, the same input will not always lead to the same output. For example, a calculator will give the same result to the same query every time; an algorithmic search engine may not (see the sketch after this list). And ongoing changes and improvements to models may lead to inconsistencies over time.
- It is opaque. Because AI is “under the hood” of so many technologies and has no specific interface, users may not even know when they’re interacting with it.
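To make the first of these concrete, here is a toy sketch in Python. The miniature “text model” is invented for illustration and stands in for no real system or API:

```python
import random

def calculator(a, b):
    # Deterministic: the same inputs always produce the same output.
    return a + b

def toy_text_model(prompt):
    # Probabilistic: the continuation is sampled, so repeated calls with
    # the same prompt can return different outputs.
    continuations = [
        "is reshaping newsroom workflows.",
        "raises new verification questions.",
        "remains poorly understood by the public.",
    ]
    return f"{prompt} {random.choice(continuations)}"

print(calculator(2, 3), calculator(2, 3))  # always prints: 5 5
print(toy_text_model("AI"))  # may change from run to run
print(toy_text_model("AI"))
```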
These five aspects of AI sit at the intersection of understanding and awareness, where journalism is best poised to intervene. For example, these high-level points could be addressed through evergreen explainers and sidebars that outlets reuse across stories on AI.
The field of journalism may also be well positioned to help people understand how content delivery algorithms work. Understanding these algorithms can help people increase their agency over information, inform citizenship and strengthen community building, among other outcomes.9
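As one illustration of what such an explanation might cover, the sketch below shows a minimal engagement-weighted ranking heuristic. This is our simplified example with invented signals and weights, not any platform’s actual algorithm:

```python
# Hypothetical articles with invented engagement signals.
articles = [
    {"headline": "City council passes budget", "clicks": 120, "shares": 4, "age_hours": 2},
    {"headline": "Celebrity feud erupts online", "clicks": 900, "shares": 150, "age_hours": 8},
    {"headline": "School board election results", "clicks": 60, "shares": 2, "age_hours": 1},
]

def feed_score(article):
    # Engagement pushes a story up the feed; age pushes it down.
    engagement = article["clicks"] + 10 * article["shares"]
    return engagement / (1 + article["age_hours"])

# Under this heuristic, high-engagement stories outrank fresher civic news.
for article in sorted(articles, key=feed_score, reverse=True):
    print(round(feed_score(article), 1), article["headline"])
```

Even a toy example like this can show audiences why engagement-optimized feeds surface some stories over others.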
Research we reviewed suggests:
- It is important for anyone explaining AI to focus on the humans behind AI systems and remind audiences that computers and humans have different and complementary strengths.10 In particular, “AI is a tool that uses existing data to make predictions or generate content. In other words, it is creating a best approximation … based on what has already been created by humans.”11
Aspects of AI literacy that may be most relevant for journalism:
- Journalism plays a part in supporting adults in learning about new technologies, including “AI literacy” and “algorithmic literacy.” It is important that approaches to coverage fully recognize this role and think holistically about it.
- Any given outlet’s audience will have different background knowledge and different information needs. Journalists and outlets will need to ensure they understand their particular audience, rather than developing a universal blueprint to explain emerging technologies.
- Journalism may be well suited to connect the dots between technical explanations and social and ethical impacts,12 since journalists frequently draw similar connections in their work. On the other hand, journalists may not be as well positioned to train the public in how to use AI tools, since experiential learning is likely to be more effective.13
- Journalists are also in a strong place to report on funding, business models, government relationships and other political factors behind the development of AI and other new technologies.
Where more research would be helpful:
- Before research can help journalism support public AI literacy, we need better answers to three big questions:
- What do people need to know about AI? (Relatedly, researchers should also consider what people currently know about AI so that efforts can meet the public where they are.)
- What can journalism realistically teach people about AI?
- What impact does current reporting on AI have on AI literacy?
- Frameworks and validated measures for AI literacy — which will likely come from educators and psychometricians — would make it easier to measure the efficacy of particular reporting strategies on public AI literacy.14
Communicating about AI use in journalism
“Transparency” and “disclosure” are buzzwords in the AI and journalism space these days: dozens of toolkits and guidelines advise news organizations on their importance vis-à-vis the use of AI, and how to best communicate about its use. But there is much less agreement on the goals behind being transparent about AI use (including where and when it is needed), the value to audiences and what works best to improve audience understanding and awareness. Members of the public consistently say that they want transparency about AI use,15 but current data offers, at best, a mixed sense of what that actually means in practice. Given both the range of use cases for journalism and the need for better public understanding of both AI and journalism, we need more data on how the public uses and interprets disclosures in context.
Global Perspectives
Several surveys of journalists suggest that Global South journalists are highly likely to adopt AI tools and other new technologies.16 One recent survey found that more than eight in ten Global South journalists say they use AI for work, but less than one in five say their organization has policies about AI use.17
Working group member Zara Schroeder shares her perspective:
This gap highlights a pressing need for institutional guidance, training, and ethical frameworks to support responsible adoption.18 Without such policies, journalists may inadvertently compromise accuracy, transparency or privacy, especially in high-stakes reporting environments.19
Moreover, disparities in access to resources, infrastructure and AI literacy between Global South and Global North newsrooms may exacerbate existing inequalities in news production and distribution.20 Journalists in under-resourced contexts may rely on freely available generative AI tools without fully understanding their limitations, biases or risks of misinformation.21
For example, in countries like Zimbabwe, Uganda, Kenya and South Africa, journalists are using AI tools to automate time-consuming tasks such as generating basic news reports, transcribing interviews or summarizing press releases.22 Off-the-shelf tools like ChatGPT, Otter.ai (for transcription) and Google’s AI-based speech-to-text services are being used to speed up workflows, especially in fast-paced newsrooms with limited staff.23
There is also a lot that the Global North can learn from homegrown African AI projects.
Journalists in Ghana and Nigeria have used machine learning models to help analyze large public datasets, such as government budgets, procurement records or COVID-19 case numbers to identify patterns, anomalies or potential corruption.24 AI-driven data analysis tools help uncover insights that would be difficult to detect manually, enabling journalists to produce in-depth, evidence-based stories.
In 2022, Dataphyte, a Nigerian media and data analytics organization, launched Nubia, an open-source AI platform designed to analyze large datasets and generate first-draft news stories.25 Users input raw data (e.g., government statistics), and Nubia produces narrative reports, templates, data breakdowns and visuals, which journalists then refine and contextualize. This significantly speeds up complex data-driven reporting.
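The general data-to-text pattern behind such tools can be illustrated with a short sketch. This is our hypothetical example of the approach, not Nubia’s actual code, schema or output:

```python
# A hypothetical budget record of the kind a journalist might upload.
record = {
    "state": "Example State",
    "year": 2022,
    "sector": "education",
    "allocation_ngn": 38_500_000_000,
    "change_pct": 12.4,
}

def draft_story(r):
    # Fill a narrative template from structured fields; a journalist
    # then verifies, refines and contextualizes the draft.
    return (
        f"In {r['year']}, {r['state']} allocated "
        f"{r['allocation_ngn']:,} naira to {r['sector']}, "
        f"a {r['change_pct']:+.1f}% change from the previous year."
    )

print(draft_story(record))
```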
Another example is Dubawa, a West African fact-checking initiative, which has developed two AI-powered services. One is a WhatsApp chatbot that responds to user queries by drawing on previously verified fact-checks from Dubawa and the International Fact-Checking Network.26 The second is an audio platform that monitors live radio broadcasts, transcribes audio in Nigerian and Ghanaian English dialects, identifies suspicious claims and helps journalists verify or debunk them.27
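The chatbot’s retrieval step can likewise be sketched in miniature. The claims, verdicts and similarity threshold below are invented, and Dubawa’s actual matching system is undoubtedly more sophisticated:

```python
from difflib import SequenceMatcher

# A hypothetical store of previously verified claims and verdicts.
fact_checks = {
    "the governor doubled the state education budget": "False",
    "the vaccine was approved by national regulators in 2021": "True",
}

def match_claim(query, threshold=0.6):
    # Return the most similar verified claim and its verdict, if any.
    best_claim, best_score = None, 0.0
    for claim in fact_checks:
        score = SequenceMatcher(None, query.lower(), claim.lower()).ratio()
        if score > best_score:
            best_claim, best_score = claim, score
    if best_score >= threshold:
        return best_claim, fact_checks[best_claim]
    return None, "No match; route the query to a human fact-checker."

print(match_claim("Did the governor double the education budget?"))
```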
Overall trust in journalism and in the broader information environment also varies widely by region, as does knowledge about journalism. That makes it particularly difficult to generalize from studies that focus on a small number of countries. Local political pressures, media ownership structures and digital access all influence how AI tools are used and received by both journalists and audiences.
To build a more comprehensive understanding, future research should incorporate regionally diverse case studies and prioritise collaborations with local media organizations. This can help ensure that the global AI transition in journalism is inclusive, context-sensitive and responsive to the needs of journalists working in a wide range of environments.
In assessing this area of research, we considered over 20 studies related to AI transparency and disclosure; the full list, with methods and data sources, appears in the Appendix. Importantly, the discussion here focuses on transparency within the journalism industry, rather than transparency on platforms (e.g., Apple News) and social media sites (e.g., TikTok). There has been considerable research on transparency, especially compared with other questions related to AI and journalism, allowing for a more conclusive set of takeaways. However, these studies vary widely in their analytical methods (from surveys and experiments to in-depth interviews with journalists and news audiences), geographic ranges and specific research questions.
A number of studies have explored whether labeling could further harm societal trust in information, but the results have varied with even small differences in label text. For example, one experiment found that labeling content “AI-generated” reduces audience ratings of its accuracy but not their trust in news or journalists.28 Another found that a longer label specifying autonomous AI reduces perceived trustworthiness but not accuracy.29 A third study suggests that some of these impacts may decrease as people become more familiar with these technologies.30 Moreover, negative consequences of disclosure could align with other structural forms of bias. One recent study found that while both humans and LLMs rated identical news articles less favorably when an AI disclosure was present, the LLMs also penalized authors from particular racial or gender groups disproportionately for their transparency.31
Several studies (using different methods) find that audiences tend to overestimate the prevalence and autonomy of AI in journalism.32 Additional studies find audiences do not consistently or accurately interpret the intended meaning of AI attribution in bylines (e.g. “written by staff writer with artificial intelligence (AI) tool”) or simple labels.33 Several studies also suggest that people with deeper knowledge about journalism have more concerns about AI use, perhaps because they better understand verification practices.34 But offering excessive information can have negative impacts; while there is not much research on this aspect of AI transparency, a number of studies on privacy disclosure policies have explored “privacy fatigue” and disengagement.35
Almost across the board, the research we reviewed suggests that many people lack a nuanced understanding of journalistic processes and principles. And recent data CNTI collected shows that journalists themselves recognize the need to do a better job communicating the field’s value in general.36 It is nearly impossible to envision successful communication about technology tools that does not reckon with this larger challenge. After all, a major goal of journalism is to provide the public with reliable information in order to support civic participation, public life and democracy. And the norms of production that seek to make journalism reliable, such as independence and verification, also underlie journalists’ use of new technologies — these norms need to be better communicated to achieve greater levels of public trust in information.
A 2024 synthesis paper argues that briefly explaining both human and machine contributions has shown some promise.37 A recent meta-analysis found that people interpret AI-labeled content as slightly less credible but equally readable, accurate and fair.38 Perceived AI authorship was also found to have a bigger impact than actual AI authorship on most measures39 — all the more reason to provide information about the role of human journalists, highlighting the importance of verification.
Research we reviewed suggests:
- Before communicating about uses of AI, journalism outlets should consider communicating about the methods and value of journalism more broadly.
- Some methods, both human and technological, may best be explained as policies on a stand-alone page rather than repeated with every individual story or segment. For example, some frequent internal AI uses — like transcription and spellcheck — are widely accepted by audiences and may not require constant disclosure.40
- Rather than using the label “AI,” which many people interpret narrowly to refer only to generative AI, it may be clearer to answer the following three questions: What tool did I use? What did I use it to do? Do I stand by my work? In other words, it’s better to explain briefly how AI was used than to label that AI was used.41
- At least at this point in time, the process of generating or manipulating images and video should be explained at every use, because research shows there is distrust and concern about these uses.42 However, what that explanation should contain remains less clear. Providing sources for AI-generated text may help mitigate distrust.43
- Journalism organizations should collaborate to build consensus around how to best operationalize shared values such as transparency and verification.44
Where more research would be helpful:
- There has been far more research on transparency about AI use in synthetic text and images than in audio — what information would be most appropriate?
- How substantive must changes to photos or video be to require explanation? Many types of photo editing have been a common practice for years and are not typically explained. Existing photo manipulation policies, many of which date back to the widespread adoption of software such as Adobe Photoshop, may be helpful in developing guidelines and thresholds for communication.
- How do people interact with transparency information in the context of existing news habits and routines?
- What is the impact of transparency strategies on people’s understanding of AI? …on people’s understanding of journalism?
- What is the relationship between reactions to transparency and (mis)trust in particular outlets and journalists?
Covering AI in journalism
Journalism serves an important and powerful role in explaining to the public how new technologies work, shaping people’s understanding of and attitudes towards technology.45 Recent developments in AI technologies have sparked research interest in coverage of AI, especially (1) when and how coverage has increased over time and (2) what sentiment and frames journalists are using to explain AI.46
Global Perspectives
Of the topics explored in this briefing, the research about journalism’s coverage of AI is probably the most geographically diverse. Even so, considerations around issues like sourcing are likely to vary considerably by region. For example, journalists in countries with fewer technology companies may have less access to company spokespeople — but better access to outsourced workers. Infrastructure and regulatory contexts also vary in ways that may impact what’s most relevant to a particular audience.
Working group member Zara Schroeder shares examples:
In many African countries, journalists often report on the downstream effects of global tech decisions rather than direct interactions with Big Tech firms. This shapes how sources are selected and what stories get told.
For instance, Ghanaian journalists have investigated the local impact of global content moderation work outsourced by companies like Facebook.47 In 2023, Ghanaian outlets helped amplify stories from Nairobi, Kenya, where whistleblowers exposed poor working conditions and psychological harm among content moderators employed by third-party firms on behalf of Meta.48 Without direct access to Silicon Valley executives and spokespeople, reporters in Ghana and across the region instead sourced their stories from affected workers, labor rights advocates and leaked internal communications, demonstrating how sourcing adapts when direct access is limited.
In Nigeria, journalists have covered growing concerns over AI-powered surveillance technologies, such as facial recognition systems used by law enforcement.49 With little transparency from tech vendors or government bodies, reporters often rely on investigative methods like freedom of information requests, leaked documents or interviews with civil society watchdogs.50 A notable case involved uncovering the quiet deployment of Chinese-made surveillance systems in public infrastructure. Here, sourcing is shaped more by investigative persistence and NGO collaboration than corporate access.
These examples demonstrate that sourcing practices are nuanced globally, and that concerns about overreliance on sources that are not independent may be less salient in lower-access environments. Instead, sourcing in these environments is often more grassroots, labor-intensive and dependent on alternative networks of information. Understanding these differences is critical for accurately interpreting journalism produced in diverse media ecosystems.
We looked at two dozen studies that examine (1) the tone of AI coverage, (2) the language journalists use to frame AI (and potential benefits and risks), (3) interviews with journalists and (4) surveys and experiments. These studies reveal important findings, but they are rarely conclusive due to gaps in the types of news organizations studied (large vs. small), the geographic contexts included and the varied research methods used. Thus, we need more data and research to form a comprehensive understanding of AI coverage in contexts around the world.
When covering AI topics — including technology developments and regulations — several researchers find that coverage may not include a full range of sources and perspectives. This can pose challenges for covering AI because journalists may (1) rely too heavily on sources that have a financial interest in the outcomes,51 (2) lack the technical expertise needed to verify claims or ask important questions of first-person reports52 and thus (3) be overly swayed by the positions or frames offered by their sources. Research also finds that coverage of AI follows one of two overarching narratives:53 (1) AI-in-general, which orients towards expected future implications, treats AI as inevitable and foregrounds economic competition;54 and (2) AI-in-particular, which emphasizes specific, current uses of the technology, foregrounds impacts to particular communities and stakeholders, and highlights both continuity and change. That is, stories that focus on concrete technologies tend to take a more measured and balanced approach.
Researchers have also explored whether coverage of AI portrays the technology positively or negatively (or both, depending on topic). These studies examine the sentiment and/or frames used to explain the technology to audiences. Findings to date suggest that the topic of a story matters a great deal for whether the take is positive or negative.55 For example, coverage of AI may lean positive regarding healthcare, economic topics and innovation,56 neutral-to-negative in the context of political topics57 and negative when discussing topics like data bias and cyber crime.58 As AI systems continue to develop, it is important to know both how journalists are explaining these developments and how the topic of a story shapes its coverage.
The differences described above are likely due to varied research methods and settings. Researchers are exploring how AI is covered (1) in different countries (i.e., information environments), (2) by different news organizations, (3) across different time periods (e.g., 2010–2021, 2021–2024, etc.), (4) with different measures for sentiment, topics and framing and (5) using different keyword sets (e.g., “artificial intelligence” and “big data” or “artificial intelligence,” “robots/robotics,” “algorithms” and “automation”) to select articles for inclusion in the study.
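To see how much the keyword list matters, consider a small Python sketch with invented headlines; the narrow and broad term lists mirror the examples above:

```python
import re

# Hypothetical headlines standing in for a news archive.
headlines = [
    "Robots take over the warehouse floor",
    "New algorithm decides which stories you see",
    "Artificial intelligence writes its first earnings report",
    "Automation reshapes the assembly line",
]

narrow = re.compile(r"artificial intelligence", re.IGNORECASE)
broad = re.compile(r"artificial intelligence|robots?|algorithms?|automation",
                   re.IGNORECASE)

print(sum(bool(narrow.search(h)) for h in headlines))  # 1 article selected
print(sum(bool(broad.search(h)) for h in headlines))   # 4 articles selected
```

The same archive yields very different corpora depending on the term list, which helps explain why studies report divergent trends in the volume and tone of AI coverage.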
Another factor that can shape AI coverage is the ideological make-up of a news organization’s audience or the overt ideology of the news organization itself. Indeed, one study found that organizations with either left or right ideological biases (based on ratings from Media Bias/Fact Check,59 an independent effort in which fact checkers grade organizations) report on AI risks more than organizations with centrist or low ideological bias.60 Yet it remains to be seen whether these findings will be corroborated by further research.
Studies consistently find that newsroom coverage of AI increased dramatically during the mid-2010s.61 Much of this research, however, does not include coverage after late 2022, when ChatGPT was released to the public. Recent analyses, though, find steady increases in the number of articles about AI through 2023.62 Coverage of AI remains a major area of focus in newsrooms, and forthcoming research will shed light on how the amount of coverage has changed over time.
Research to date has explored a range of research questions — including the tone of AI coverage, the frames used to explain AI technologies and differences in AI coverage over time — but research does not necessarily yield clear assessments of the quality or thoroughness of AI coverage (especially outside of English-speaking locations). While there are some valuable learnings from the research to date, it is critical that we (1) have more research and (2) consistently apply that research to understanding AI coverage at a deeper level.
Research we reviewed suggests:
- Newsrooms should continue to track not just how much they cover AI but how they cover it. To do so, they need a deep understanding of the technology itself and the range of stakeholders involved. Journalists can then ask deeper questions, seek input from a range of voices and monitor their own framing more effectively. It is critical that journalists be intentional about and aware of the frames they use, because the public learns about technologies from this coverage.63
- Coverage of AI should consistently include a comprehensive selection of perspectives, including affected workers, cross-industry experts, non-industry technical experts, members of the general public and critics — in addition to technology company and government sources — to ensure the public has access to a full range of information.64
- News organizations should prioritize coverage of specific, current uses, which can help audiences form a clearer understanding of what the technologies can and can’t do. However, broad, future-focused reporting that treats the technology as inevitable may lead to hype and unrealistic expectations of both benefits and harms.65
- As AI technologies continue to develop and improve, journalists and newsrooms should continue to clarify that AI systems (1) are built and maintained by humans and (2) do not have the ability to think, understand, talk, etc. the way humans do.66
Where more research would be helpful:
- We need to continue building knowledge — across a variety of settings — about how specific topics, sources and perspectives relate to how AI is covered. While there has been a lot of research in this area, there are few conclusive findings that can be applied globally.
- Learning how audiences around the world react to different coverage language and frames — likely through experimental designs67 — is important for understanding how coverage affects audiences’ beliefs about AI.
- Most research has examined national-level news coverage, whereas regional and local news organizations have received much less attention. How does coverage of AI differ when looking at local and regional news organizations?
- There are also gaps when it comes to geographic location. Research to date has focused on the U.S. and Western Europe. Does coverage elsewhere show similar patterns? What is unique to each region of the world?
- Similarly, most of the research uses English-language (Anglophone) news coverage. Further expanding research to non-English sources will provide a more comprehensive understanding of AI coverage.
- What constitutes “comprehensive” or “high-quality” reporting on technologies and their political and economic contexts?
Current Working Group Members
A list of current working group members and their affiliations is shown here:
- Akintunde Babatunde, Executive Director, Centre for Journalism Innovation and Development
- Jay Barchas-Lichtenstein, Senior Research Manager, Center for News, Technology & Innovation
- Madhav Chinnappa, Independent Media Consultant
- Utsav Gandhi, Research Intern, Center for News, Technology & Innovation; PhD Student, University of Illinois Chicago
- Samuel Jens, Research Associate, Center for News, Technology & Innovation
- Amy Mitchell, Executive Director, Center for News, Technology & Innovation
- Sophie Morosoli, Postdoctoral Researcher at the AI, Media & Democracy Lab, University of Amsterdam
- Gary Mundy, Director of Research, Policy and Impact, Thomson Foundation
- Oluwapelumi Oginni, Project Manager, AI Initiatives, Centre for Journalism Innovation and Development
- Joshua Olufemi, Executive Director, Dataphyte Foundation
- Amy Ross Arguedas, Research Fellow, Reuters Institute for the Study of Journalism
- Zara Schroeder, Researcher, Research ICT Africa
- Felix M. Simon, Research Fellow in AI and News, Reuters Institute for the Study of Journalism & Research Associate, Oxford Internet Institute, University of Oxford
- Scott Timcke, Senior Research Associate, Research ICT Africa
- Jaemark Tordecilla, Independent Media Advisor, Philippines
References
Adefioye, S. (2024). The adoption of artificial intelligence into journalism practice: perspectives from the Ghanaian media industry. MediAsia Official Conference Proceedings, 405–419. https://doi.org/10.22492/issn.2186-5906.2024.33
Aikulola, S. (2025). Press freedom day: UN, IPC, MRA call for responsible use of AI. https://guardian.ng/features/media/press-freedom-day-un-ipc-mra-call-for-responsible-use-of-ai/
Alayande, A., & Olufemi, J. (2023). Generating buzz: How can local newsrooms in Africa use generative AI to stay ahead of the curve? Research ICT Africa. https://researchictafrica.net/2023/06/27/generating-buzz-how-can-local-newsrooms-in-africa-use-generative-ai-to-stay-ahead-of-the-curve/
Allaham, M., Kieslich, K., & Diakopoulos, N. (2025). Global perspectives of AI risks and harms: analyzing the negative impacts of AI technologies as prioritized by news media. Preprint. https://arxiv.org/abs/2501.14040
Altay, S., & Gilardi, F. (2024). People are skeptical of headlines labeled as AI-generated, even if true or human-made, because they assume full AI automation. PNAS Nexus, 3(10), 403.
Ananny, M. (2024). Making generative artificial intelligence a public problem: seeing publics and sociotechnical problem-making in three scenes of AI failure. Javnost – The Public, 31(1), 89–105.
Anderson, A. A., Scheufele, D. A., Brossard, D., & Corley, E. A. (2012). The role of media and deference to scientific authority in cultivating trust in sources of information about emerging technologies. International Journal of Public Opinion Research, 24(2), 225–237.
Associated Press. (2024). The Associated Press stylebook, 2024-2026. Basic Books.
Barchas-Lichtenstein, J., Mitchell, A., Wright, E., LeCompte, C., Jens, S., & Beed, N. (2025). What it means to do journalism in the age of AI: journalist views on safety, technology and government. Center for News, Technology & Innovation. https://cnti.org/2024-journalist-survey/
Bartholomew, J. & Mehta, D. (2023). How the media is covering ChatGPT. Columbia Journalism Review. https://www.cjr.org/tow_center/media-coverage-chatgpt.php
Beckett, C. & Yaseen, M. (2023). Generating change: A global survey of what news organisations are doing with AI. JournalismAI. https://www.journalismai.info/research/2023-generating-change
Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021). Managing artificial intelligence. Management Information Systems Quarterly, 45(3), 1433–1450.
Biagini, G. (2025). Towards an AI-literate future: A systematic literature review exploring education, ethics, and applications. International Journal of Artificial Intelligence in Education.
Bien-Aimé, S., Wu, M., Appelman, A., & Jia, H. (2025). Who wrote it? News readers’ sensemaking of AI/human bylines. Communication Reports, 38(1), 46–58.
Brantner, C., & Saurwein, F. (2021). Covering technology risks and responsibility: automation, artificial intelligence, robotics, and algorithms in the media. International Journal of Communication, 15(0), Article 0.
Brause, S. R., Zeng, J., Schäfer, M. S., & Katzenbach, C. (2023). Chapter 24: Media representations of artificial intelligence: surveying the field. In S. Lindgren (Ed.), Handbook of Critical Studies of Artificial Intelligence. Edward Elgar Publishing.
Brennen, J. S., Howard, P. N., & Nielsen, R. K. (2022). What to expect when you’re expecting robots: futures, expectations, and pseudo-artificial general intelligence in U.K. news. Journalism, 23(1), 22–38.
Bunz, M., & Braghieri, M. (2022). The AI doctor will see you now: Assessing the framing of AI in news coverage. AI & SOCIETY, 37(1), 9–22.
The Cable. (2023a). How digital surveillance threatens press freedom in Nigeria, West African countries. https://www.thecable.ng/how-digital-surveillance-threatens-press-freedom-in-west-africa/
The Cable. (2023b). Heightened surveillance by security operatives puts Nigerian journalists under climate of fear. https://www.thecable.ng/special-report-heightened-surveillance-by-security-operatives-puts-nigerian-journalists-under-climate-of-fear/
Canavilhas, J., & Essenfelder, R. (2022). Apocalypse or redemption: How the Portuguese media cover artificial intelligence. In J. Vázquez-Herrero, A. Silva-Rodríguez, M.-C. Negreira-Rey, C. Toural-Bran, & X. López-García (Eds.). Total journalism: Models, techniques and challenges (pp. 255–270). Springer International Publishing.
Cheong, I., Guo, A., Lee, M., Liao, Z., Kadoma, K., Go, D., Chang, J. C., Henderson, P., Naaman, M., & Zhang, A. X. (2025). Penalizing transparency? How AI disclosure and author demographics shape human and AI judgments about writing (No. arXiv:2507.01418). arXiv. https://doi.org/10.48550/arXiv.2507.01418
Choi, S. (2024). Temporal framing in balanced news coverage of artificial intelligence and public attitudes. Mass Communication and Society, 27(2), 384–405.
Cools, H., B. Van Gorp, and M. Opgenhaffen. (2022). Where exactly between utopia and dystopia? A framing analysis of AI and automation in U.S. newspapers. Journalism, 25(1), 3–21.
Cools, H., de Vreese, C., El Ali, A., Helberger, N., Prajod, P., Mattis, N., Morosoli, S., Naudts, L., & Weikmann, T. (2025, April 22). Tackling the transparency puzzle: Five perspectives from AI disclosure research in news. Medium. https://generative-ai-newsroom.com/tackling-the-transparency-puzzle-0969b3bcc489
Egwu, P. & Saint, E. (2024). From debunking disinformation to turning datasets into stories, AI is changing newsrooms in Nigeria. International Journalists Network. https://ijnet.org/en/story/debunking-disinformation-turning-datasets-stories-ai-changing-newsrooms-nigeria
Epstein, Z., Fang, M., Arechar, A., & Rand, D. (2023). What label should be applied to content produced by generative AI? Preprint. https://osf.io/v4mfz
Fletcher, R. & Kleis Nielsen, R. (2024). What does the public in six countries think of generative AI in news? Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/what-does-public-six-countries-think-generative-ai-news
Frau-Meigs, D. (2024). Algorithm Literacy as a subset of media and information literacy: Competences and design considerations. Digital, 4(2), 512–28.
Gagrčin, E., Naab, T. K., and Grub, M.F. (2024). Algorithmic media use and algorithm literacy: An integrative literature review. New Media & Society.
Gilardi, F., Lorenzo, S. D., Ezzaini, J., Santa, B., Streiff, B., Zurfluh, E., & Hoes, E. (2025). Willingness to read AI-generated news is not driven by their perceived quality. Preprint. https://arxiv.org/abs/2409.03500
Gondwe, G. (2025). Perceptions of AI-driven news among contemporary audiences: A study of trust, engagement, and impact. AI & SOCIETY. https://doi.org/10.1007/s00146-025-02294-x
Hall, R. & Wilmot, C. (2025a). Meta faces Ghana lawsuits over impact of extreme content on moderators. The Guardian. https://www.theguardian.com/technology/2025/apr/27/meta-faces-ghana-lawsuits-over-impact-of-extreme-content-on-moderators
Hall, R. & Wilmot, C. (2025b). ‘I didn’t eat or sleep’: A Meta moderator on his breakdown after seeing beheadings and child abuse. The Guardian. https://www.theguardian.com/technology/2025/apr/27/meta-moderator-on-the-cost-of-viewing-beheadings-child-abuse-and-suicide
Hokkanen, J. (2025). AI adoption in South African newsrooms: Exploring journalists’ perceptions. Journalism Research News. https://journalismresearchnews.org/article-ai-adoption-in-south-african-newsrooms-exploring-journalists-perceptions/
Höppner, S. (2025). Africa’s content moderators want compensation for job trauma. DW. https://www.dw.com/en/africas-content-moderators-want-compensation-for-job-trauma/a-72401025
Institute of Development Studies. (2023). Nigeria spending billions of dollars on harmful surveillance of citizens. https://www.ids.ac.uk/press-releases/nigeria-spending-billions-of-dollars-on-harmful-surveillance-of-citizens/
Ishengoma, D. J., & Magolanga, E. M. (2025). Adaptations of artificial intelligence (AI) in Tanzanian newsrooms: Opportunities, benefits and threats to journalism professionals. Journal of Applied Journalism & Media Studies. https://doi.org/10.1386/ajms_00178_1
Jamil, S. (2020). Artificial intelligence and journalistic practice: The crossroads of obstacles and opportunities for the Pakistani journalists. Journalism Practice, 15(10), 1400–1422. https://doi.org/10.1080/17512786.2020.1788412
Ji, X., Kuai, J., & Zamith, R. (2024). Scrutinizing algorithms: Assessing journalistic role performance in Chinese news media’s coverage of artificial intelligence. Journalism Practice, 18(9), 2396–2413.
Jia, H., Appelman, A., Wu, M., & Bien-Aimé, S. (2024). News bylines and perceived AI authorship: Effects on source and message credibility. Computers in Human Behavior: Artificial Humans, 2(2), 100093.
Kabir, A. & Adebajo, A. (2023). How digital surveillance threatens press freedom in West Africa. HumAngle. https://humanglemedia.com/how-digital-surveillance-threatens-press-freedom-in-west-africa/
Korneeva, E., Salge, T. O., Teubner, T., & Antons, D. (2023). Tracing the legitimacy of artificial intelligence: A longitudinal analysis of media discourse. Technological Forecasting and Social Change, 192, 122467.
Köstler, L., & Ossewaarde, R. (2022). The making of AI society: AI futures frames in German political and media discourses. AI & Society, 37(1), 249–263.
Kuai, J. (2025). Navigating the AI hype: Chinese journalists’ algorithmic imaginaries and role perceptions in reporting emerging technologies. Digital Journalism, 1–20.
Kumar, A., & Sangwan, S. R. (2024). Conceptualizing AI literacy: Educational and policy initiatives for a future-ready society. International Journal of All Research Education and Scientific Methods, 12(4).
Lammar, D., Horst , M., & Müller, R. (2025). AI in the German media: Narratives of AI-in-Particular and AI-in-General in German media reporting about artificial intelligence. Digital Journalism, 1–19.
LeCompte, C., Mitchell, A., & Jens, S. (2024). Focus group insights #2: perceptions of artificial intelligence use in news and journalism. Center for News, Technology & Innovation. https://innovating.news/article/focus-group-insights-2-perceptions-of-artificial-intelligence-use-in-news-and-journalism/
Lush, D. (2022). How African newsrooms are using AI to analyse data and produce good journalism. IMS. https://www.mediasupport.org/blogpost/how-african-newsrooms-are-using-ai-to-analyse-data-and-produce-good-journalism/
Lyu, T., Guo, Y., & Chen, H. (2024). Understanding the privacy protection disengagement behaviour of contactless digital service users: The roles of privacy fatigue and privacy literacy. Behaviour & Information Technology, 43(10), 2007–2023.
Magalhães, J. C., & Smit, R. (2025). Less hype, more drama: open-ended technological inevitability in journalistic discourses about AI in the US, the Netherlands, and Brazil. Digital Journalism, 1–18. https://doi.org/10.1080/21670811.2025.2522281
Malik, N. (2025). The decline of print advertising and its impact on journalism. International Journal of Advance Research and Innovative Ideas in Education, 11(1). https://ijariie.com/AdminUploadPdf/The_Decline_of_Print_Advertising_and_its_Impact_on_Journalism_ijariie25794.pdf
Mari, W. (2024). The pre-history of news-industry discourse around artificial intelligence. Emerging Media, 2(3), 499–522.
Mattis, N. M., Kieslich, K., & de Vreese, C. (2025). Feeling iffy about generative AI: When journalists disclose AI use, trust in news is lower. Preprint. https://osf.io/preprints/osf/tmzq4
Media Bias/Fact Check. (2025). Methodology. Media Bias/Fact Check. https://mediabiasfactcheck.com/methodology/
Mills, K., Ruiz, P., Lee, K., Coenraad, M., Fusco, J., Roschelle, J., & Weisgrau, J. (2024). AI literacy: a framework to understand, evaluate, and use emerging technology. Digital Promise.
Misri, A., Blanchett, N., & Lindgren, A. (n.d.). “There’s a rule book in my head”: Journalism ethics meet AI in the newsroom. Digital Journalism, 1–19.
Mitchell, A., Jens, S., Moon Sehat, C., LeCompte, C., Beed, N., Wright, E. and Barchas-Lichtenstein, J. (2025). What the public wants from journalism in the age of AI: a four country survey. Center for News, Technology & Innovation. https://innovating.news/2024-public-survey/
Mohammed, A., Elega, A. A., Ahmad, M. B., & Oloyede, F. (2024). Friends or foes? Exploring the framing of artificial intelligence innovations in Africa-focused journalism. Journalism and Media, 5(4), Article 4.
Montag, C., Nakov, P., & Ali, R. (2024). On the need to develop nuanced measures assessing attitudes towards AI and AI literacy in representative large-scale samples. AI & Society 40, 1129-1130.
Moran, R. E., & Shaikh, S. J. (2022). Robots in the news and newsrooms: unpacking meta-journalistic discourse on the use of artificial intelligence in journalism. Digital Journalism, 10(10), 1756–1774.
Moriniello, F., Martí-Testón, A., Muñoz, A., Silva Jasaui, D., Gracia, L., & Solanes, J. E. (2024). Exploring the relationship between the coverage of AI in WIRED Magazine and public opinion using sentiment analysis. Applied Sciences, 14(5), Article 5.
Morosoli, S., Naudts, L., Cools, H., Venkatraj, K. P., Helberger, N., & Vreese, C. de. (n.d.). Individuals’ need for rights and their sense of accountability connected to AI disclosures. Qualitative evidence from group interviews. Preprint. http://osf.io/768zc
Mugadzaweta, M.S. (2025). The adoption of AI in Zimbabwe’s newsrooms: A case of Zimpapers and Alpha Media Holdings. (Unpublished master’s thesis). Aga Khan University. https://ecommons.aku.edu/cgi/viewcontent.cgi?article=1016&context=etd_ke_gsmc_ma-digjour
Mukasa, R. (2024). Examining the role of artificial intelligence (AI) in transforming print journalism in Uganda. (Unpublished master’s dissertation). Aga Khan University, East Africa. Retrieved from https://ecommons.aku.edu/theses_dissertations/2322/
Munoriyarwa, A., Chiumbu, S., & Motsaathebe, G. (2021). Artificial intelligence practices in everyday news production: the case of South Africa’s mainstream newsrooms. Journalism Practice, 17(7), 1374–1392. https://doi.org/10.1080/17512786.2021.1984976
Ncube, L., Mofokeng, R. W., Chibuwe, A., Munoriyarwa, A., & Murangi, A. K. (2025). ‘Mind the gap’: artificial intelligence and journalism training in Southern African journalism schools. Media Practice and Education, 0(0), 1–17. https://doi.org/10.1080/25741136.2025.2464483
Nguyen, D. (2023). How news media frame data risks in their coverage of big data and AI. Internet Policy Review, 12(2).
Nguyen, D., & Hekman, E. (2024). The news framing of artificial intelligence: A critical exploration of how media discourses make sense of automation. AI & SOCIETY, 39(2), 437–451.
Oeldorf-Hirsch, A., & Neubaum, G. (2025). What do we know about algorithmic literacy? The status quo and a research agenda for a growing field. New Media & Society, 27(2), 681-701.
Olanipekun, S. O., & Olakoyenikan, O. (2022). Ethical implications of generative AI in journalism: Balancing innovation, truth, and public communication trust. World Journal of Advanced Research and Reviews, 16(3), 1293–1311. https://doi.org/10.30574/wjarr.2022.16.3.1159
Olawuyi, E. A., & Enuwah, J. (2025). Framing the future: Media narratives on artificial intelligence and its societal impact. International Journal of Current Research in the Humanities, 28(1), 269–290. https://doi.org/10.4314/ijcrh.v28i1.19
Parratt-Fernández, S., Chaparro-Domínguez, M.-Á., & Martín-Sánchez, I.-M. (2024). Spanish media coverage of journalistic artificial intelligence: Relevance, topics and framing. Revista Mediterránea de Comunicación, e25169–e25169.
Piasecki, S., Morosoli, S., Helberger, N., & Naudts, L. (2024). AI-generated journalism: Do the transparency provisions in the AI Act give news readers what they hope for? Internet Policy Review, 13(4).
Pinski, M., & Benlian, A. (2024). AI literacy for users – A comprehensive review and future research directions of learning methods, components, and effects. Computers in Human Behavior: Artificial Humans, 2(1), 100062.
Radcliffe, D. (2025). Journalism in the AI era: Opportunities and challenges in the Global South and emerging economies. Thomson Reuters Foundation.
Ross Arguedas, A. (2024). OK computer? Understanding public attitudes towards the uses of generative AI in news. Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/news/ok-computer-understanding-public-attitudes-towards-uses-generative-ai-news
Ruiz, P., & Glazer, K. (2024). Anthropomorphism of AI in learning environments: Risks of humanizing the machine. EdSurge. https://www.edsurge.com/news/2024-01-15-anthropomorphism-of-ai-in-learning-environments-risks-of-humanizing-the-machine
Sánchez-García, P., Diez-Gracia, A., Mayorga, I. R., & Jerónimo, P. (2025). Media self-regulation in the use of AI: limitation of multimodal generative content and ethical commitments to transparency and verification. Journalism and Media, 6(1), Article 1.
Schell, K. (2024). AI transparency in journalism: Labels for a hybrid era. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2025-01/RISJ%20Fellows%20Paper_Katja%20Schell_MT24_Final.pdf
Schüller, K. (2022). Data and AI literacy for everyone. Statistical Journal of the IAOS, 38(2), 477–490.
Sofiullahi, A. (2024). How journalism groups in Africa are building AI tools to aid investigations and fact-checking. Global Investigative Journalism Network. https://gijn.org/stories/africa-journalism-building-ai-investigations-fact-checking/
Solomons, S., & Ndlovu, M. W. (2024). AI adoption in South African newsrooms: exploring journalists’ perceptions. Communication, 50(2), 122–143. https://doi.org/10.1080/02500167.2024.2439971
Tadimalla, S. Y., & Maher, M. L. (2024). AI literacy for all: adjustable interdisciplinary socio-technical curriculum. 2024 IEEE Frontiers in Education Conference (FIE), 1–9. https://doi.org/10.1109/FIE61694.2024.10893159.
Takahashi, B., & Tandoc Jr., E. C. (2016). Media sources, credibility, and perceptions of science: learning about how people learn about science. Public Understanding of Science, 25(6), 674–690.
Thomson, T. J., Thomas, R. J., & Matich, P. (2024). Generative visual AI in news organizations: challenges, opportunities, perceptions, and policies. Digital Journalism, 1–22. https://doi.org/10.1080/21670811.2024.2331769
Toff, B., & Simon, F. M. (2024). “Or they could just not use it?”: The dilemma of AI disclosure for audience trust in news. The International Journal of Press/Politics.
Umeora, C. C. (2025). Artificial intelligence and journalistic practices in Nigeria: navigating awareness, adoption, and structural challenges. Multidisciplinary Research and Development Journals Int’l, 7(1), 136–152. https://mdrdji.org/index.php/mdj/article/view/125
Valderrama Barragán, M., Tironi, M., Cotoras, D., Correa, T., Humeres, M., & López, C. (2025). From industry hype to emerging criticism: analysing Chilean news media coverage of artificial intelligence. Digital Journalism, 1–23.
van der Schyff, K., Foster, G., Renaud, K., & Flowerday, S. (2023). Online Privacy Fatigue: A scoping review and research agenda. Future Internet, 15(5), Article 5. https://doi.org/10.3390/fi15050164
Wang, C., Sturgis, P., & Kadt, D. de. (2025). AI labeling reduces the perceived accuracy of online content but has limited broader effects (No. arXiv:2506.16202). arXiv. https://doi.org/10.48550/arXiv.2506.16202
Wang, S., & Huang, G. (2024). The impact of machine authorship on news audience perceptions: a meta-analysis of experimental studies. Communication Research, 51(7), 815–842.
Wittenberg, C., Epstein, Z., Berinsky, A. J., & Rand, D. G. (2024). Labeling AI-generated content: Promises, perils, and future directions. An MIT Exploration of Generative AI. Preprint. https://mit-genai.pubpub.org/pub/hu71se89/release/1
Yang, S., Krause, N. M., Bao, L., Calice, M. N., Newman, T. P., Scheufele, D. A., Xenos, M. A., & Brossard, D. (2023). In AI we trust: The interplay of media use, political ideology, and trust in shaping emerging AI attitudes. Journalism & Mass Communication Quarterly, 102(2), 382–406.
Yeste-Piquer, E., Suau-Martínez, J., Sintes-Olivella, M., & Xicoy-Comas, E. (2025). What if I prefer robot journalists? Trust and objectivity in the AI news ecosystem. Journalism and Media, 6(2), Article 2.
Zhu, H., & Zhang, M. (2024). “I never read it, but I always accept it”: unravelling social, individual, and policy design-induced influences on privacy policy acceptance. ICIS 2024 Proceedings. https://aisel.aisnet.org/icis2024/security/security/5
Appendix
The studies and their research methods for the “Communicating about AI use in journalism” section are presented below.
| Authors | Research Methodology; Data Sources |
| --- | --- |
| Altay & Gilardi, 2024 | Experiment; non-probability U.K. and U.S. samples |
| Barchas-Lichtenstein et al., 2025 | Survey; international journalists |
| Beckett & Yaseen, 2023 | Survey; international journalists |
| Bien-Aimé et al., 2025 | Experiment; non-probability U.S. sample |
| Cheong et al., 2025 | Experiment; non-probability U.S. sample |
| Epstein et al., 2023 | Experiment; Brazil, China, India, Mexico and U.S. samples |
| Fletcher & Kleis Nielsen, 2024 | Survey; Argentina, Denmark, France, Japan, U.K. and U.S. samples |
| Gilardi et al., 2025 | Experiment; Swiss sample |
| Gondwe, 2025 | Survey; non-probability sample from 10 African countries |
| Jia et al., 2024 | Experiment; non-probability U.S. sample |
| LeCompte et al., 2024 | Focus groups; participants in Australia, Brazil, South Africa and U.S. |
| Mattis et al., 2025 | Experiment; Dutch sample |
| Misri et al., n.d. | Interviews; journalists in Canada |
| Mitchell et al., 2025 | Survey; Australia, Brazil, South Africa and U.S. samples |
| Morosoli et al., n.d. | Group interviews; participants in the Netherlands |
| Piasecki et al., 2024 | Experiment; Dutch sample |
| Ross Arguedas, 2024 | Deliberative methodology; participants from Mexico, U.K. and U.S. |
| Sánchez-García et al., 2025 | Interviews; participants in Mexico, U.K. and U.S. |
| Schell, 2024 | Synthesis |
| Thomson et al., 2024 | Interviews; photo editors in Australia, France, Germany, Norway, Switzerland, U.K. and U.S. |
| Toff & Simon, 2024 | Experiment; non-probability U.S. sample |
| Wang & Huang, 2024 | Meta-analysis; 30 experimental studies |
| Wang et al., 2025 | Experiment; U.K. sample |
The studies for the “Covering AI in journalism” section, including time frames and data sources, are provided in the table below.
| Authors | Time Frame | Data Sources |
| --- | --- | --- |
| Allaham et al., 2025 | 2022–2024 | National news domains across 27 countries |
| Ananny, 2024 | N/A | Synthesis |
| Bartholomew & Mehta, 2023 | 2022–2023 | Media Cloud database; Internet TV News Archive |
| Brantner & Saurwein, 2021 | 1991–2018 | Austrian Media Corpus |
| Brause et al., 2023 | 2017–2022 | Articles from SCOPUS/Web of Science |
| Brennen et al., 2022 | 2018 | U.K.: The Guardian, HuffPost, The Telegraph, The Daily Mail, MailOnline, WIRED U.K. and the BBC |
| Bunz & Braghieri, 2022 | 1980–2019 | The Wall Street Journal, The Daily Telegraph and The Guardian |
| Canavilhas & Essenfelder, 2022 | 2020 | Portugal: 5 leading national newspapers |
| Choi, 2024 | 2022 | Survey experiment |
| Cools et al., 2022 | 1985–2020 | The New York Times and The Washington Post |
| Ji et al., 2024 | 2019–2023 | China: text analysis of journalistic investigations |
| Korneeva et al., 2023 | 1980–2020 | The New York Times, The Times, The Guardian and the Financial Times |
| Köstler & Ossewaarde, 2022 | 2018–2019 | Germany: government policy documents and Die Welt, Die Tageszeitung, Frankfurter Allgemeine Zeitung and Die Zeit |
| Kuai, 2025 | 2024 | China: in-depth interviews with journalists |
| Lammar et al., 2025 | 2019–2022 | Germany: Bild, Frankfurter Allgemeine Zeitung, Süddeutsche Zeitung, Die Welt and Die Zeit |
| Li & Long, 2025 | 2023–2024 | U.S.: text analysis and semi-structured interviews |
| Magalhães & Smit, 2025 | 2020–2023 | The New York Times, De Volkskrant and Folha de S.Paulo |
| Mohammed et al., 2024 | 2021–2024 | African continent: English-language outlets |
| Moran & Shaikh, 2022 | 2016–2020 | U.S. and U.K. news outlets |
| Moriniello et al., 2024 | 2018–2023 | WIRED |
| Nguyen & Hekman, 2024 | 2010–2021 | The New York Times, The Guardian, WIRED and Gizmodo |
| Parratt-Fernández et al., 2024 | 2010–2023 | Spanish outlets |
| Tandoc et al., 2025 | 2001–2023 | Singapore: The Straits Times, Channel News Asia, Today Online |
| Valderrama et al., 2025 | 2008–2023 | Chile: Diario Financiero, El Mercurio, La Cuarta and La Tercera |
| Wang & Downey, 2025 | 2022–2023 | Outlets in China, India, U.K. and U.S. |
Footnotes
1. Mari, 2024
2. Biagini, 2025; Frau-Meigs, 2024; Gagrčin et al., 2024; Kumar & Sangwan, 2024; Mills et al., 2024; Oeldorf-Hirsch & Neubaum, 2025; Pinski & Benlian, 2024; Schüller, 2022; Tadimalla & Maher, 2024
3. Biagini, 2025; Gagrčin et al., 2024; Mills et al., 2024; Pinski & Benlian, 2024
4. Anderson et al., 2012; Takahashi & Tandoc, 2016; Yang et al., 2023
5. Jamil, 2020; Malik, 2025; Munoriyarwa et al., 2021
6. Adefioye, 2024; Ishengoma & Magolanga, 2025; Mukasa, 2024; Umeora, 2025
7. Mills et al., 2024
8. Pinski & Benlian, 2024; see also Berente et al., 2021
9. Gagrčin et al., 2024
10. Associated Press, 2024; Mills et al., 2024
11. Ruiz & Glazer, 2024
12. Biagini, 2025
13. Pinski & Benlian, 2024
14. Montag et al., 2024
15. Beckett & Yaseen, 2023; Fletcher & Kleis Nielsen, 2024; Gondwe, 2025; LeCompte et al., 2024; Piasecki et al., 2024; Ross Arguedas, 2024; Toff & Simon, 2024
16. Barchas-Lichtenstein et al., 2025; Radcliffe, 2025
17. Radcliffe, 2025
18. Ncube et al., 2025
19. Radcliffe, 2025
20. Munoriyarwa, 2024
21. Olanipekun & Olakoyenikan, 2022
22. Alayande & Olufemi, 2023; Sofiullahi, 2024
23. Hokkanen, 2025; Lush, 2022; Mugadzaweta, 2025; Mukasa, 2024
24. Egwu & Saint, 2024
25. Egwu & Saint, 2024
26. Egwu & Saint, 2024
27. Egwu & Saint, 2024
28. Altay & Gilardi, 2024
29. Toff & Simon, 2024
30. Wang et al., 2025
31. Cheong et al., 2025
32. Altay & Gilardi, 2024; Fletcher & Kleis Nielsen, 2024; Ross Arguedas, 2024
33. Bien-Aimé et al., 2025; Jia et al., 2024
34. Mattis et al., 2025; Toff & Simon, 2024
35. Lyu et al., 2024; van der Schyff et al., 2023; Zhu & Zhang, 2024
36. Barchas-Lichtenstein et al., 2025
37. Schell, 2024
38. Wang & Huang, 2024
39. Gilardi et al., 2025; Jia et al., 2024
40. Fletcher & Kleis Nielsen, 2024; Mitchell et al., 2025
41. Bien-Aimé et al., 2025; Jia et al., 2024; Wang & Huang, 2024
42. Fletcher & Kleis Nielsen, 2024; Mitchell et al., 2025; Thomson et al., 2024
43. Toff & Simon, 2024
44. Misri et al., n.d.; Morosoli et al., n.d.; Sánchez-García et al., 2025
45. Anderson et al., 2012; Takahashi & Tandoc, 2016; Yang et al., 2023
46. Ananny, 2024; Brause et al., 2023; Choi, 2024; Moran & Shaikh, 2022; Parratt-Fernández et al., 2024
47. Hall & Wilmot, 2025a
48. Hall & Wilmot, 2025b; Höppner, 2025
49. Aikulola, 2025; Institute of Development Studies, 2023
50. The Cable, 2023a, 2023b; Kabir & Adebajo, 2023
51. Bunz & Braghieri, 2022; Canavilhas & Essenfelder, 2022
52. Brennen et al., 2022; Ji et al., 2024; Kuai, 2025
53. Lammar et al., 2025
54. See also Magalhães & Smit, 2025
55. Brantner & Saurwein, 2021; Canavilhas & Essenfelder, 2022
56. Mohammed et al., 2024; Moriniello et al., 2024
57. Canavilhas & Essenfelder, 2022
58. Nguyen & Hekman, 2024
59. Media Bias/Fact Check, 2025
60. Allaham et al., 2025
61. Brantner & Saurwein, 2021; Cools et al., 2022; Korneeva et al., 2023; Nguyen, 2023; Nguyen & Hekman, 2024; Valderrama et al., 2025
62. Bartholomew & Mehta, 2023; Valderrama et al., 2025
63. Allaham et al., 2025; Brantner & Saurwein, 2021; Choi, 2024; Korneeva et al., 2023; Nguyen, 2023; Nguyen & Hekman, 2024; Parratt-Fernández et al., 2024; Radcliffe, 2025
64. Ananny, 2024; Brennen et al., 2022; Canavilhas & Essenfelder, 2022; Mohammed et al., 2024
65. Lammar et al., 2025; Magalhães & Smit, 2025
66. Associated Press, 2024
67. Epstein et al., 2023