Topic
AI & Journalism

AI provides new reporting, investigative and operational tools for journalism. It also exacerbates challenges like disinformation, copyright abuse and job insecurity. CNTI’s work examines both the opportunities and challenges AI brings to the field in order to shape newsroom leaders’ and policymakers’ understanding.
Focus Areas
-
Artificial Intelligence in Journalism
—
How do we enable the benefits and manage the harms of artificial intelligence in journalism?
The impact of AI is massive and widespread, and the landscape is changing rapidly. We are already seeing impacts on news reporting, circulation and consumption. Taking advantage of the benefits while limiting the harms will require a careful balancing act in our deeply complex information environment. For newsrooms, generative AI tools offer gains in productivity and innovation. At the same time, they risk introducing inaccuracies, raising ethical issues and undermining public trust.
Policy deliberation: Legislation will need to offer clear and consistent definitions of AI categories, grapple with the repercussions of AI-generated content for copyright and civil liberties, and provide accountability for violations. The failure of Canada’s Artificial Intelligence and Data Act (AIDA) to pass also suggests that deliberations will need to include avenues for meaningful public participation.
Public understanding: As newsrooms implement AI, they need to remember that while communicating about how they are using AI is important, transparency alone is not enough. The public largely lacks a nuanced understanding of journalistic practices, yet needs that context to make sense of AI. That means transparency initiatives must be broader than initially conceived and include information about human journalists’ work.
Governance: Without policy guidance, technology companies’ own decisions will continue to dictate how AI is developed, implemented and used. Further downstream, publishers will also be responsible for establishing transparent, ethical guidelines for AI use and education about it. Forward-thinking collaboration among policymakers, publishers, technology developers and other stakeholders is critical to strike the right balance and support public access to information.
Explore Focus Area Artificial Intelligence in Journalism
-
Enhancing Algorithmic Transparency
—
How can public policy enhance algorithmic transparency and accountability while protecting against political or commercial manipulation?
Digital platforms have become central to how people around the world find and share news and information. Currently, each platform operates under its own rules and conventions, including what content is shown and prioritized by algorithmic infrastructures and internal company policies. Establishing legal and organizational policy to promote algorithmic transparency is one critical step toward greater accountability for often-opaque platform practices around content selection and moderation. Various stakeholders – including policymakers, researchers, journalists and the public – often have different purposes and uses for transparency, and these need to be thought through so that transparency serves those needs while protecting against the risks of political or commercial manipulation. These considerations also carry through to who is designated to regulate transparency requirements.
Explore Focus Area Enhancing Algorithmic Transparency
-
Synthetic Media & Deepfakes
—
How do we protect societies from synthetic media and “deepfakes”?
Deepfakes, a form of synthetic media that uses artificial intelligence (AI) to create realistic depictions of people and events, have proliferated in recent years. There are many open questions about how this content affects journalists, fact-based news and mis- and disinformation. In addressing these concerns, it is important to consider both freedom of expression and safety. Relatedly, policies targeting deepfakes must be clear about which types of content qualify as deepfakes.
Detection technologies and provenance approaches are being developed rapidly, but it is unlikely they can prevent all potential harms from AI-altered content. Additional research should consider (1) what effects deepfakes have on journalism, (2) how content labeling addresses concerns about deepfakes (and which types of labels are most effective), (3) what international standards should be applied to content to confirm its authenticity and (4) how best to teach the public to identify synthetic media.
Explore Focus Area Synthetic Media & Deepfakes
Continue Reading
-

What It Means to Do Journalism in the Age of AI: Journalist Views on Safety, Technology and Government (Spanish)
50% report having experienced government overreach in the last year, in a context where technologies are transforming information ecosystems and press freedom faces growing legal, political and economic threats.
-

What the Public Wants from Journalism in the Age of AI: A Four-Country Survey (Spanish)
Three-quarters or more of respondents value the role of journalism; more than 56% say “everyday people” can produce journalism
-

Focus Group Insights #2: Perceptions of Artificial Intelligence Use in News and Journalism
Generative AI tools like ChatGPT are transforming journalism, as newsrooms adopt automation to boost efficiency, expand readership, and adapt to market pressures.
-

A Window into AI and Journalism in Africa: Perspectives from Journalists and the South African Public
African journalists and the South African public express optimism about the impacts of technology on journalism, including AI
-

Integrating Emerging Technology in Newsrooms Must Preserve Journalistic Agency
In interviews with more than 70 journalism professionals, Emilia Ruzicka found that daily journalists are being left out of conversations about the future of AI in the field.
-
CNTI’s Global AI & Journalism Research Working Group
CNTI’s AI & Journalism Research Working Group consists of cross-industry members from around the world, bringing research, journalism and technology experience to the discussions.
-

What the Public Wants from Journalism in the Age of AI: A Four-Country Survey
Three-quarters or more of respondents value journalism’s role; more than 56% say “everyday people” can produce it
-

What It Means to Do Journalism in the Age of AI: Journalist Views on Safety, Technology and Government
50% report experiencing government overreach in the last year
-

If, When and How to Communicate Journalistic Uses of AI to the Public
Conclusions of a Day-Long Discussion among Global Cross-Industry Experts
-

Watermarks are Just One of Many Tools Needed for Effective Use of AI in News
A Global Cross-Industry Group of Experts Discusses Challenges and Opportunities in an AI-Incorporated News Landscape
-
Ethical AI in Journalism: A Discerning Eye in Times of War and Elections
The rise of AI in journalism brings both opportunities for innovation and challenges around disinformation. Lessons from Ukraine’s war underscore the importance of ethical AI use in conflict zones. As the 2024 elections near, safeguarding media integrity is paramount.
-

Defining AI in News
Leaders in Tech, Media, Research and Policy Lay Groundwork for Global Solutions