Washington, DC – Countries around the world are introducing new laws and regulations to shape how artificial intelligence (AI) can be used. But until now, no one has aggregated the accumulated impacts these policies might have on journalism and the digital information environment. Today, a new report from the Center for News, Technology & Innovation (CNTI) titled, “Journalism’s New Frontier: An Analysis of Global AI Policy Proposals and Their Impacts on Journalism,” examines 188 national and regional AI strategies, laws and policies that collectively cover more than 99 countries, and breaks them down into seven policy components that are most likely to impact journalism.
The seven topic areas that CNTI’s researchers focused on were: freedom of speech and expression; manipulated or synthetic content; algorithmic discrimination and bias; intellectual property and copyright; transparency and accountability; data protection and privacy; and public information and awareness about AI.
The in-depth analysis, written by CNTI staffers Kayla Goodson, Jay Barchas-Lichtenstein, Emily Wright, Samuel Jens, and Utsav Gandhi, found that in the broad landscape of AI laws related to journalism:
- Transparency and data protection are the two topic areas (among the seven studied) that come up most frequently; freedom of expression comes up least. Of the 188 AI strategies, laws and policies we reviewed, 124 addressed transparency and accountability of AI; 107 addressed data protection and privacy; 92 addressed algorithmic discrimination and bias; 76 addressed public information and awareness about AI; 64 addressed manipulated or synthetic content; 49 addressed intellectual property and copyright; and 19 addressed freedom of speech and expression. (Multiple topics can appear in any one document.) In all seven regions, either transparency or data protection was the most common topic, likely a sign of the challenge of holding opaque AI systems accountable and the weight that privacy issues have carried in recent technology governance.
- While policymakers regularly express concern about AI’s impacts on the information environment, references to freedom of speech and expression are infrequent across regions. Indeed, there were no mentions of “freedom of speech” in Middle Eastern or North African countries (consistent with their low press freedom rankings). When freedom of speech and expression are recognized, the recognition generally provides safeguards for journalism against government censorship, but only if the methods to uphold it are clear. Several countries ban AI systems that do not respect freedom of expression, but without specifying what that means. For example, Argentina’s Bill 2573-S-2024 prohibits the use of AI “which violates fundamental human rights such as privacy, freedom of expression, equality or human dignity.” Other countries recognize trade-offs: to ensure that algorithmic content recommendation does not violate freedom of expression, Ecuador’s Organic Law for the Regulation & Promotion of AI in Ecuador requires clear terms and conditions, human supervision, accountability reports and an appeals process.
- Policies don’t have to mention journalism by name to impact it. Twenty of the 188 documents explicitly mentioned “journalist,” “journalism,” “news,” “media” or “news media.” This is by no means a suggestion that more documents should directly name journalism: once governments define “journalism” or “news,” those definitions can be weaponized against the news media. Instead, it is important that policymakers be aware of and think through potential impacts both when journalism and news are directly named and when they are not. When these terms do appear, the orientation and degree of focus vary. For example:
- Four strategies (from Algeria, Egypt, Lesotho and Sri Lanka) emphasize the importance of news media as a communications channel for public awareness campaigns, with varying assumptions about editorial independence from government.
- Five bills and laws exempt journalism from specific provisions: China’s Interim Measures on Generative AI state that other journalism laws supersede them; three U.S. bills would exempt news media from specific restrictions on deepfakes in the context of both reporting and advertisements; and Brazil’s proposed law grants exemptions from some forms of copyright violation for journalistic or research purposes.
- Governance of AI is increasingly complex and varies dramatically by country, by industry and by the stakeholders involved. The differences emerge in legal authority and binding nature, in scope, and in how each measure sits alongside pre-existing laws. This level of variance makes it especially challenging to foresee how these policies would function together in our global digital news environment. For example, in the U.S. there has been more state-level activity, focused largely on labeling and disclosing AI-generated content, which is now being challenged by the federal government, while in Canada the Artificial Intelligence and Data Act was not enacted, leading the country to apply existing laws to AI systems. Meanwhile, the European Union’s Artificial Intelligence Act lays out strict rules that are human rights- and harm-based and will directly impact the daily work of journalists. However, proposed changes are being considered that would simplify implementation of the Act.
The report also includes three recommendations for policymakers:
- In AI proposals that address manipulated content, it is important that policymakers work towards methods that protect certain journalistic uses in ways that do not enable government censorship or determination of who is or is not a journalist.
- Bias audits and transparency measures are best implemented before a tool is deployed.
- Policymakers should ensure that AI policy working groups include journalism producers, product teams and engineers, alongside AI technologists, researchers, civil society and other relevant stakeholders.
“We felt the best way to help guide policymakers and others introducing new laws and regulations was to examine what’s in place now to see how those policies are affecting journalism and the digital information space – even, and primarily, when policies do not name them directly,” said Amy Mitchell, the Founding Executive Director of CNTI. “We hope this research will aid technologists, journalists and policymakers in fostering new AI rules and strategies that won’t hamper or harm the work of a free press and a vibrant digital information environment.”
The full AI policy report is available here, with a complete list of and links to all of the AI policy proposals included. Please contact CNTI at press@cnti.org for more information or to schedule an interview with our researchers.