AI policy in the region
Countries in Sub-Saharan Africa view AI technology as a way to develop their economies and contribute to the United Nations’ sustainable development goals, but they also tend to acknowledge that the benefits must be weighed against the risks AI can pose. Many countries across the region also recognize that low internet penetration, low digital literacy and limited access to computing resources can make effective AI implementation difficult. As a result, most of the reviewed proposals in the region take both an innovation-based and a harm-based approach to AI regulation. These strategies encourage their countries to become regional leaders in AI while emphasizing the need to keep humans at the center of this approach. In some circumstances, such as in Kenya, Senegal and Ethiopia, these AI proposals exist alongside other digital development strategies that aim to increase the country’s digital infrastructure and economy. Digital development strategies are not the focus of this analysis, but they are contributing to the development of the ecosystem in which AI and journalism function.
The African Union’s 2024 Continental AI Strategy is a strong representation of how the region is approaching AI and what it will mean for journalism. This strategy emphasizes AI’s impact on information integrity and media, and it encourages member states to cooperate with each other and take action to mitigate risks in these fields. This strategy was endorsed by the African Union Executive Council, but it is non-binding at the country level, leaving implementation and enforcement to national and regional regulatory bodies. Several countries’ AI strategies and policies are aligned with the African Union’s strategy, such as those of Ethiopia and Lesotho. Other countries, including Nigeria and Zambia, also take inspiration from UNESCO, the OECD and the U.S. National Institute of Standards and Technology Framework for AI Risk Management.
Between January 2022 and June 2025, 14 countries in Sub-Saharan Africa adopted AI strategies or policies. Our review did not uncover any AI legislation adopted during this period, but Namibia is reportedly drafting an AI bill.
By the numbers
Of the 17 proposals we reviewed in Sub-Saharan Africa, three specifically mentioned journalism; none addressed freedom of speech or expression; seven addressed manipulated or synthetic content; 11 addressed algorithmic discrimination and bias; three addressed intellectual property and copyright; 11 addressed transparency and accountability; 13 addressed data protection and privacy; and eight addressed public information and awareness.

Impacts on journalism and a vibrant digital information ecosystem
Each of these seven policy areas has the potential to impact journalism in ways that reverberate across the continent, given the cross-border nature of AI. Proposals on public information and awareness could increase journalists’ own knowledge of AI, if the proposed skills training includes members of the press. AI-generated disinformation campaigns are degrading the information space and undermining trust in traditional news organizations; legal efforts to counter AI-manipulated content can help protect journalists, but such efforts have also been misused to target journalists in the region.
Proposals that address algorithmic bias and discrimination tend to go hand-in-hand with the journalistic principle of objectivity, but they can also pose a challenge to newsrooms’ efforts to use AI to personalize content. Intellectual property and copyright laws across the region vary widely, and few of the reviewed proposals mention this topic at all. This poses a difficult challenge for news organizations who are losing traffic to AI tools and whose content is being scraped without remuneration.
Transparency and accountability requirements would generally benefit reporters by providing them with information that would allow them to better understand, and thus better report on, the AI systems that are impacting their local communities. Data protection and privacy regulations are generally best practice for newsrooms, but, when misused, these regulations could stifle public interest reporting. Finally, the omission of freedom of speech and expression from all of the reviewed policies is indicative of declining press freedom in many Sub-Saharan African countries.
Freedom of speech or expression
AI Summary: None of the reviewed AI policies in Sub-Saharan Africa mention freedom of speech or expression. Since freedom of speech is linked to a free press, the absence of this topic, along with low freedom of expression ratings in the region, signals a negative environment for journalism.
Approaches
None of the reviewed policies specifically mention freedom of speech or expression.
Impacts on journalism
The lack of inclusion of freedom of speech and expression in the reviewed AI policies does not necessarily signify that countries in Africa do or do not support this right. Member states of the African Union could embrace the African Commission’s 2019 Declaration of Principles on Freedom of Expression and Access to Information in Africa; the principles include non-interference with freedom of opinion and the protection of journalists. The Declaration is not enforceable, but it does establish a normative framework that member states of the African Union should seek to uphold.
It is worth noting, however, that, according to Article 19’s 2025 Global Expression Report, no countries in Sub-Saharan Africa rank as “open,” and nearly a quarter of the region’s population lives in a country “in crisis.” Furthermore, attempts to regulate social media platforms in several countries in Sub-Saharan Africa, while often well intentioned, have sometimes resulted in internet blackouts and platform suspensions, which can have negative impacts on freedom of speech, as well as on journalists’ ability to share the news and audiences’ ability to access it. Because freedom of speech and expression creates the environment in which freedom of press can exist, the lack of consideration for freedom of speech paired with the low rankings across the continent can signify a negative environment for press conditions in the AI era. In fact, Reporters Without Borders notes that “press freedom is experiencing a worrying decline in many African nations.”
Manipulated or synthetic content
AI Summary: While the African Union recommends that countries address AI-driven disinformation with education and new laws, Sub-Saharan African nations have so far taken only a few, often vague, steps to do so. AI creates challenges for journalists, who are needed to fact-check content for an increasingly distrustful public, even as new laws meant to address disinformation could be used against them.
Approaches
One of the African Union’s Continental AI Strategy’s areas of focus is information integrity, media literacy and information literacy. The strategy explains that AI is magnifying mis- and disinformation, and, as such, it recommends that African countries implement media and information literacy programs in schools, train government officials on these skills, develop legal frameworks to regulate emerging technologies and develop strategies to address the risks posed by AI, such as disinformation and hate speech. So far, countries in Sub-Saharan Africa have only lightly touched on these recommendations in their proposals.
- Implementing media literacy programs. Kenya’s National AI Strategy is the only reviewed proposal that follows up on the African Union’s recommendations by including an objective to “launch a public awareness campaign on AI rights, disinformation, misinformation, protection and safe development while showcasing the benefits of AI.”
- Using AI to detect mis- and disinformation. Both Ethiopia’s National AI Policy and Mauritania’s National AI Strategy 2025-2029 note that AI can be used to help detect and counter foreign propaganda and disinformation campaigns. Neither details how this would be achieved.
- Acknowledging the problem without further context. Ghana’s National AI Strategy and Nigeria’s draft National AI Strategy explain that AI-generated content can be used to spread disinformation and manipulate citizens.
Impacts on journalism
Journalists play an important role in public education about the risks of AI, as well as in fact-checking AI-generated claims and content, such as deepfakes. However, as explained in Kenya’s National AI Strategy, the rise of AI-enabled false narratives can further undermine trust in media and institutions, meaning there might not be a trusting audience receptive to reporters’ fact-checking efforts.
Legal frameworks can help regulate the use of AI to avoid the spread of misleading, manipulated content, but these frameworks are still evolving across the region, leaving gaps in mitigating these challenges. However, because we have seen “fake news laws” misused to target journalists in the region, such as in Ethiopia, it is important that any AI-related legislation attempting to address disinformation also include provisions to protect journalists and their ability to report.
Algorithmic discrimination and bias
AI Summary: Sub-Saharan African countries are prioritizing AI strategies that focus on addressing bias and making AI systems more inclusive by pushing for the development of AI models in local languages and by prioritizing the representation of diverse populations. To avoid undermining public trust, journalists must be aware of biases in the AI systems they use, while also navigating new regulations that may complicate AI use for personalizing content.
Approaches
AI strategies in the region pay significant attention to algorithmic discrimination and bias. This emphasis is unsurprising given that the majority of existing AI models are trained primarily on English-language data from Global North contexts, while an estimated 1,500 to 3,000 languages are spoken across the region, meaning current, popular AI models may not match users’ experiences or meet their needs in the region.
The African Union’s Continental Strategy sets the stage for regional attempts to address algorithmic discrimination and bias with its focus on building safe, inclusive AI systems by supporting the development of AI in local languages and developing and implementing AI that is inclusive and beneficial, with an emphasis on reaching women and girls and vulnerable populations. Proposals across Sub-Saharan Africa follow suit and generally advocate for three approaches.
- Developing AI in local languages and with an emphasis on local, cultural contexts. For example, Côte d’Ivoire’s National Strategy on AI calls for the development of a large language model in local languages to better promote and preserve local traditions and knowledge. Kenya’s National AI Strategy is guided by a principle of cultural preservation and contextualization, which states that “AI systems will be developed that are enriched with Kenyan cultural values and that preserve and promote the nation’s cultural heritage and ensure contextual relevance to local needs and contexts.” Kenya’s strategy further explains that to accomplish this, the country is encouraging research aimed at developing AI systems that can interact in various local languages, which, according to the strategy, would democratize access to AI, make AI more relevant to the Kenyan population and contribute to the preservation of linguistic diversity. Similarly, Nigeria’s National AI Strategy includes an objective on driving locally led AI innovation to “replicate Nigeria’s social context and cultural diversity with AI tools and solutions” to drive accessibility across sectors.
- Prioritizing inclusion at all stages of the AI lifecycle to ensure AI benefits underrepresented and vulnerable populations. Côte d’Ivoire’s National Strategy on AI includes a strategic principle on societal inclusion and equity, which lays out the need to ensure AI is accessible across geographic and economic boundaries and to all people — especially women, youth and vulnerable groups. Nigeria’s National AI Strategy notes that it is essential that AI innovation is accessible and leaves no one behind. It continues to encourage Nigerians to “actively promote diversity and representation in AI research, development, and deployment” and ensure “that the benefits of AI innovation are shared equitably among all members of society, including marginalised and vulnerable populations.” Ghana’s National AI Strategy notes the need to improve internet and digital infrastructure, particularly in rural areas, to create an environment for inclusive AI in the country.
- Calling for algorithmic audits and other tests to identify and correct bias in AI systems. Lesotho’s draft Artificial Intelligence Policy and Implementation Plan cites the need to “develop initiatives to identify and reduce biases in AI models, particularly to prevent discrimination and promote equity,” including by mandating regular bias audits that would presumably be reported to the overseeing government entity. South Africa’s National AI Policy Framework also calls for the development of methods to identify and mitigate bias in AI systems, but it does not specify what these methods would be. Côte d’Ivoire’s National Strategy on AI calls for the imposition of algorithmic audits on AI systems to ensure they are not biased, though how these audits would be reviewed, or by whom, is not specified. It also encourages diversity in the datasets used to train AI to prevent bias.
Impacts on journalism
News organizations can play an important role in the development of localized AI models if they so choose. Because media outlets across the region often operate in local languages — sometimes in more than one local language — they have considerable amounts of video, audio and text content that can be used to train localized AI models. However, if news content is used to train AI models, it should be with the original copyright holder’s consent, and forms of remuneration and citation should be considered.
Journalists and newsrooms throughout the region will have to be cognizant of the biases in the AI systems they develop and use — both for legal reasons and to avoid exacerbating or creating societal tensions. This also aligns with journalism standards since bias undermines the journalistic principle of objectivity. If journalists ignore these biases, they risk undermining public trust in both their reporting and in the technology they use.
On the other hand, national regulations can pose a challenge to newsrooms’ efforts to use more sensitive data in AI to personalize content if the laws do not include an exception for journalistic uses. News outlets will have to walk a fine line between personalization and bias mitigation. To explain how they do this, newsrooms could consider publishing their own AI ethics policies, which should outline how they are using AI technology, mitigating biases and discrimination, and implementing human oversight of all AI-produced content.
Intellectual property and copyright
AI Summary: Ghana and Nigeria are the only two countries in Sub-Saharan Africa that have so far focused on using existing intellectual property laws to protect the work of AI developers and promote innovation. The lack of clear copyright laws across the region makes it difficult to enforce rules for AI-generated content and makes news organizations’ remuneration efforts more complicated.
Approaches
While the African Union’s Continental AI Strategy notes that intellectual property regimes are essential to nurturing AI start-ups, so far only two countries in Sub-Saharan Africa emphasize intellectual property and copyright in their own AI strategies. Both countries take a similar approach.
- Clarifying how existing laws apply to AI. Ghana’s National AI Strategy recommends that the government “review and clarify laws for copyright, patents and intellectual property.” Nigeria’s draft National AI Strategy notes that applying the country’s existing intellectual property laws, including the Copyright Act and the Trademarks Act, to AI is “critical to promoting innovation and protecting the intellectual property rights of developers.”
Impacts on journalism
Copyright laws vary widely across the region, with some laws still focused on the analog era, while others have been updated for the digital era. Furthermore, many countries across the region face difficulties with piracy and the enforcement of intellectual property laws. The cross-border nature of AI poses a difficult challenge for the region; even if one country comes to a copyright decision about a piece of content created with AI or the use of content to train AI, another country might disagree. As countries continue to develop and update intellectual property laws, they should also consider responsibility: When a journalist, or anyone else, uses AI to create content, who owns the copyright of that content, and who is responsible for its potential impacts?
News organizations in several countries around the world, including the United States, Canada and Japan, have sued AI companies over the unauthorized use of their content. News organizations in the United States, Germany and France, among others, have also signed deals with AI companies, allowing the companies to use news content to respond to user queries (with a link back to the news site). So far, no news organizations in Sub-Saharan Africa have sued or signed deals with AI companies, although South Africa has proposed mandatory content remuneration rules. Experts posit that AI companies are not approaching news outlets in Sub-Saharan Africa for many reasons, one of which is the lack of clear copyright regimes. Stronger regimes, they say, could provide protection for media houses and force AI companies to the table. Furthermore, news outlets that operate in local languages could provide significant swathes of training content for companies building localized AI models — though these companies should still seek permission and establish deals with news organizations. Such remuneration deals could be an important financial lifeline for journalists in the region whose work is being scraped by AI tools that are under no legal obligation to link back to the original news source, diverting traffic from the original outlet.
Transparency and accountability
AI Summary: In their national strategies, several Sub-Saharan African countries are pushing for transparency and accountability in AI by creating clear rules for its use and holding developers responsible for how their AI systems work. The new rules would help journalists better report on AI, but they would also require newsrooms to be more transparent about how they use AI and would hold them responsible for the results.
Approaches
Several strategies in Sub-Saharan Africa are guided by the ideals of transparency and accountability, noting that they are essential for building public trust in AI. Country strategies recommend increasing transparency and accountability in three main ways.
- Establishing clear national standards and governance principles for AI. Benin’s National Artificial Intelligence and Big Data Strategy recommends updating the country’s Code of Digital Affairs to “formalize and institute impact analyses and monitoring of AI solutions throughout their lifecycle.” Rwanda’s National AI Policy states, “Trust is critical to public confidence and acceptance of AI. By strengthening the capacity of regulatory authorities to understand and regulate AI aligned with emerging global standards and best practices, we will build transparency and trust with the public.”
- Requiring developers to explain how AI systems arrive at their outputs. Both South Africa’s National AI Policy Framework and Nigeria’s National AI Strategy prioritize clarity in the design, development, deployment and decision-making of AI systems. This means developers have to explain how the systems work and what kinds of purposes and biases they hold. These principles would also allow developers and deployers to be held responsible for AI’s ethical use. Côte d’Ivoire’s National AI Strategy goes a step further by providing users with a “right to recourse” when they do not agree with an AI model’s decision.
- Requiring AI’s decisions to be replicable and auditable. One way to do this, according to Nigeria’s National AI Strategy, is to create an open data initiative to foster collaboration between public and private sectors.
Impacts on journalism
Transparency and accountability in AI systems would generally positively impact the field of journalism. Transparency requirements that necessitate explainable AI would allow journalists to better understand, and thus better report on, the AI systems that are impacting their local communities. By pushing for explainable AI, these same provisions would also benefit the general information ecosystem by increasing the trustworthiness of AI responses; explainable AI prioritizes prediction accuracy, traceability and decision understanding after the results are computed, meaning that users will better understand how an AI system arrived at a result.
Newsrooms would also be responsible for meeting these transparency standards. As newsrooms develop and deploy AI, they would be expected to be able to explain exactly how the technology works and when, how and why they are using it. They would also be required to provide human oversight over content produced by AI. Depending on the governance principles in each country, newsrooms may reasonably be held liable for any decisions made by the AI technologies they deploy.
Data protection and privacy
AI Summary: Some Sub-Saharan African countries are addressing data privacy and protection for AI systems by either using current data laws or creating new ones to fit AI’s unique features. Newsrooms need to follow these data privacy rules to build public trust, but policymakers should make sure the laws do not prevent journalists from carrying out public interest reporting.
Approaches
The African Union’s Continental AI Strategy calls on data and computing platforms to develop policies that facilitate sharing of non-personal data, and it recommends regional governments establish frameworks and protocols that align with the African Union’s Data Policy Framework. The Continental AI Strategy also cites the African Union’s Convention on Cybersecurity and Personal Data Protection, or the Malabo Convention, which was adopted in 2014 and entered into effect in 2023, as an important guiding document. A handful of AI strategies in the region acknowledge the importance of data protection and privacy in related, albeit slightly different, ways.
- Applying existing legislation to AI. Nigeria’s National AI Strategy highlights the country’s 2023 Data Protection Act (DPA). The strategy notes that even though the DPA does not specifically address AI, it is still applicable to AI data concepts. For example, AI systems must adhere to the DPA’s principles for data minimization and purpose limitation, and they must protect sensitive data and comply with transparency requirements.
- Creating new legal frameworks for AI. Lesotho’s draft Artificial Intelligence Policy and Implementation Plan recommends that the government adopt a comprehensive data protection law, which, the plan states, could align with the EU’s GDPR. Kenya’s National AI Strategy also has an objective to create a robust data governance framework, which would include a specific data policy and establish an AI taskforce within the proposed Data Governance Office Coordination Committee.
- Combining the two approaches. Zambia’s National AI Strategy 2024-2026 explains that the country has already established the legal framework to safeguard personal data through its data privacy law. It further acknowledges that Zambia has established the office of the Data Commissioner to ensure compliance with the law and corresponding regulations. The strategy then notes that this legal framework could be evolved to “be fit for purpose” to address AI standards. Mauritania’s National AI Strategy calls on the government to implement policies to comply with data protection regulations. Similarly, South Africa, Benin and Ghana highlight their existing governance structures but recognize that these laws should be strengthened to account for AI.
Impacts on journalism
The Malabo Convention is legally binding in the 16 African Union member states that have ratified it; thus, it provides meaningful insight into how countries across Sub-Saharan Africa and North Africa are seeking to handle data protection across borders. However, because it took so long to come into effect, it does not account for AI, and many countries have chosen to adopt more up-to-date data protection laws. Even these newer laws do not always account for AI, so we chose to focus our analysis on AI-specific strategies, policies and laws.
Journalists are already well trained on how to protect the privacy of their sources. As such, complying with new and existing data privacy legislation should, generally, fall within existing newsroom practices. Newsrooms should comply with local data privacy legislation and general best practices to ensure data privacy is built into any AI system they use or deploy. In many cases, newsrooms are choosing to publish their data privacy and AI policies publicly to build transparency and increase public trust.
AI also poses new challenges to journalist and source privacy that are not currently addressed in the region’s approaches. AI can be used to surveil and target journalists and their sources. AI tools can also be used to identify individuals in anonymized data sets, to uncover anonymous authors or sources through language processing and to create a composite image from the blurred face of a source. As policymakers consider how best to apply existing legislation to AI and craft new legislation, they should consider how to protect the data of journalists and their sources from potential harm enabled by AI.
As in other regions, data protection laws can be misused to stifle public interest reporting; as such, policymakers should consider the impact these laws can have on the news industry and consider exceptions within the laws to protect journalists’ ability to carry out investigations.
Public information and awareness
AI Summary: Several African countries are creating plans to increase public understanding of AI through public education campaigns, identifying key people in AI management, and teaching the public and workforce new AI-related skills. These plans can also help journalists by improving their digital skills so they can better understand and report on AI.
Approaches
- Conducting public education campaigns. Kenya’s National Artificial Intelligence (AI) Strategy 2025-2030 highlights a flagship project to launch a public awareness campaign on “AI rights, disinformation, misinformation, protection and safe development while showcasing the benefits of AI” with the goal of increasing the public’s foundational awareness of AI. South Africa’s National AI Policy Framework similarly calls for public awareness campaigns to “educate the public on AI technologies and their implications.” Zambia’s National AI Strategy 2024-2026 lays out a plan to launch AI literacy campaigns to “promote understanding of AI technologies, benefits, risks, and ethical considerations among the public.” Senegal’s National Strategy for the Development of Artificial Intelligence calls on the government to partner with AI technologists, local authorities and civil society to train the public on ways they can use AI to meet local needs.
- Mapping out stakeholders in AI governance. Lesotho’s draft Artificial Intelligence Policy and Implementation Plan lists out the roles of stakeholders in AI governance, including those of the government, regulatory authorities, citizens and the media. The media, it notes, is “crucial in shaping public discourse, promoting transparency, and holding stakeholders accountable.” Under this proposal, the media would be responsible for educating the public about AI and its implications, investigating and reporting on potential misuses or ethical breaches in AI systems, and facilitating dialogue between stakeholders and the public.
- Training the workforce on AI-related skills. Côte d’Ivoire’s National Strategy on AI calls on the government to carry out national awareness campaigns to explain the benefits of AI to public servants and show how they can use the technology to improve public services. In its plan to raise public awareness of AI, Kenya’s National Artificial Intelligence (AI) Strategy 2025-2030 also highlights the need to educate government employees on “ethical, equitable and inclusive AI.” Finally, Nigeria’s draft National AI Strategy includes an objective to “develop a talent pipeline with the necessary knowledge and skills” to increase AI adoption across the country. This includes a plan to develop and implement AI skills development programs across different sectors.
Impacts on journalism
Journalists play an important role in educating the public about new technologies and their role in society. Lesotho’s policy is the clearest in the region regarding journalists’ role in public education about AI. While other countries may intend a role for journalists in their public awareness campaigns, they do not specify one — perhaps this is a nod to journalistic independence, or perhaps it signifies a lack of awareness of journalism’s role.
Training the public and the workforce on AI could increase journalists’ own digital literacy if they are included in these programs. This would enable journalists both to effectively implement AI in their own work and to better understand the technology they are expected to report on.
Conclusion
As countries throughout Sub-Saharan Africa continue to develop and implement AI policies, strategies and regulations, it is important that policymakers think through their impacts on journalism. AI’s impact on journalism does not just affect journalists; it affects the whole of society by either aiding or hindering journalists’ ability to share news with their communities.
As journalists ramp up use of AI for their own work, newsrooms must ensure that reporters have a full understanding of how the technology works, including the benefits and harms posed by AI, to ensure their reporting remains factual. Journalists must also consider how best to explain their use of AI to the public in order to maintain trust. Furthermore, it will be up to journalists to investigate and report on governments’ and companies’ use of AI to uncover their impacts on society, thus making journalists’ AI literacy even more essential. Finally, as AI proposals are implemented across the region, journalists can determine how best to leverage the technology to better deliver on their mission and to use it as a source of revenue.