Journalists & Online Abuse

How can we better protect the press from online harassment and abuse?


TL;DR

Scope and severity of online abuse: Most journalists report experiencing some form of online harassment or abuse. This includes threats, gendered or sexualized harassment, doxxing, hate speech, or coordinated harassment (“mob censorship”) campaigns. The harms are especially acute for women and journalists from minority or marginalized backgrounds.

Real consequences for journalists and journalism: Online abuse causes psychological distress, fear and anxiety, and can lead journalists to self-censor or change how they report or what topics they cover. In some cases, people leave the profession because the abuse becomes unsustainable.

Gaps in support from newsrooms and platforms: Many news organizations lack adequate policies, protocols, or resources to protect journalists facing online abuse. Journalists often feel they must handle abuse alone, and efforts to report, document and respond to abuse tend to fall short.

Complex legal and regulatory landscape: Laws addressing online harassment and abuse exist in many places, but protections specifically targeting journalists are limited. Enforcement is difficult, especially across borders, for content from fake or anonymous accounts, and for material that is spread across platforms. There is also tension between regulating abuse and preserving freedom of speech.

Possible mitigation strategies and areas needing more research: Protection strategies include improving newsroom policies, peer support among journalists, digital tools for filtering or reporting abusive content, better transparency from platforms, and legislative reform. But more evidence is needed on which interventions work best, particularly for vulnerable groups and across different cultural and legal contexts.


CNTI’s Assessment

Journalists increasingly face online abuse, a serious threat that can cause them mental and physical harm and undermine the integrity of the information ecosystem and democratic values. While eliminating this problem entirely is unlikely, a multi-faceted approach can significantly mitigate it and better protect the press.

Policy deliberation: Creating effective policies requires a balance. While the goal of new legislation is to protect individuals from online harassment, it must also respect fundamental rights like freedom of speech. Journalists should be consulted on cyberbullying and online content moderation policies. Developing and implementing these frameworks is a critical step in providing journalists with stronger laws to defend themselves against abuse. 

Professional support: News organizations can play a vital role in preparing journalists to recognize, prevent and respond to online abuse through several means, including psychological support.

Governance: Moderating abusive content is complex because of the large number of actors committing online abuse and the protections for freedom of expression that exist in many countries. It will be important for platforms to moderate abusive content without removing lawful speech. Additionally, content reporting mechanisms must be user-friendly, and abusive content must be addressed in a timely manner.

The Issue

Technology has created new opportunities for journalism, making it easier for diverse voices to be heard and for journalists to connect with the public. However, these advancements have brought new challenges, including a rise in online harassment and abuse towards journalists and news organizations. Some harassment campaigns are led by individuals; others are orchestrated or endorsed by governments.

Online harassment is generally understood as using technology to bully, threaten or aggressively target someone. It can cause journalists significant psychological and emotional distress and can be linked to real-life violence. Such abuse can also lead journalists to self-censor their work or distance themselves from their audience to avoid further abuse. One in three journalists in CNTI’s recent survey regularly face serious risks, including online abuse. Other data show that online harassment disproportionately affects women journalists and journalists of color. 

The increases in online abuse are prompting responses from legislative bodies, tech companies and civil society organizations. Some countries, like Australia and the United Kingdom, have passed legislation to address online safety by defining abusive content and requiring more oversight of online service providers. In the U.S., legislation has been repeatedly introduced to modify Section 230 of the Communications Decency Act, which currently protects online platforms from liability for user-generated content. Finding the appropriate level of content moderation is difficult. While some argue for stricter content moderation to curb abuse, others worry this could threaten freedom of expression.

To help address this issue, tech companies and civil society organizations are offering journalists tools like AI software to filter harmful content, cyber toolkits and field manuals to improve online safety.

Despite these efforts, there is a consensus that these non-legislative actions and tools are insufficient. Journalists facing high risks report inadequate protections and feel that their newsrooms do not support their needs. This has led many to deal with online abuse on their own. While methods like peer support have shown promise in mitigating harm and building resilience, further action is necessary to protect journalists from online abuse and to support those who have experienced it.

What Makes It Complex

Any policy or standard for moderating content and mitigating online harassment must also protect freedom of expression. Moderating online abuse is a significant challenge because of the tension between free speech protections and the difficulty of defining where expression becomes harassment. This becomes more complex across different types of digital communication, depending on whether the communication is public or private and whether moderation occurs before or after an individual receives it.

Countries and regions like Australia, the United Kingdom and the European Union have begun to formalize content moderation requirements and methods. Many implementation decisions rest with social media companies, though some laws detail online providers’ reporting requirements and the authority of various government agencies to handle online abuse. With press freedoms declining and government censorship increasing globally, there is concern that state actors could exploit content moderation for censorship. The balance of decision-making power is both complicated and important: too much power in the hands of any single entity, whether a technology company, government entity or other powerful institution, could be used for information control if critical safeguards and checks on power are not in place.

Technology companies have weakened their content moderation protocols and teams, presenting novel challenges and concerns. Recent layoffs across the technology industry have significantly impacted teams dedicated to combating online abuse and disinformation. Throughout 2025, platforms including Meta and TikTok have continued to cut staff and programs in this space.

These staff cuts in trust and safety programs coincide with an increase in online abuse, misinformation and harmful content. While data about the effectiveness of trust and safety programs is limited, their goal of preventing abuse remains important, though platforms may need to articulate these goals, and how they are balanced against freedom of expression, more clearly.

While not aimed directly at journalists, these trends are taking place as legislative efforts worldwide are pushing for stricter online safety regulations. Tech companies now face substantial civil penalties for non-compliance with laws like Australia’s Online Safety Act and the EU’s Digital Services Act, highlighting the critical role trust and safety teams play in protecting individuals and journalists from online harm.

The online space has enabled and encouraged journalists to engage and make direct connections with the public, which, while valuable, also increases the potential for online abuse. The digital environment encourages “reciprocal journalism,” in which journalists and the public learn from each other, which can increase trust and engagement in the news: journalists provide factual information to the public, and individuals share their experiences, thoughts and reactions with journalists.

At the same time, this closer connection creates more access points for potential online abuse. The rise of online harassment has significantly influenced journalists’ perceptions of their audiences. Those subjected to high levels of abuse tend to express negative sentiments towards their readership, adopting defensive mental barriers and emotional boundaries. As a result, they may withdraw from online platforms and self-censor their content.

In this delicate balance, the promise of reciprocal journalism can be overshadowed by the threat of online abuse, undermining the symbiotic relationship between journalists and their audiences. This situation is exacerbated by the disconnect between those making the policies on online abuse and those experiencing the abuse. 

Newsrooms do not always support journalists experiencing harassment, creating mistrust within newsrooms. Financial constraints and organizational culture often prevent newsrooms from providing support, such as mental healthcare coverage, to employees. As a result, journalists feel they must deal with online abuse alone, and many have voiced a lack of trust in their newsrooms’ capacity and willingness to assist them.

News organizations can support journalists by developing comprehensive policies and protocols for online abuse and safety. For example, a common framework for reporting, recording and reviewing online abuse can help an organization understand the campaigns targeting its journalists.

Another way to aid journalists experiencing online harassment and abuse is through peer support. While it does not prevent online abuse, encouraging journalists to connect and discuss their shared experiences can alleviate some of the detrimental effects of online abuse.

End-to-end encryption complicates the shared responsibility of governments and technology companies for online content moderation. End-to-end encryption (E2EE) services, such as Signal, Telegram and WhatsApp, can help shield journalists from intrusive surveillance, and many journalists frequently use these services to communicate with each other and with sensitive sources. Some countries’ policies, such as the TAKE IT DOWN Act in the U.S., aim to address online abuse but threaten the integrity of E2EE services.

E2EE moderation is also complicated because laws regarding online privacy form a global patchwork, making content moderation across geographic contexts convoluted. Governments and law enforcement agencies have even pressured technology companies to provide access to private E2EE messages. To address legitimate concerns without overmoderating lawful content, technology companies and governments will need to strike a balance that protects individual privacy online, including E2EE, while also mitigating online abuse.

State of Research

Journalists around the world are facing a growing tide of online abuse, and research reveals important trends and impacts. While any journalist can be a target, studies show that women and journalists of color are disproportionately affected, experiencing more intense and overt harassment. This abuse has serious consequences, including psychological distress, self-censorship and leaving the industry, threatening the existence of a free, independent and diverse press. 

Research shows the need for more support from news organizations. Many journalists feel their employers expect them to be active on social media platforms but do not provide adequate policies or support to protect them from harassment. This can lead to a perception that the organization prioritizes its reputation over the well-being of its staff. 

The research also shows that these attacks are often organized campaigns of “mob censorship.” In these scenarios, political leaders or other public figures make disparaging remarks about the press, which then incites groups of followers to target individual journalists with the goal of silencing them. 

To combat this, there are several promising interventions. Newsrooms can develop clear policies and protocols to address online abuse, provide support services for their staff and freelance journalists, and educate themselves about these coordinated campaigns. Peer support groups have emerged as an effective tool, offering a sense of community and shared understanding for journalists who are experiencing similar challenges. 

Continued research is needed to explore how technology can be used to mitigate abuse, what other support systems can be implemented, and how technology companies can be more transparent and accountable.

Notable Studies

State of Legislation

A 2023 report by the World Bank reveals that of the 190 countries examined, 58 had cyber harassment laws, and 22 of those 58 had legislation related to cyber sexual harassment. This leaves significant gaps in protection from online abuse for journalists. While some of these laws can cover journalists by extension, very few focus specifically on the online abuse of journalists.

Countries have also been developing comprehensive legislation regarding online safety at large. This type of legislation is broader than cyberbullying legislation in that it focuses on many online harms (e.g., exploitation, data privacy and protection, terrorism). The European Union’s Digital Services Act (passed 2022, enacted 2024) and the United Kingdom’s Online Safety Act (2023) both have provisions for reducing online harms. Australia and the United Kingdom signed a joint memorandum (2024) to collaborate on creating a safer online experience for all users. 

The United Kingdom’s Online Safety Act and the U.S.’ TAKE IT DOWN Act (2025) both affect end-to-end encryption (E2EE) in online services. The United Kingdom’s act includes provisions for online service providers to monitor end-to-end encrypted services for online abuse (e.g., child exploitation, terrorism). The U.S.’ act does not provide exceptions for E2EE services in its mandates that online services remove flagged content within 48 hours and identify and remove known copies. These provisions have raised concerns about government overreach and invasions of privacy.

Beyond legal protections, technology companies and social media platforms are also offering journalists tools to limit online abuse, such as filters that decrease the likelihood of encountering abusive content and streamlined processes for reporting it to platforms. PEN America’s report on fixing reporting processes on online platforms proposes revisions that could make reporting abusive online content more effective.

Notable Legislation