AI for Sustainability: Building Journalism’s Future

Innovations and insights from across the Western Balkans and Central Europe

Left to right: Craig Forman, Niamh Burns and Veronika Munk (Photo: Kayla Goodson)

This is a joint publication from the Center for News, Technology and Innovation and Thomson Media.

On October 28 and 29, 2025, the Center for News, Technology and Innovation and Thomson Media brought together more than 35 journalists, product specialists and newsroom leaders from the Western Balkans and Central Europe in Sarajevo, Bosnia and Herzegovina, for “Journalism and AI: Building Resilient Newsrooms for the Future” — the second collaboration between the organizations. Over the course of two days, participants shared their struggles and successes integrating AI into their work, took master classes on AI prompt engineering and contemplated the question, “How do we harness AI’s power to build a strong future for our work?”

The answer that emerged was both multifaceted and promising. Journalists in this region are leaning in, figuring out what works for them and their audiences, what does not, and what they can learn from one another, often while managing with small budgets and small staffs.

At one point during the workshop, CNTI Chair Craig Forman explained, “I wrote for the Wall Street Journal when I was a foreign correspondent, 30-ish years ago on the eve of the breakup of Yugoslavia.” He continued, “I bring that up, not to bring any ill will or bad memories, but to say that when I [was reporting in the region], it was unthinkable that we might be here today. Unthinkable. And as you all know, we have to think about the unthinkable, not only in the bad way, but in the good way.”

Craig Forman (Photo: Kayla Goodson)

Forman’s words resonated deeply in a city once synonymous with war reporting but now hosting a discussion about how technology could safeguard truth and foster resilience in newsrooms.

AI Will Not Take Journalists’ Jobs, But Someone Who Uses AI Will

David Caswell, drawing on years of experience developing AI storytelling systems, kicked off the workshop with a powerful and straightforward insight: “We should take AI seriously because the trajectory of improvements has been radical.” He continued, “The effect of [AI investment] will be significant.”

This isn’t about jumping on every technological bandwagon or tacking the latest tool onto one’s current product. It’s about recognizing that structured storytelling models and AI-assisted workflows represent a practical evolution in how newsrooms operate. The question isn’t whether AI will change journalism, but whether journalism will adapt intelligently.

From left to right: David Caswell, Amy Mitchell and Marius Dragomir (Photo: Kayla Goodson)

As Niamh Burns from Enders Analysis said, “AI will undoubtedly change how news is created, distributed, noticed and funded. This is really a moment where you [journalists] should all be thinking about what your value add is.”

These aren’t abstract problems for newsrooms in Sarajevo, Belgrade or Pristina. They’re immediate challenges that require practical solutions.

Sami Kçiku, a project manager at the independent news company Koha Group in Kosovo, shared that his newsroom was initially fearful of AI. Several other participants voiced similar stories of journalists worried they would be replaced by AI and, therefore, reluctant to try out new technology, with generational divides also at play. 

Kçiku offered the simple yet disarming advice that he gave to his team: “You won’t be replaced by AI, but you will be replaced by someone who uses AI.” Koha has since implemented a custom GPT to improve its SEO and social media practices and uses an external AI tool for transcription.

Journalists Must Begin to See Themselves as Innovators

As newsrooms attempt to address the tension between embracing AI and fearing it, Caswell recommended a calm but intentional approach of “innovation, adoption, diffusion.” 

Innovation should come from collaboration between editorial and product teams, and, as Burns pointed out, it should prioritize measurable success over bandwagon adoption. Nikola Bačić, editor-in-chief at Hercegovina Info, shared that his team holds a weekly AI meeting where editorial and IT staff discuss needs and potential solutions.

Once a tool is created, newsrooms must take the time to properly train the entire staff so that everyone has the skills needed to adopt it, if they so choose. Tatjana Sekulic, an executive multimedia producer at N1 in Bosnia and Herzegovina, told the group that uptake at her outlet is mixed; some journalists are completely against AI, while others are overly reliant on it. As a result, the newsroom has implemented training for everyone to ensure they all have the skills to use AI responsibly.

“It’s very important to teach them how to use AI in the proper way,” Sekulic said. “We’re investing in our knowledge and our people.”

Tatjana Sekulic (Photo: Kayla Goodson)

Finally, following implementation, it is important to continue conversations to diffuse broader adoption of the technology. Veronika Munk, director of innovations at Denník N in Slovakia, explained that while full newsroom training sessions did not always achieve the desired impact for her team, they still provided useful insights. Over time, the organization found that complementing these larger trainings with a more targeted approach worked better. When Denník N introduces a new tool, they now focus on training smaller groups within the team or on one-on-one micro-trainings. These trained editors and reporters can then gradually share their knowledge with other colleagues. She also emphasized that the organization’s use of AI always involves human oversight.

While participants responded to this discussion with enthusiasm, some relayed concerns about the cost of implementing AI tools in newsrooms with already thin budgets. 

Damjan Dano, a tech entrepreneur from North Macedonia, explained that creating customized AI tools is not the only option; there are a plethora of existing, inexpensive AI tools that newsrooms can use to support their work. Dano led participants through an exercise in which they laid out their AI wish lists, and he shared a variety of existing tools that could address some of those needs. Frase, for example, can help with SEO, and Asana AI can help manage newsroom workflows.

“AI is a great tool, not your substitution,” he reminded participants. “Your job is safe, but you have to use the tools that exist today.”

From left to right: Kaja Puto and Damjan Dano (Photo: Kayla Goodson)

These discussions came with a crucial caveat: Journalism’s future depends on “adaptability, but not at the expense of ethics,” Caro Kriel, chief executive of Thomson Foundation, said.

Ethics and Audience Relations Must Be a Core Part of Newsrooms’ AI Strategies

Another key theme over the two days of discussions was that AI should augment human judgment, not replace it. Newsrooms need to ensure their use of AI is driven by the values and ethics they espouse. It should enable and empower journalists to do their best work, to report relevant, important news and to deliver it to audiences effectively.

Marius Dragomir, director of the Media and Journalism Research Center, reminded participants that technology questions are never just technical. They’re about values, accountability and the social contract between journalists and their audiences. Every AI implementation carries an ethical weight that newsrooms must acknowledge and address.

“The trust of our audience is our highest value. To preserve this trust, we must be honest and transparent,” Vesna Ivanovska-Ilievska, co-founder and editor-in-chief of Umno.mk, said. 

From left to right: Branislava Lovre and Ilcho Cvetanoski (Photo: Kayla Goodson)

In a session on AI, ethics and trust, Branislava Lovre, co-founder of AImpactful, noted that newsrooms can increase audience trust by communicating about their uses of AI clearly and effectively. 

As CNTI’s global AI Research Working Group has written, there is not one rule book or exact labeling technique that newsrooms should follow. Instead, audiences, like journalists, are still getting used to AI. What matters most is that journalists and their organizations are transparent about their use of AI and carry out a dialogue with audiences about what that means.

It is not enough to simply have an AI policy or guideline, Erjon Curraj, a digital transformation expert from Albania, cautioned. Newsrooms must implement these policies consistently and ensure their staff is aware of them, too.

“This isn’t just about disclosure,” Lovre concluded. “This is our unique chance to lead by example. Social media networks are overwhelmed by AI-generated content, and if we don’t try to explain to our audience what is happening, we won’t be in a good place in one or two years.” 

Participants also discussed how policies and government regulations can impact media freedom and ethics. 

Ana Toskić from Partners Serbia and Emily Wright from CNTI highlighted the potential impacts of EU digital laws, especially the AI Act and the Digital Services Act, on journalism in the Western Balkans. They discussed how, in closed or partially closed media environments where journalists rely on social media platforms to share stories, laws like the Digital Services Act will have a significant impact. That impact is especially acute when regulatory bodies do not operate independently of the government and can misuse the laws to stifle independent reporting.

The pair further explained that media organizations located within the EU, and those in EU candidate states, will have to adhere to the General Data Protection Regulation (GDPR) and the AI Act when using AI in their operations. 

Emily Wright (Photo: Kayla Goodson)

Gábor Kardos, CEO of the Hungarian publisher Magyar Jeti Zrt., emphasized the importance of staying on top of regulatory activity, especially in autocratic-leaning countries where independent outlets are a minority among state-controlled media.

“Even if these regulations were created perfectly … the state will always have the power to abuse them. And that’s the reason I’m advocating for better regulation,” Kardos said. “But we, as publishers, need to be aware that that’s not the thing that’s going to protect us. We as a community have to protect each other and ourselves, and be innovative and be faster than regulation can ever be.”

Journalists Can Use AI to Help Address Challenges in the Information Environment 

In an era where press freedom is at risk, synthetic content floods our feeds and deepfakes grow more convincing by the day, journalists face an unprecedented challenge. Journalists are not just competing for attention anymore; they’re fighting for the very concept of verifiable truth.

The twist is that while AI can exacerbate some of the problems, it can also be part of the solution. Workshop participants recognized that AI tools could help newsrooms verify information faster and more thoroughly than ever before, allow them to reach new audiences and help them create new forms of content. The key lies in thinking ahead. 

“If we want to think strategically, we cannot focus only on the shortest term,” Kardos said. “It [the short term] does not matter if in five or seven years, it’s not journalism or AI, it’s humanity that will be in question. We need to focus on what happens in the midterm, within a few years.” 

Participants from several countries offered case studies of creative ways they have implemented AI into their workstreams, even on small teams with limited budgets that face pressure from their governments. 

The Center for Investigative Journalism of Serbia, which has a team of only 10 people, created a custom large language model (LLM) to analyze nearly 10 million pieces of data on wait times in Serbia’s healthcare system. The AI system allowed Ivana Milosavljević and her fellow reporters to work through large amounts of data in record time. She noted, importantly, that a human reviewed all outputs to ensure accuracy, which was time-consuming, but she shared that AI enabled the team to reach and visualize conclusions in new and efficient ways. The system was initially tested on publicly available data, and Milosavljević said it will be especially valuable for analyzing confidential data.
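
To make that kind of workflow concrete, here is a minimal sketch of LLM-assisted data extraction with a mandatory human-review flag. It is only an illustration under assumptions, not CINS’s actual system: the model name, record format, field names and prompt are all hypothetical.

```python
# Illustrative sketch only; not the CINS pipeline. Model, fields and prompt are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Extract the fields institution, procedure and wait_days from this healthcare "
    "appointment record. Answer only with JSON.\n\nRecord:\n{record}"
)

def extract_wait_time(record_text: str) -> dict:
    """Turn one free-text record into structured fields, flagged for human review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any capable model would do
        messages=[{"role": "user", "content": PROMPT.format(record=record_text)}],
        response_format={"type": "json_object"},
        temperature=0,
    )
    data = json.loads(response.choices[0].message.content)
    data["needs_human_review"] = True  # a reporter checks every output, as the CINS team did
    return data

if __name__ == "__main__":
    sample = "Clinical center A, knee MRI, referral issued 2024-03-01, appointment 2024-07-15"
    print(extract_wait_time(sample))
```

Batched over millions of records, a loop like this is what lets a 10-person team survey an entire health system, provided someone still reads the outputs.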

Ján Trangel, AI implementation lead at Ringier Slovakia, explained that the outlet has created an AI-driven hate speech moderation system after finding third-party tools ineffective for their language and regional needs. Ringier built custom hardware and an offline LLM — drawing on open-source options like Mistral and Google models — to evaluate messages in real time, classify their severity and support fully customizable moderation policies. The system processes more than 300,000 messages daily, reducing manual review while offering an admin panel for managing labels and decisions. Its offline architecture lets the team experiment freely, compare models and tailor features without depending on external cloud services. 
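
For a sense of the basic shape of such a setup, the sketch below classifies a single comment with a locally run open-source instruct model. It is not Ringier’s implementation; the model choice, label set and prompt are assumptions, and the production system adds batching, custom hardware, an admin panel and configurable policies.

```python
# Illustrative sketch only; not Ringier Slovakia's system. Model, labels and prompt are assumptions.
from transformers import pipeline

# Runs fully offline once the model weights have been downloaded locally.
moderator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # hypothetical choice of open-source model
)

LABELS = ["ok", "borderline", "remove"]  # hypothetical severity scale

def classify_comment(comment: str) -> str:
    """Ask the local model for exactly one severity label for a reader comment."""
    prompt = (
        "You are a comment moderator. Classify the following comment as exactly one of "
        f"{LABELS} and reply with the label only.\n\nComment: {comment}\nLabel:"
    )
    output = moderator(prompt, max_new_tokens=5, do_sample=False, return_full_text=False)
    label = output[0]["generated_text"].strip().lower()
    # Anything the model does not answer cleanly falls back to a human moderator.
    return label if label in LABELS else "needs_human_review"

if __name__ == "__main__":
    print(classify_comment("This is a perfectly polite comment."))
```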

Vidi Vaka, a Skopje-based outlet with only three full-time reporters, created “KiberFlow,” an AI coworker that transforms the outlet’s reported stories into rap-style videos. KiberFlow uses character-animated performances and social satire to highlight everyday societal problems and has more than 1,000 followers on Instagram. The project is giving Vidi Vaka a new and engaging way to reach a younger audience that usually avoids traditional media.

“For us, AI is not a shortcut; it’s a collaboration,” journalist Angela Petrovska said. “People follow KiberFlow not because it’s AI, but because it tells real stories made by good journalists.”

At Denník N in Slovakia, Munk and her team explore AI uses on a smaller hiking website they manage. Every Thursday, the team publishes a set of recommended weekend hikes, using AI to assist with suggestions based on weather forecasts and difficulty levels. They have also developed tools for automated image cropping and social media posting. While not every type of tool is tested there, the hiking site provides an environment for experimenting, learning and identifying what might be useful before these solutions are expanded to larger platforms across the Denník N network.
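
As a small example of what one of those utilities might look like, the sketch below center-crops an image to a fixed social-card size with Pillow. It is a guess at the general idea, not Denník N’s actual tool; the 1200×630 target size is an assumption.

```python
# Illustrative sketch only; not Denník N's tool. The 1200x630 social-card size is an assumption.
from PIL import Image, ImageOps

def crop_for_social(src_path: str, dst_path: str, size=(1200, 630)) -> None:
    """Scale and center-crop an image to a fixed social-media aspect ratio."""
    with Image.open(src_path) as img:
        # ImageOps.fit resizes and crops around the chosen centering point to hit the target size.
        fitted = ImageOps.fit(img, size, method=Image.Resampling.LANCZOS, centering=(0.5, 0.5))
        fitted.save(dst_path, quality=90)

if __name__ == "__main__":
    crop_for_social("hike_photo.jpg", "hike_photo_social.jpg")
```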

Participants explained that AI tools are often easy to learn and improve creative flow, but they noted that human editing, cultural context and emotional nuance remain essential to keep the work authentic. Overall, they expressed optimism about the opportunities that AI tools can provide their newsrooms and left the workshop feeling motivated to see themselves as innovators in the news industry.

Conclusion

Participants of the event sitting around the conference table in Sarajevo (Photo: Kayla Goodson)

There’s something fitting about having this conversation in Sarajevo, a city that knows something about resilience in the face of existential challenges. The participants didn’t offer easy answers or technological determinism. Instead, they charted a middle path, one that takes AI seriously without surrendering the core values that make journalism essential.

The future won’t be built by those who reject AI wholesale or embrace it uncritically. It will be built by newsrooms that approach these tools with clear eyes, strong ethics and an unwavering commitment to serving their audiences with verified, trustworthy information.