Where Information Comes From

Few interviewees have deep knowledge about how either journalism or AI chatbots work. At the same time, interviewees expressed generally positive attitudes towards AI chatbots alongside generally negative ones towards news media.


TL;DR

ℹ️ Most interviewees rely on at least a few news outlets in addition to AI chatbots, but almost none expressed an understanding of journalistic methods.

ℹ️ They lack clear vocabulary to describe how AI chatbots process information and generate language.

ℹ️ They expressed generally negative attitudes towards news media products and generally positive ones towards chatbots.

ℹ️ They take the presence of cited and linked sources as an assurance of accuracy in AI chatbot outputs.

ℹ️ Two distinct factors tend to trigger verification: either the outputs contradict users’ assumptions or the stakes are high.

ℹ️ When they do verify outputs, there is no consensus about the best way to do so.

ℹ️ Past experiences of inaccurate or outdated information did not deter them from future use.

ℹ️ Interviewees are divided between those who worry about bias in AI chatbots and those who see AI chatbots as less biased than other sources of information — but neither group brings deep knowledge to its opinions.


Journalists and AI chatbots generate content through fundamentally different processes. Given that CNTI’s interviewees incorporate both into their news repertoires, we found it important to understand how they think about both processes as well as whether and how they verify the information.

How We Did This

Using the Respondent research platform, CNTI recruited adults who said they (1) use AI chatbots at least once a week and (2) “keep informed about issues and events of the day” at least somewhat closely. To learn about the breadth of use cases and opinions, we sought to maximize variation across demographics. See topline for details.

Our interview protocol incorporated a concurrent think-aloud approach. After a series of questions about general news and information habits, we asked interviewees to share their screen while demonstrating how they use one or more AI chatbots of their choice. We also asked them to walk us through AI chatbot interactions from their history and their use of other platforms and tools, including news aggregators, social media and news sites. These methods provide richness and depth; however, it’s not possible to generalize about the frequency of behaviors from these interactions, so we have refrained from using quantitative terms throughout this report.

CNTI’s analysis focused on reasons for seeking information identified by audience practitioners, the experience of interacting with AI chatbots and interviewees’ broader understanding of information.

The AI chat window is a deeply personal space, and the researchers on this project are incredibly grateful to the interviewees who opened it to them.

As with all CNTI research, this report was prepared by the research and professional staff of CNTI. This project was financially supported by CNTI’s funders.

See “About this study” for more details.

🇺🇲🇮🇳 Most interviewees rely on at least a few news outlets in addition to AI chatbots, but almost none expressed an understanding of journalistic methods. 

In the U.S., most interviewees have a strong sense that certain news sites count as credible. However, we saw no clear consensus among interviewees on which sites those are. (This is broadly consistent with other research on the polarization of media habits in the U.S.) In India, interviewees were uniformly negative about television news, while they displayed more affinity for international publications (e.g., Al Jazeera, BBC) and leading English-language Indian dailies (e.g., The Times of India, The Indian Express and The Hindu). Interviewees in both countries also prioritize different sources for different topics. For example, an interviewee might prefer a legal website over news sources for questions about current law. When asked how one determines credibility, almost no interviewee articulated an answer beyond a vague sense that some outlets have a political slant and must be used with caution — if not avoided altogether. The general lack of awareness among the interviewees about the process of journalism is consistent with findings from CNTI’s earlier focus groups and survey research.

🇺🇲🇮🇳 Interviewees lack clear vocabulary to describe how AI chatbots process information and generate language, so they default to using language that describes human processes like “thinking” and “reading.”

Many of the interviewees talked about what AI chatbots “know” or “understand” or “think,” which isn’t an accurate way to describe how they arrive at answers. In some cases, it was clear that interviewees used this language simply as a mental shortcut. In other cases, it seemed to further muddle misconceptions that interviewees held. Technically speaking, AI chatbots generate answers by predicting likely sequences of words from statistical patterns in their training data, not by reasoning. (Relatedly, there is a robust debate in the technical community about whether the term “hallucination” is appropriate, since it presupposes that large language models have minds. Furthermore, technical solutions have proved elusive; in fact, false statements have been found to be “mathematically inevitable” with current technology.)

Because interviewees analogize from human cognition, they often assume that AI chatbots have “read” and “understood” the links and sources they reference. As a result, our interviewees largely assume that responses accurately reflect the linked sources. A few interviewees in the U.S. specifically expressed a desire to see more documentation and training from developers about how AI chatbots work and how to prompt them for the best results. 

🇺🇲🇮🇳 While they lack deep knowledge about the underlying processes of both journalism and AI chatbot content, interviewees expressed generally negative attitudes towards news media products and generally positive ones towards chatbots.

Beyond the specific sites and sources they themselves prefer, many interviewees expressed a broadly negative view of the news media. Overall, interviewees in India are wary of bias, commercial interests and sensationalism in the news. Interviewees in the U.S. raise similar concerns along broadly partisan lines.

In contrast, these same interviewees are forgiving of and persistent with AI chatbots when given a wrong answer. Taking a collaborative stance, interviewees gently chide the AI chatbot to modify, clarify or correct the output. The interactivity seems to allow for second chances, while the interviewees have no such patience for fixed text.

In this context, interviewees can see AI chatbots as scaffolds for personal judgment. They use them to map the information environment, an approach that preserves decision-making power and independence of thought. In doing so, they assert control over what they know and believe, positioning themselves as the final arbiters of truth.

🇺🇲🇮🇳 Interviewees tend to take the presence of cited and linked sources as an assurance of accuracy in AI chatbot outputs, and do not feel the need to click through them.

For most interviewees in both countries, an AI chatbot’s display of sources counts as proof of accuracy, yet few actually check those sources every time. Instead, many interviewees take the mere presence of sources as a guarantee of quality. Typically, they open up the list of sources or hover over links to see the sites, and assess the credibility of the generated text on the basis of those links. Most interviewees assume that if sources were linked or cited, the text would reflect them accurately. As one interviewee in the U.S. put it, the fact that “you can always double check if you want to” meant there was no need to check. Similarly, many Indian interviewees view AI chatbots as neutral aggregators. One described AI chatbots as “nothing but a library” that picks up information stored by humans. Others said they believe AI chatbots collect data from “sources like Google and YouTube” to provide a complete picture. One person explicitly used Perplexity to avoid “one-sided opinions” from officials or news channels, believing the AI chatbot’s aggregation of multiple sources constituted a “neutral” truth. These examples illustrate a tendency toward automation bias among the Indian interviewees.

Complicating the story

Just one person, an interviewee in the U.S., expressed curiosity or concern about source weighting: “Okay, but what percentage of the output are you deriving from each of these sources? […] And it’s like, all right, if you’re weighting it 70% Fox News versus CNN, maybe next time just try to keep it 50/50, right? So that way, I’m not inherently getting information from one source versus another.”

🇺🇲🇮🇳 When interviewees do put in the work to verify AI chatbot outputs, it tends to be for one of two distinct reasons: either the outputs contradict their assumptions or the stakes are high.

We saw two very different reasons interviewees put in the work to verify the outputs they receive.

First, confirmation bias plays a role in what interviewees verify. When AI chatbot outputs “feel” correct, interviewees do less work to check them. Many interviewees said they only make an effort to verify information if their gut instinct suggests it is off in some way. When something conflicts with prior knowledge or seems biased, they are more inclined to put in the effort. For example, one Indian interviewee checked whether ChatGPT’s answers “match [their] thinking” on gold trends.

Second, interviewees put in more effort to verify information that has bigger consequences if inaccurate. When looking into legal procedures or specific legal rights, for example, we saw interviewees confirm the information the AI chatbots provided against official sources such as government agencies or law firms. In fact, one U.S. interviewee said outright that they only rely fully on AI chatbots for things they don’t care that much about; the rest of the time, they have at least some background information they can use to judge the response.

🇺🇲🇮🇳 When interviewees do want to verify AI chatbot outputs, there is no consensus about the best way to do so.

The most common strategy for verifying AI chatbot outputs among interviewees in both countries seems to be comparing the output of two different AI chatbots. Interviewees also compare AI chatbot outputs with search engines, social media and trusted individuals. A few interviewees diligently follow links to see if they match what the AI chatbot says, but not many. Another strategy used by interviewees is instructing an AI chatbot to limit its sourcing and only use “verified” or “evidence-based” references, or provide “proof-based answers.” This strategy still assumes that the output is consistent with the linked source material. Another version of this strategy involves interviewees asking AI chatbots to recommend good sources and then turning directly to those. Interviewees in both countries have also developed idiosyncratic auditing strategies, where they put AI chatbots through a series of tests before deciding whether to use them.

🇺🇲🇮🇳 A number of interviewees recalled getting inaccurate or outdated information in the past, but it did not deter them from future use.

While none of the interviewees verify information systematically, well over half said they had received inaccurate or unhelpful answers from AI chatbots at one time or another, but few could describe a specific instance.

A big concern among interviewees in both countries is that AI chatbots may rely on outdated or partial information. We saw several instances of interviewees navigating this issue in real time. One interviewee who asked about upcoming Big Ten football games said they had received last year’s schedule just a few days earlier. Another interviewee noticed that all of the linked articles in an AI chatbot output were dated early 2024, and a third was frustrated that the linked articles in a quickly developing story were a month old. In each case, the interviewee noticed the outdated information because they were already well informed on the topic, and then responded with a more specific prompt.

🇺🇲🇮🇳 In the search for unbiased information, interviewees are divided between those who worry about bias in AI chatbots and those who see AI chatbots as less biased than other sources of information — but neither group brings deep knowledge to its opinions.

This is an area where the lack of transparency and clarity about how AI chatbots work comes into play. At least one interviewee described AI chatbots as a “black box” and raised concerns that AI chatbots might covertly promote specific products without disclosing a financial interest. Others assumed they understood how AI chatbots work but made claims that can’t be fully verified. Many Indian interviewees said they used AI chatbots to escape the “bias” of mainstream media, without fully acknowledging that the models underlying these chatbots are trained on content drawn largely from that very media. Two interviewees in the U.S. expressed similar perspectives, one saying that AI chatbots “don’t have an opinion” and thus cannot provide biased information, disregarding the potential for biases in training data or outputs. Some interviewees assume that because the AI chatbot linked multiple sources with different viewpoints, the generated results must represent a “neutral truth.” In trying to explain their judgments about bias, several interviewees reached the limits of their knowledge. “Where does it get the information?” one wondered aloud. “I don’t know…” There’s no question that a synthesis across political standpoints would be valuable. What is difficult — perhaps even impossible — is determining whether AI chatbots can actually provide one.