About This Study

How and why CNTI undertook this project



AI chatbots — such as ChatGPT, Grok, Amazon’s Alexa, Google AI Mode or the Washington Post’s “Ask the Post AI” — are software products that simulate human-like conversation and generate responses across a wide range of topics.

While even their creators caution against using them as arbiters of fact, research consistently demonstrates that people increasingly rely on them for information about the world. The line between “information” and “news” is hardly clear-cut. Information-seeking is also likely to include many topics where AI chatbot users might previously have turned to news sources.

We set out to learn from relatively early adopters in two countries (the U.S. and India) why and how they have incorporated AI chatbots into their information routines, and what this might mean for news providers.

Why we chose these two countries

Using the Respondent.io platform, the CNTI team recruited 27 adults in the U.S. and 26 in India. All of them participated in an hour-long virtual interview where they shared their screen and talked through their AI chatbot use and broader information habits.

Recruitment

To ensure that we were only speaking with regular users of AI chatbots, we excluded people who said they use AI chatbots less frequently than once a week.

To ensure that we were only speaking with people who pay attention to news information in the broadest sense, we excluded people who said they do not “keep informed about issues and events of the day” at least “somewhat closely.” In practice, most interviewees turn at least somewhat regularly to legacy news media sources, often via news aggregators and social media.

Beyond that, we took a maximum variation approach to a number of demographic variables within each country: age, gender, race and ethnicity, household income, educational attainment, location within the country and political ideology.
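The two screening rules above can be pictured as a simple eligibility filter. This is an illustrative sketch only: the field names, answer scales and cutoffs are hypothetical stand-ins, not the actual Respondent.io screener questions.

```python
# Illustrative recruitment screen (hypothetical field names and answer scales;
# the study's actual screener questionnaire is linked in the report).

CHATBOT_USE_LEVELS = ["daily", "several times a week", "about once a week",
                      "less than once a week", "never"]
NEWS_ATTENTION_LEVELS = ["very closely", "somewhat closely",
                         "not too closely", "not at all closely"]

def passes_screen(respondent: dict) -> bool:
    """Keep only weekly-or-more chatbot users who follow news at least somewhat closely."""
    uses_weekly = CHATBOT_USE_LEVELS.index(respondent["chatbot_use"]) <= 2
    follows_news = NEWS_ATTENTION_LEVELS.index(respondent["news_attention"]) <= 1
    return uses_weekly and follows_news

pool = [
    {"chatbot_use": "daily", "news_attention": "somewhat closely"},
    {"chatbot_use": "less than once a week", "news_attention": "very closely"},
]
eligible = [r for r in pool if passes_screen(r)]  # only the first respondent qualifies
```

Demographic variation (age, gender, income and so on) was then sought within the eligible pool rather than screened for, consistent with a maximum variation approach.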

After we had completed about 18 interviews in each country, we conducted additional targeted recruitment to capture groups that were not well represented in our initial pool, specifically adults aged 55 and older (both countries) and adults without a bachelor’s degree (U.S. only).

The full recruitment questionnaire and topline demographics of our interviewees are available here.

Think-aloud protocol

Concurrent think-alouds ask people to narrate their behavior and answer questions about it in real time. They are particularly valuable for understanding topics like information habits, where self-reported data are unreliable. The goal of this study is to observe the needs and desires of early adopters who use AI chatbots to receive news.

Our research protocol included three semi-structured modules. During the first 10-15 minutes of the conversation, we asked broad questions about interviewees’ news repertoires and routines. We then turned to a screenshare and asked them to walk us through an AI chatbot query. For the last third of the interview, we asked interviewees to walk us through past AI chatbot queries, compare the AI chatbots that they regularly use and provide more details about the other sources they turn to.

Researcher positionality

The researchers who designed the protocols, collected the data and analyzed the data are based in both the United States and India. CNTI’s small team cannot fully represent the diversity and breadth of these countries. In addition to relatively high educational status, our team as a whole has a high affinity for and knowledge about journalism. These attitudes may have colored our interactions with the interviewees.

The researcher who conducted all Indian interviews speaks both English and Hindi fluently and used both languages in interviews; all U.S. interviews were conducted in English.

The research team created transcripts of interview audio, adding copious screenshots to clarify references in the text. After anonymization, transcripts were imported into Dedoose qualitative analysis software.

Two researchers conducted the bulk of the coding process, working separately on U.S. and India interviews, with a third researcher contributing to coding U.S. interviews. The codes were developed iteratively, with some codes informed by existing frameworks and others emergent in the data.

Transcription process 

Interviews were conducted over Google Meet, and Google Gemini produced a first-draft transcript. Researchers reviewed each transcript for major errors prior to coding, and all quotations that appear in this report were reviewed by a researcher before publication.

Coding and follow-up analysis

The CNTI team’s initial coding schema focused on three different areas:

  • Informational needs and how people meet those needs
  • Interactions with AI chatbots
  • Verification practices and concerns

The three areas correspond roughly to the three chapters of this report. We coded screenshots, text and anything else that fell within the thematic areas.

After all documents were coded, we reviewed all excerpts with the same code within each country to inductively identify further themes and patterns within each larger category. The researchers were in constant discussion about similarities and differences, and reviewed each chapter repeatedly.

This type of qualitative approach surfaces the breadth of experiences, but not the frequency of those experiences. In general, any theme that was shared by at least three interviewees in a single country appears in the report.

Informational needs 

The codes that fell under this category were informed by a set of reasons for seeking information identified by audience practitioners, also known as the “news user needs” framework. This framework categorizes the reasons that people access news content into four basic needs:

  • People need help to act and decide.
  • People need to know what’s going on.
  • People need to understand what’s going on.
  • People need stories that make them feel something, whether that’s outrage or joy.
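One way to picture this portion of the codebook is as a small mapping from the four user needs to excerpt tags. The code names below are hypothetical examples for illustration, not the study’s actual Dedoose codes.

```python
# Illustrative sketch of the "news user needs" codebook as a data structure.
# Code names are invented examples, not the study's actual Dedoose codes.
USER_NEEDS = {
    "act_decide": "People need help to act and decide.",
    "know": "People need to know what's going on.",
    "understand": "People need to understand what's going on.",
    "feel": "People need stories that make them feel something.",
}

def tag_excerpt(excerpt: str, needs: list[str]) -> dict:
    """Attach one or more user-need codes to an interview excerpt."""
    unknown = [n for n in needs if n not in USER_NEEDS]
    if unknown:
        raise ValueError(f"Unknown code(s): {unknown}")
    return {"excerpt": excerpt, "codes": needs}

tagged = tag_excerpt("I asked the chatbot which phone to buy.", ["act_decide"])
```

A single excerpt can carry several codes at once, which mirrors how qualitative coding software such as Dedoose allows overlapping codes on the same passage.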

Interactions with chatbots

Two initial codes fell into this category.

The first was a broad sequencing code that we used to identify both:

  • extended interaction sequences with the AI chatbot, including repeated promptings or accepting suggested prompts and
  • toggling between chatbots and other elements of interviewees’ news repertoires.

The second was for anthropomorphization, and included all actions or discussions where interviewees treated the AI chatbot as a human participant in the interaction. This might include characterizations (e.g., “oh, it’s my best friend”) as well as prompting strategies that treat it as human (e.g., saying “please” or calling it “stupid” in a prompt).

Verification practices and concerns

We were interested in whether interviewees are concerned about the accuracy of the information they receive and how they approach verifying it.

Concerns we coded for included:

  • Privacy and data protection
  • Bias in both the AI chatbots and other news sources
  • Delayed or outdated information being provided
  • Broad questions about the information supply chain and how AI chatbots and other news sources verify (or don’t verify) their information

Verification practices and strategies we coded for (all AI chatbot-specific) included:

  • Checking the list of sources and assuming those sources were reflected accurately
  • Clicking through to external links and ensuring that those sources were reflected accurately
  • Comparing the responses between two different AI chatbots
  • Auditing AI chatbot responses by asking them known questions to assess performance before relying on them

We also coded several other verification pathways beyond these.

Data protection

The AI chat window is a deeply personal space, and the researchers in this project are incredibly grateful to the interviewees who opened it to them. CNTI’s audio and video recordings and follow-up messages from interviewees contain a great deal of personal information.

All identifying information (including consent forms and video recordings of interviews) was saved on a password-protected, encrypted cloud drive accessible only to the core research team at CNTI. All interviews were conducted using our team’s videoconferencing software. Google Gemini was used to create first-draft transcripts; our team uses a workspace account that does not share data or use it for training purposes. Interviewees could opt out of automated transcription, although none did.

Moreover, transcripts and screenshots were anonymized to the extent possible before export for analysis in Dedoose. Information like names, specific locations and employment details were redacted, as were photos of individual faces. We present demographic information only in the aggregate (see topline) to prevent anyone from identifying individuals who participated in this research.

Ethical review

Research plans were reviewed and approved by the TERC Institutional Review Board.

Consent forms were available in both English and Hindi, although no interviewees used the Hindi version of the form.
