(Image: Unsplash)

Readers More Likely to Distrust News Reporting Aided by Artificial Intelligence, New Study Finds

Readers are less likely to trust news writing aided by artificial intelligence (AI), according to new findings from the University of Kansas.

The findings reveal how knowledge of AI’s role in producing reporting increases people’s distrust and points to a need for greater transparency among news agencies regarding how AI is used in their reporting. 

“The growing concentration of AI in journalism is a question we know journalists and educators are talking about, but we were interested in how readers perceive it,” said Alyssa Appelman, associate professor in the William Allen White School of Journalism and Mass Communications and co-author of two studies on the topic, in a statement. 

“So we wanted to know more about media byline perceptions and their influence, or what people think about news generated by AI,” she said. 

Appelman and co-researcher Steve Bien-Aimé conducted the research by creating an experiment in which they showed readers a news article about the artificial sweetener aspartame and its safety for human consumption. Each participant saw one of five bylines: the article was credited to a staff writer alone, to a writer who received help from an AI tool, to a writer using AI assistance, to a collaboration between a staff writer and AI, or entirely to artificial intelligence.

The team's findings were divided into a pair of research papers: one focused on AI bylines, while the other examined how readers' perceptions of humanness mediated the relationship between perceived AI contribution and overall judgments of credibility.

Participants were surveyed about the bylines they received and whether they agreed with various statements designed to assess their media literacy and attitudes toward AI. The results, detailed in the team's first paper, showed that participants held a broad range of views about AI and how it worked. Many reported believing humans were the primary contributors, while others thought AI had helped with the first draft or with research, with a human handling the editing.

Overall, participants had a good understanding of AI technology, how it works, and how humans collaborate with it. At the same time, the varied byline conditions left readers ample room to interpret how much each contributor had influenced the article. The researchers found that crediting AI in the byline negatively affected participants' perceptions of the source and the author, as well as their judgments of the article's credibility. Even when the byline read "written by staff writer," readers assumed the piece had been partially written by AI because no human name was attached to it.

Without that human connection, something as simple as adding a name could change readers' perceptions. The absence of one can have an adverse effect on the outlet, the source of the material, and overall credibility.

“Humanness contains intelligence and traits such as agency, empathy, and fairness. Readers expect journalists to be guided by the facts to put issues into the correct and fullest context. This is important because an old industry axiom says journalists, at their core, are ‘human beings telling stories about other human beings.’ Thus, understanding the complexities of the human experience is essential to quality journalism,” Bien-Aimé told The Debrief in an email.

The results showed that participants' opinions of the article's credibility were negatively affected regardless of what they thought AI had contributed to the story.

One of the techniques participants used to determine whether a story was composed by AI or by humans was sensemaking, an approach that involves using previously learned information to interpret unfamiliar situations.

The second paper focused on perception: readers' sense of an article's humanness mediated the link between perceived AI involvement and credibility judgments. The findings suggested that readers perceived greater transparency when AI's use was acknowledged and credited, but overall, having a human contribute to and report the story was perceived as more trustworthy.

“The big thing was not between whether it was AI or human: It was how much work they thought the human did. This shows we need to be clear. We think journalists have a lot of assumptions that we make in our field that consumers know what we do. They often do not,” Bien-Aimé said.

Participants perceived some degree of AI involvement in the article regardless of the byline, and higher perceived AI involvement led to lower credibility judgments, even among those who saw "written by staff writer."

The overall findings of both papers suggest that people view human contributions in traditionally human fields like journalism as more credible, while AI involvement can diminish credibility. But what does the data suggest about readers’ baseline understanding of AI in journalism?

“The data suggest readers have a working knowledge of AI providing content after responding to some type of prompt it received from a human,” Bien-Aimé said. 

Overall, credibility in news reporting remains fundamentally linked to the perception of a human component, even as AI systems continue to learn and grow. News outlets and editors should therefore be mindful of transparency about how AI is used; the team's recent findings indicate that such transparency may help build trust with their audiences.

“Right now, AI is not normalized in journalism, though it’s been utilized for about a decade in various ways. The big question is determining how much the public wants AI involved in news production,” Bien-Aimé said. 

Chrissy Newton is a PR professional and founder of VOCAB Communications. She currently appears on The Discovery Channel and Max and hosts the Rebelliously Curious podcast, which can be found on The Debrief's YouTube Channel and on all audio podcast streaming platforms. Follow her on X: @ChrissyNewton and at chrissynewton.com.