
Researchers Reveal Startling Psychology Behind How AI Biases Affect Humans

AI systems can magnify even the smallest biases in the data they learn from, in turn increasing biased beliefs in the humans who use them, according to new research published in Nature Human Behaviour.

In the past, a wealth of psychological and sociological research has investigated how humans influence one another. More recently, researchers have taken to uncovering bias in AI models.

Now, UCL researchers have moved on to the next phase: how humans and AI systems create a feedback loop whose dynamics are notably distinct from human-to-human influence. Significantly, the researchers found numerous examples of real-world consequences arising from biased AI interactions.

“People are inherently biased, so when we train AI systems on sets of data that have been produced by people, the AI algorithms learn the human biases that are embedded in the data. AI then tends to exploit and amplify these biases to improve its prediction accuracy,” co-lead author Professor Tali Sharot of UCL Psychology & Language Sciences said.

“Here, we’ve found that people interacting with biased AI systems can then become even more biased themselves, creating a potential snowball effect wherein minute biases in original datasets become amplified by the AI, which increases the biases of the person using the AI,” Sharot added.
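A toy simulation makes this snowball effect concrete. The sketch below is a minimal illustration rather than the study's actual models: the stimulus scale, the 3% human bias, the threshold classifier, and the 30% follow rate are all assumptions chosen for demonstration. Because the classifier makes a hard decision at its fitted boundary, a marginal 53/47 skew in the human labels becomes a near-certain "sad" verdict on ambiguous faces, which then nudges the next group of raters further in the same direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimuli: emotional expression in [0, 1]; 0 = clearly sad, 1 = clearly happy.
stimuli = rng.uniform(0.0, 1.0, 20_000)

HUMAN_BIAS = 0.03  # assumed: a small tilt toward "sad" on ambiguous faces

def human_judges_sad(s, bias=HUMAN_BIAS):
    # Humans call a face "sad" with probability rising as the expression
    # gets sadder, plus a small negative bias.
    p_sad = np.clip(1.0 - s + bias, 0.0, 1.0)
    return rng.uniform(size=s.shape) < p_sad

labels = human_judges_sad(stimuli)

# "Train" the AI: brute-force the decision threshold that minimizes
# training error against the noisy, slightly biased human labels.
candidates = np.linspace(0.0, 1.0, 201)
errors = [np.mean((stimuli < t) != labels) for t in candidates]
ai_threshold = candidates[np.argmin(errors)]

def ai_judges_sad(s):
    # Deterministic output: the marginal bias in the labels is now amplified.
    return s < ai_threshold

# Compare human and AI behavior on the most ambiguous faces (near 0.5).
ambiguous = np.abs(stimuli - 0.5) < 0.02
print(f"human 'sad' rate on ambiguous faces: {labels[ambiguous].mean():.2f}")
print(f"AI    'sad' rate on ambiguous faces: {ai_judges_sad(stimuli[ambiguous]).mean():.2f}")

# Feedback step: a new rater sees the AI's answer and adopts it some of the time.
P_FOLLOW = 0.3  # assumed adoption rate
own = human_judges_sad(stimuli[ambiguous])
follow = rng.uniform(size=own.shape) < P_FOLLOW
updated = np.where(follow, ai_judges_sad(stimuli[ambiguous]), own)
print(f"human 'sad' rate after seeing the AI:  {updated.mean():.2f}")
```

Run as-is, the "sad" rate on ambiguous faces climbs from roughly 53% for unaided humans to essentially 100% for the fitted classifier, and to roughly two-thirds for humans who see the classifier's answer, mirroring the qualitative amplification pattern the researchers describe.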

Researching The AI Bias Feedback Loop

The researchers recruited over 1,200 subjects to participate in multiple studies. The team’s first study involved creating a set of digitally rendered faces conveying happiness, sadness, and stages in between. Participants were then asked to judge whether each face appeared happy or sad, and the team afterwards used those responses to train an AI algorithm.

According to previous research, humans have an innate tendency to make negative judgments when provided with ambiguous information. The AI picked up and amplified the participants’ marginal tendency to judge faces as sad rather than happy. A second group then completed the task, was shown the AI’s judgments, and was allowed to change their answers. Many participants switched to a negative judgment after seeing the AI’s response, and over repeated interactions they became increasingly likely to make a negative judgment on their own.
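One way to picture that last result is as a belief that drifts slightly toward the AI's answer on every trial. The loop below is purely illustrative, not the study's fitted model: the starting 53% tendency matches the toy setup above, while the learning rate, trial count, and always-"sad" AI are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

p_sad = 0.53          # the rater's starting tendency on fully ambiguous faces
LEARNING_RATE = 0.05  # assumed: how far each AI answer pulls the rater's belief
N_TRIALS = 40
AI_SAYS_SAD = 1.0     # assumed: the trained AI always answers "sad" on these faces

own_answers = []
for _ in range(N_TRIALS):
    own_answers.append(rng.uniform() < p_sad)        # the rater answers first...
    p_sad += LEARNING_RATE * (AI_SAYS_SAD - p_sad)   # ...then sees the AI and drifts toward it

print(f"'sad' answers, first 10 trials: {np.mean(own_answers[:10]):.1f}")
print(f"'sad' answers, last 10 trials:  {np.mean(own_answers[-10:]):.1f}")
print(f"internal P(sad) after {N_TRIALS} trials: {p_sad:.2f}")
```

Under this simple drift rule, the rater's unprompted judgments grow steadily more negative across the session even though no single AI answer forces a change, which is the qualitative pattern reported for the second group.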

In some experiments, the team employed the popular generative AI model Stable Diffusion. When asked to produce photos of financial managers, the AI created a higher proportion of white males than is statistically accurate. Again, the AI’s bias trickled down to humans. Researchers showed one participant group headshots of individuals and asked the subjects to pick which person was most likely to be a financial manager. After being shown the AI’s results, subjects were likelier to choose a white male than they had been beforehand.

The Feedback Loop Continues

The team found that their results replicated across a variety of tasks. Some were innocuous, like judging which direction dots traveled across a screen. Others may carry more significant real-world implications, such as overestimating men’s performance and underestimating women’s.

Participants mostly reported being unaware of how much the AI influenced their decision-making. Notably, when the UCL team told subjects that another person, rather than an AI, had made the judgments, participants were less likely to be influenced. The researchers believe this reflects a general tendency for people to expect AI to be more accurate than other humans.

“Not only do biased people contribute to biased AIs, but biased AI systems can alter people’s own beliefs so that people using AI tools can end up becoming more biased in domains ranging from social judgements to basic perception,” said co-lead author Dr. Moshe Glickman, also with UCL Psychology & Language Sciences.

“Importantly, however, we also found that interacting with accurate AIs can improve people’s judgements, so it’s vital that AI systems are refined to be as unbiased and as accurate as possible,” Glickman added.

The team’s new paper, “How Human–AI Feedback Loops Alter Human Perceptual, Emotional and Social Judgements,” appeared in Nature Human Behaviour on December 18, 2024.

Ryan Whalen covers science and technology for The Debrief. He holds an MA in History and a Master of Library and Information Science with a certificate in Data Science. He can be contacted at ryan@thedebrief.org and followed on Twitter @mdntwvlf.