
Robotic Deception: Researcher Investigates How Humans Handle Being Lied to by AI

Research at George Mason University in Virginia has investigated how humans respond to being lied to by a robot, shedding light on where human-AI interaction may be headed. 

Growing concerns about AI include mistakenly repeating false information, taking on programmer or data-set biases, and “hallucinations.” Andres Rosero, a George Mason University PhD candidate and lead author on the study, takes these questions a step further. 

What if the AI is not just wrong, but knowingly lying to humans? This new consideration has yielded nuanced answers about the ways humans accept or reject AI falsehoods. 

“I wanted to explore an understudied facet of robot ethics, to contribute to our understanding of mistrust towards emerging technologies and their developers,” said Rosero. “With the advent of generative AI, I felt it was important to begin examining possible cases in which anthropomorphic design and behavior sets could be utilized to manipulate users.” 

The Lies Robots Might Tell

Rosero’s research design focused on breaking lies down into distinct categories. The first was external state deception, in which the robot lies about conditions in the outside world. The next was hidden state deception, in which the robot’s design conceals some of its abilities. The last was superficial state deception, in which the robot claims it can do more than it really can. Each category formed the basis of a fictional scenario Rosero wrote to test human reactions. 

The scenarios mirrored fields that already employ robots and AI: medical care, cleaning, and retail. A robot caretaker for an Alzheimer’s patient telling her that her deceased husband would soon be home demonstrated external state deception. A cleaning robot secretly filming a visitor illustrated hidden state deception. A robot falsely complaining of pain after moving furniture, leading a human to order another person to take the robot’s place, formed the superficial state deception scenario. Together, the three provided examples of the many kinds of lies robots can be expected to tell. 

Human-Robot Interaction Is Evaluated

Rosero explained the practical reasons for using these fictional scenarios in his research. 

“The benefit of using a cross-sectional study with vignettes is that we can obtain a large number of participant attitudes and perceptions in a cost-controlled manner.” 

Almost five hundred participants took part in the study. The researchers provided them with the scenarios in writing, along with a follow-up questionnaire. Rosero designed the questions to gauge participants’ moral and emotional reactions to each behavior: whether they approved of it, whether it could be justified, how deceptive it seemed, and whether anyone besides the robot was responsible for the act. 

The collected surveys were then coded for analysis, revealing common themes. At least a few participants considered each incident justifiable, yet the hidden camera disturbed the cohort most. Respondents were roughly evenly split on whether the robot’s false claim of pain was unacceptable. When assigning blame, answers almost universally pointed to the robot’s developers or owners. 

Researcher Proposes Continued Vigilance in AI-Human Interaction 

Rosero placed his work in the context of a broader call for safeguards. “I think we should be concerned about any technology capable of withholding the true nature of its abilities because it could lead to users being manipulated by that technology in ways the user (and perhaps the developer) never intended,” he said. “We’ve already seen examples of companies using web design principles and artificial intelligence chatbots in ways that are designed to manipulate users towards a certain action. We need regulation to protect ourselves from these harmful deceptions.” 

He views his work as only a beginning, with more realistic and thorough research to come. 

“Vignette studies provide baseline findings that can be corroborated or disputed through further experimentation. Experiments with in-person or simulated human-robot interactions are likely to provide greater insight into how humans actually perceive these robot deception behaviors,” he explained. 

The paper “Exploratory Analysis of Human Perceptions of Social Robot Deception Behaviors” was published in Frontiers in Robotics and AI on September 5, 2024.

Ryan Whalen covers science and technology for The Debrief. He holds a BA in History and a Master of Library and Information Science with a certificate in Data Science. He can be contacted at ryan@thedebrief.org, and follow him on Twitter @mdntwvlf.