A recent study sheds light on how people judge the morality of decisions made by artificial intelligence (AI), revealing that ethical perceptions of AI can vary depending on the situation.
The research, published in Behavioral Sciences, found that study participants typically viewed decisions made by AI as more immoral and deserving of blame than the same decisions made by a human. However, participants judged AI and human decisions the same way when the question turned on the ethics of action versus inaction.
The results suggest people are more concerned with the appropriateness of allowing artificial intelligence to make consequential decisions than with the morality of the decisions themselves.
AI has increasingly become a significant influence in numerous professional sectors, such as healthcare, where AI systems are already being used to help diagnose illnesses and monitor medical conditions.
Consequently, debates have ensued regarding the ethical implications of allowing AI in decision-making processes that can significantly impact people’s property, health, and overall lives.
A recent installment of The Debrief’s Intelligence Brief newsletter reported on computer scientist and cognitive psychologist Geoffrey Hinton, known by many as the “Godfather of artificial intelligence,” expressing concerns over the alarming rate at which AI is progressing.
“They still can’t match us, but they’re getting close,” Hinton said at the Collision Technology Conference hosted in Toronto, Canada, June 26-29.
These concerns are particularly relevant when it comes to integrating artificial intelligence in military and defense systems, including the ability of AI to make decisions in employing lethal force.
Earlier this year, The Debrief reported that the U.S. Department of Defense had quietly launched a new program that could unleash thousands of autonomous land, sea, and air drones to overwhelm and dominate an enemy’s area defenses.
The Defense Advanced Research Projects Agency (DARPA) says the program, dubbed the “Autonomous Multi-Domain Adaptive Swarms-of-Swarms” program, or “AMASS,” is not intended to develop autonomous drones that can independently execute lethal missions.
“The central technical challenge of this program is to design human-in-the-loop planning and establish criteria to bind the autonomous operations. This includes the establishment of geofences for allowed operations, required confidence levels and permissions before taking action, and automated mission termination,” a DARPA spokesperson told The Debrief.
The debate over AI is compounded by the fact that many people do not view AI systems as technological tools employed by humans, but as autonomous entities that should be held accountable for their actions.
Perceptions of AI as a sentient being, and the concerns that accompany them, have been fostered in part by popular science fiction films such as The Terminator, I, Robot, and The Matrix, which depict artificially intelligent machines rising up against their human progenitors.
People’s moral judgments concerning AI actions lie at the crux of this conversation.
Evaluating when moral norms are violated, or assessing if something is good or bad, right or wrong, permissible or impermissible, is a fundamental aspect of human behavior.
In many fictional portrayals, machine intelligence’s cold, logical, and dispassionate nature renders it incapable of understanding the nuance of human emotion and, thus, incapable of making morally sound decisions. For example, in the 2017 film Singularity, a supercomputer built to end all wars concludes that humankind is the root cause of war and that, to truly eradicate mass violence, it must wipe out humanity.
Scientific research on whether people perceive AI as independent agents or tools remains inconclusive.
Some studies suggest that people judge moral decisions made by AI systems more critically than identical decisions made by humans, while others indicate that people judge AI more leniently.
To delve deeper into the topic, a team of Chinese researchers led by Dr. Yuyan Zhang conducted a study examining how people form moral judgments about the behavior of AI agents in different moral dilemmas.
A moral dilemma arises when an individual faces conflicting ethical choices, which makes determining the morally correct course of action challenging. When facing such dilemmas, people typically make one of two choices: a utilitarian or deontological choice.
In a utilitarian choice, a person’s decision is based on the expected consequences. Conversely, with a deontological choice, a person determines what is right based on accepted moral norms and rules, irrespective of the consequences.
The researchers organized three experiments to explore how people viewed AI decisions from a utilitarian or deontological perspective.
The first experiment aimed to determine whether people apply the same moral norms and judgments to human and AI agents in a moral dilemma thought to be driven primarily by controlled cognitive processes. The study included 195 undergraduate students, predominantly female, with an average age of 19.
Each participant read about a trolley dilemma scenario featuring either a human or an AI agent. In the trolley dilemma, the agent had to choose between doing nothing, allowing a speeding trolley to run over and kill five people, or acting to redirect the trolley onto a second track, where it would kill only one person.
Participants were randomly assigned to one of four groups, each reading about a specific type of agent (human or AI) making a particular choice (action or inaction). Participants then rated the morality, permissibility, wrongness, and blameworthiness of the agent’s behavior.
The second experiment followed a similar design but presented a slightly different moral dilemma, known as the footbridge dilemma.
The footbridge dilemma mirrored the trolley dilemma, with the agent deciding whether to act or refrain from acting. In this scenario, a speeding trolley was again heading toward five people, and the agent could either allow the tragedy to occur or push a person off a footbridge into the trolley’s path. Pushing the individual off the bridge would kill that person but save the five people further down the track.
The third experiment aimed to ensure that the results were not driven solely by differences in the moral views of the participants in the previous two studies.
Like the previous experiments, each scenario had four variants (human vs. AI agent, action vs. inaction), and the researchers randomly assigned which scenario each participant would read.
The results of the first experiment indicated that participants judged the decision of the human agent as more moral than the decision of the AI agent, regardless of the choice made. Participants also rated the AI agent as deserving more blame for its decision. On average, action and inaction were rated as equally moral and equally blameworthy.
In the second experiment, participants rated the decision to intentionally push a person off a bridge (to save five other people) as less moral than the decision to do nothing, regardless of whether it was made by a human or an AI agent.
In the third experiment, participants rated the action in the footbridge dilemma as less moral and more wrong than the action in the trolley dilemma. However, participants considered inaction in both dilemmas equally moral or wrong. This was regardless of whether the decision-making agent was a human or an AI.
Nevertheless, participants in the third experiment judged the actions of the AI in the trolley dilemma as less permissible and more deserving of blame than those of a human, whereas there was no such difference in the footbridge dilemma.
“In the trolley dilemma, the agent type rather than the actual action influenced people’s moral judgments. Specifically, participants rated AI agents’ behavior as more immoral and deserving of more blame than humans’ behavior, regardless of whether they act utilitarianly or deontologically,” researchers wrote. “Conversely, in the footbridge dilemma, the actual action rather than the agent type influenced people’s moral judgments.”
Researchers said the differences in how AI decisions were viewed could stem from participants relying on different cognitive processes when making moral judgments about the two dilemmas.
“Controlled cognitive processes frequently come into play when responding to dilemmas such as the trolley dilemma, whereas automatic emotional responses are more prevalent when responding to dilemmas such as the footbridge dilemma,” researchers wrote.
“Thus, in the trolley dilemma, controlled cognitive processes may steer people’s attention towards the agent type, leading them to judge it as inappropriate for AI agents to make moral decisions. In the footbridge dilemma, pushing someone off a footbridge may evoke stronger negative emotions than operating a switch in the trolley dilemma. Driven by these automatic negative emotional responses, people would focus more on whether the agents did this harmful act and judged this harmful act less acceptable and more morally wrong.”
However, the researchers note that the study was designed to examine how people make moral judgments about human and AI agents; the underlying psychological mechanisms behind those judgments were not investigated, leaving room for further research.
The recent study demonstrates the complex dynamics in human perceptions of AI decision-making. Perhaps unsurprisingly, the results suggest that people are chiefly concerned with whether AI should be granted the ability to make consequential decisions at all, while in more emotionally charged dilemmas they find directly harmful action more distasteful than inaction, regardless of who makes the choice.
As AI continues to evolve and play a more significant role in various aspects of society, it becomes increasingly crucial to understand these intricate relationships and address the ethical implications of machine intelligence.
Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com