Finnish philosopher, psychology researcher, and philosophy professor Frank Martela from Aalto University says the Voyager agent in the popular game Minecraft meets all three conditions to indicate it possesses free will.
Defining free will as “the ability to have goal-directed agency, make genuine choices, and to have control over its actions,” the professor says that because these systems meet those conditions, developers will have to consider the role of morality when programming generative AI systems that will act independently.
The professor argues that integrating a base level of morality into AI systems could be critical, as future AI systems that think like humans, including self-driving cars and automated security systems, will likely be tasked with making life and death choices for humans.
“We are entering new territory,” Martela explained. “The possession of free will is one of the key conditions for moral responsibility. While it is not a sufficient condition, it is one step closer to AI having moral responsibility for its actions.”
Minecraft Voyager AI & the Role of Free Will
Long before neuroscientists, biologists, and quantum physicists began debating the biological and physical underpinnings of free will, the conversation was held mostly between philosophers and psychologists. When exploring the concept of free will in biological systems, some philosophers rely on the work of Daniel Dennett and Christian List. Those noted philosophers outlined three fundamental conditions: goal-directed agency, independent decision making, and control over one’s actions.
Although the concepts were initially developed to explore the idea of free will in biological organisms, Martela decided to apply them to modern generative AI models trained to mimic human behavior. Specifically, he looked at an AI agent from the popular game Minecraft and another used in autonomous aerial systems.
“Focusing on two running examples, the recently developed Voyager, an LLM-powered Minecraft agent, and the fictitious Spitenik, an assassin drone, I will argue that the best (and only viable) way of explaining both of their behavior involves postulating that they have goals, face alternatives, and that their intentions guide their behavior,” Martela writes.
After careful study and analysis, Martela determined that both systems passed all three conditions. From a philosophical standpoint, he concluded, these AIs already possess free will.
“Both seem to meet all three conditions of free will,” Martela said. “[F]or the latest generation of AI agents, we need to assume they have free will if we want to understand how they work and be able to predict their behaviour.”
Notably, Martela said that along with the Minecraft agent and Spitenik, his approach would also be “broadly applicable” to determining whether the generative AI systems that power most of the popular Large Language Models (LLMs) available today also possess free will. However, the researcher affirms he only examined these two models.
The Role of Morality as AI Becomes “Adult”
In the study outlining his analysis of the Minecraft agent and Spitenik against the three key indicators of free will, the Finnish philosophy professor argues that the next step for developers of these systems is to include some base level of morality and eliminate developer bias when possible.
“AI has no moral compass unless it is programmed to have one,” Martela said. “But the more freedom you give AI, the more you need to give it a moral compass from the start. Only then will it be able to make the right choices.”
The professor argues that considering morality in AI development is becoming even more critical as “AI is getting closer and closer to being an adult.” He also cautions that when developers instruct an AI to behave in certain ways, they are passing along their own biases, including their “moral convictions,” in the process.
“We need to ensure that the people developing AI have enough knowledge about moral philosophy to be able to teach them to make the right choices in difficult situations,” Martela explained.
Christopher Plain is a Science Fiction and Fantasy novelist and Head Science Writer at The Debrief. Follow and connect with him on X, learn about his books at plainfiction.com, or email him directly at christopher@thedebrief.org.
