(Credit: C. Macanaya/Unsplash)

Are AI Systems Truly Conscious? This Researcher Says Humanity May Never Know—and Explains Why That Matters

In the race to build ever-smarter machines, one philosopher is asking an uncomfortable question: What if we cannot know whether an artificial intelligence is conscious, and what if that uncertainty itself is the real danger?

For decades, debates about “conscious AI” have split into two camps: optimists who think a sophisticated enough machine could one day have experiences like ours, and skeptics who insist consciousness is a strictly biological phenomenon.

In a new paper titled “Agnosticism About Artificial Consciousness,” Tom McClelland, a philosopher at the University of Cambridge, argues that both sides are overconfident. The only honest answer right now, he says, is that we probably won’t know any time soon.

Many users of large language models like ChatGPT have come away convinced that the machine they are speaking to is a real person. Recent stories detail users falling in love with their chatbots, and thanks to the human-like interactions they can provide, these AI tools are increasingly being viewed as a substitute for genuine human connection in the 21st century.

McClelland’s central idea concerns the confusion many people feel when dealing with an LLM. What does it mean to be conscious, and can all those zeroes and ones ever actually achieve it?

Everything scientists currently understand about consciousness comes from studying biological creatures like humans and, to a lesser extent, animals like octopuses and monkeys. When we try to apply those findings to computer systems built from silicon chips instead of neurons, he argues, we hit what he calls an “epistemic wall”: a point at which our knowledge runs out and the evidence we currently have can take us no further. We ‘guess’ rather than ‘know.’

McClelland insists that claims about AI consciousness should follow a principle he calls “evidentialism”: if you say an AI is or isn’t conscious, your claim should be grounded in solid scientific evidence, not vibes, sci‑fi stories, or metaphysical faith. And that, he says, is exactly where the current discussion fails.

In humans, the science of consciousness relies on messy but workable tools such as brain scans, behavioural experiments, and models like Global Workspace Theory, which link specific kinds of information processing with awareness rather than unconscious processing. Those tools allow reasonably confident judgments about, say, whether a patient in a coma shows signs of awareness or whether an octopus is likely to feel pain.

But none of these tools explains the “why” at the heart of the so‑called hard problem of consciousness: why any physical processing should give rise to subjective experience at all.

“We do not have a deep explanation of consciousness,” McClelland explains in the paper. “There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological.”

Because we don’t understand the nuts and bolts of consciousness, McClelland argues that confident ‘yes‑or‑no’ answers about future conscious-seeming AI systems are not scientifically responsible. In other words, we have no way to tell “this thing is genuinely conscious” apart from “this thing is a perfect non‑conscious mimic.”

At first glance, this might sound like a technical quarrel among philosophers in their ivory towers. But McClelland’s agnosticism has direct implications for the rest of us, because laws, policies, and social norms are already being written under the assumption that we will soon have tests for machine consciousness.

Large tech companies, meanwhile, are already pumping out rhetoric about the capabilities of their AI tools and marketing the next great leap in AI development.

“There is a risk that the inability to prove consciousness will be exploited by the AI industry to make outlandish claims about their technology,” he writes. “It becomes part of the hype, so companies can sell the idea of a next level of AI cleverness.”  

In turn, McClelland is concerned that research grants will be diverted to the study of AI consciousness when those funds could be used more effectively elsewhere.

“A growing body of evidence suggests that prawns could be capable of suffering, yet we kill around half a trillion prawns every year. Testing for consciousness in prawns is hard, but nothing like as hard as testing for consciousness in AI,” he explains.

Beyond the financial interests of tech firms and their investors, there are obvious social, cultural, and even personal implications, some of which have already begun to manifest.

If we wrongly assume that advanced AIs are not conscious when they are, we could be creating and exploiting beings capable of suffering. But if we wrongly assume they are conscious when they are not, we risk pouring care, legal rights, and empathy into systems that do not actually feel anything, potentially at the expense of humans and animals who do. And this is the philosophical rub.

McClelland says that both mistakes become more likely if we pretend to know more than we do. He points out that people are already treating chatbots as if they were conscious companions, with surveys finding that more than a third of people have felt a system “truly understood” their emotions or seemed conscious. AI companies, meanwhile, have strong incentives to play up that impression. Without a clear scientific basis for deciding who, if anyone, is really conscious, public belief and marketing could drift far from reality.

In the paper, McClelland suggests shifting the ethical spotlight from consciousness in general to a narrower and more morally urgent notion: sentience.

In simple terms, sentience is the capacity for experiences that are good or bad for the subject. For humans, it’s our ability to feel pleasure or suffering. Many moral theories already treat sentience as what really matters ethically, whether in humans, animals, or potentially even in digital minds. McClelland argues that even if we remain agnostic about whether an AI is conscious at all, we can still ask a slightly different question: if this system were conscious, what kinds of experiences would it be having?

Instead of trying to build a “consciousness meter” for AI, researchers and regulators could focus on designing systems whose internal states, as far as we can tell, would not naturally correspond to pain, fear, or despair if they were conscious. 

This shift opens up a practical path that, if applied, could change how companies and governments talk about and design advanced AI. It would encourage more transparency about architectures, more interdisciplinary work on the science of sentience and emotion, and a cautious approach to systems that imitate human distress or self‑awareness for persuasive effect.

As AI companies push ever farther and faster in their race to stay ahead and generate revenue, the question of whether the things they are building are “alive” becomes increasingly important. Equally, as AI systems grow more capable and more lifelike, the primary risk is not just whether they become conscious, but whether our beliefs about their minds—right or wrong—reshape how we treat each other, structure our laws, and allocate our moral concern.

By avoiding leaps of faith and remaining skeptical, McClelland argues, we could slow the race toward future AI, allowing for better regulation and transparency.

“If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism,” McClelland writes.

“We cannot, and may never, know.”

MJ Banias covers space, security, and technology with The Debrief. You can email him at mj@thedebrief.org or follow him on Twitter @mjbanias.