
What the Human Brain Sees That AI Can’t: New Study Reveals Our Unique Edge in Navigating the World

Most of us have probably never paused to consider how effortlessly our brains decide whether to walk, climb, or swim when encountering a new environment. In an instant, without conscious thought, the brain sizes up a scene and tells us how we can move through it, a feat that even the most advanced AI systems struggle to match.

According to new research, this remarkable ability we take for granted every day relies on specialized neural processes that artificial intelligence, despite its rapid advances, has yet to replicate.

Researchers from the University of Amsterdam have uncovered how the human brain encodes so-called “locomotive action affordances”—the opportunities for movement that our surroundings present. 

Published in PNAS, their work provides fresh insights into how our brains recognize, at a glance, whether we can walk through a field, climb over rocks, or dive into water.

Perhaps most strikingly, the study highlights how deep neural networks (DNNs)—AI systems inspired by biological brains—fail to replicate this fundamental aspect of human perception.

The team’s findings point to a profound difference between natural and artificial intelligence, with implications for everything from robotics to self-driving cars. While AI systems have made enormous strides in recognizing objects and classifying scenes, they still struggle to grasp what those scenes afford in terms of physical action.

“Even the best AI models don’t give exactly the same answers as humans, even though it’s such a simple task for us,” study co-author and computational neuroscientist Dr. Iris Groen explained in a press release. “This shows that our way of seeing is deeply intertwined with how we interact with the world.”

“We connect our perception to our experience in a physical world,” she added. “AI models can’t do that because they only exist in a computer.”

The researchers combined brain imaging, behavioral studies, and machine learning analysis to explore this capability. Volunteers in the study were shown images of indoor and outdoor environments, and their brain activity was monitored using functional MRI (fMRI). 

Sample of images shown during experiments. (Image Source: University of Amsterdam, Bartnik, et al.)

The goal was to map how the brain represents different locomotive possibilities, such as walking, climbing, swimming, crawling, jumping, or flying.
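
To make that setup concrete, here is a minimal sketch of how such affordance annotations can be organized: one label per action, per image. The six actions come from the study; the example images and label values below are made-up placeholders rather than the actual data.

```python
import numpy as np

# The six locomotive actions probed in the study.
ACTIONS = ["walking", "climbing", "swimming", "crawling", "jumping", "flying"]

# Illustrative multi-label annotations: rows are images, columns are actions.
# A 1 means raters judged the action possible in that scene.
# (Made-up values; the real study collected graded human ratings.)
affordances = np.array([
    [1, 0, 0, 0, 1, 0],   # e.g., an open field: walk, jump
    [0, 1, 0, 1, 0, 0],   # e.g., a rocky slope: climb, crawl
    [0, 0, 1, 0, 1, 0],   # e.g., a lake shore: swim, dive in (jump)
])

for row in affordances:
    possible = [a for a, flag in zip(ACTIONS, row) if flag]
    print("Afforded actions:", ", ".join(possible))
```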

“We wanted to know: when you look at a scene, do you mainly see what is there – such as objects or colors – or do you also automatically see what you can do with it?” Dr. Groen explained. “Psychologists call the latter ‘affordances’ – opportunities for action; imagine a staircase that you can climb or an open field that you can run through.”

The data revealed that specific regions of the human visual cortex—especially those areas responsible for processing scenes—light up in patterns that directly encode the types of movement possible in a given environment.

Intriguingly, these patterns were distinct from those activated by other properties, such as recognizing objects, surfaces, or general scene categories. This suggests that our brain carves out a separate space to understand how we can act within our surroundings.
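
Findings like this are typically established with representational similarity analysis (RSA), which asks whether the geometry of a brain region’s responses tracks a given property better than alternatives. The sketch below shows the core RSA computation with random arrays standing in for fMRI patterns and affordance ratings; it illustrates the general method, not the authors’ actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Stand-ins: voxel patterns for 20 images in a scene-selective region,
# and a model feature matrix (e.g., affordance ratings per image).
brain_patterns = rng.normal(size=(20, 500))    # images x voxels
affordance_feats = rng.normal(size=(20, 6))    # images x actions

# Representational dissimilarity matrices: pairwise distances between
# the responses (or features) evoked by each pair of images.
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(affordance_feats, metric="euclidean")

# Spearman correlation between the two RDMs: how well the affordance
# model explains the geometry of the brain region's responses.
rho, p = spearmanr(brain_rdm, model_rdm)
print(f"RSA correlation: rho={rho:.3f}, p={p:.3f}")
```

Comparing this correlation against those obtained from object- or category-based feature matrices is one way researchers can argue that the affordance signal is distinct.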

In parallel, the researchers tested a variety of deep neural networks trained on everyday visual tasks like object recognition or scene classification. These AIs could identify whether an image depicted a kitchen, a forest, or a city street—but they fared poorly at judging what movements those environments allowed.

Results showed that the alignment between AI activations and human brain patterns was weak for locomotive affordances, even though it was strong for object-related tasks.
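
For readers curious how such model-brain alignment is quantified, here is a hedged sketch: extract a pretrained network’s activations for the same images and compare its representational geometry to the brain’s, exactly as in the RSA sketch above. The choice of an off-the-shelf torchvision ResNet is an assumption made for illustration; the study evaluated a range of networks.

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights
from torchvision.models.feature_extraction import create_feature_extractor

# A standard ImageNet-trained network standing in for the models tested.
model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
extractor = create_feature_extractor(model, return_nodes={"avgpool": "feats"})

# Dummy batch standing in for the experiment's scene images.
images = torch.randn(20, 3, 224, 224)

with torch.no_grad():
    feats = extractor(images)["feats"].flatten(1)  # images x features

# These features can now be turned into an RDM (as in the RSA sketch
# above) and correlated with the brain RDM to score alignment.
print(feats.shape)  # torch.Size([20, 2048])
```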

The team then tried fine-tuning these AI models, training them to classify images by affordance or using language embeddings focused on action possibilities. 

While this improved the alignment somewhat, none of the tested systems fully captured the nuanced way human brains represent locomotive possibilities. The gap between human and machine perception remains wide, at least in this domain.
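
As one hedged illustration of what training a model “to classify images by affordance” could look like, the sketch below swaps a pretrained network’s final layer for a multi-label head over the six actions and takes a single training step with a binary cross-entropy loss. The optimizer, learning rate, and dummy labels are illustrative assumptions, not the study’s recipe.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

NUM_ACTIONS = 6  # walking, climbing, swimming, crawling, jumping, flying

# Replace the 1000-way ImageNet classifier with a multi-label head.
model = resnet50(weights=ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_ACTIONS)

# Each action is an independent yes/no judgment, so a scene can afford
# several actions at once (e.g., a beach: walking and swimming).
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative training step on dummy data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, NUM_ACTIONS)).float()  # made-up affordances

logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```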

“When trained specifically for action recognition, they could somewhat approximate human judgments, but the human brain patterns didn’t match the models’ internal calculations,” Dr. Groen said. 

One of the study’s key takeaways is that understanding how we navigate the world requires more than recognizing what’s in it—it requires grasping what we can do within it. Our brains manage this automatically, without conscious thought and without needing explicit instructions.

“Our results suggest that locomotive action affordance perception in scenes relies on specialized neural representations different from those used for other visual understanding tasks,” the researchers wrote. “Training DNNs directly on affordance labels or using affordance-centered language embeddings increases alignment with human behavior, but none of the tested models fully captures locomotive action affordance perception.” 

The study opens exciting new avenues for improving artificial intelligence systems, particularly those designed to operate in dynamic environments. Better models of affordance perception could transform self-driving vehicles, delivery robots, and even AI assistants in virtual environments, allowing them to interact with the world in more human-like ways.

For now, the work underscores how much we still have to learn from our own biology. AI may have surpassed human capabilities in specific tasks like playing chess or analyzing massive data sets. 

However, we remain far ahead when it comes to the intuitive understanding of space and movement—something that allows a child to clamber over playground equipment or an adult to navigate a crowded city street.

The results also remind us that intelligence is more than data processing or pattern matching. The human brain sees possibilities for action, weaving together perception and potential in ways that today’s artificial minds are only beginning to grasp.

Studies like this will be essential as AI researchers work to bridge this gap. Scientists hope these new insights will help inspire the next generation of AI systems—ones that can navigate the world with greater efficiency and energy savings, much like the human brain.

“Current AI training methods use a huge amount of energy and are often only accessible to large tech companies,” Dr. Groen said. “More knowledge about how our brain works, and how the human brain processes certain information very quickly and efficiently, can help make AI smarter, more economical and more human-friendly.” 

Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com