
The Body Gap: Researchers Warn AI Lacks the Physical Grounding That Shapes Human Thought

As AI and humans edge closer to what some describe as the singularity, the distinction between human and machine may hinge on what researchers are calling the “body gap.” A new perspective from a UCLA Health team argues that today’s artificial intelligence systems can describe human experience but fundamentally lack the lived, bodily grounding that shapes how humans think, decide, and behave.

The body gap, the researchers argue, represents not just a philosophical limitation but a practical concern for AI safety and alignment. The concept hinges on two key ideas: external embodiment (a system’s ability to perceive and act in the world) and internal embodiment (a continuous awareness of internal bodily states such as fatigue, uncertainty, stress, or need).

“Without a body or analogous internal physiology, AI systems may approximate, simulate, or even convincingly reproduce the expression of understanding,” said Akila Kadambi, the study’s first author and a postdoctoral fellow in the Department of Psychiatry and Biobehavioral Sciences at UCLA’s David Geffen School of Medicine. “But simulation is not the same as experience.”

“On that view, an AI could be highly intelligent, or even sentient in some abstract sense, yet still lack the embodied grounding that characterizes human understanding,” Kadambi said in an email to The Debrief.

At its core, human cognition is shaped by connection, not isolation. The researchers point to even simple actions—like passing the salt across a table—as examples of how behavior is informed by more than the task itself. Such actions draw on a lifetime of sensorimotor experience: how objects feel, how distances are judged, and the social context or intent behind the interaction. In other words, human behavior is deeply contextual. The brain continuously integrates this external input with internal bodily signals, creating a feedback loop that influences attention, memory, and decision-making. According to the UCLA researchers, AI systems lack this internal layer.

“While there is a current focus in world modeling on external embodiment, such as our outward interactions with the world, far less attention is given to internal dynamics, or what we term ‘internal embodiment,’” Kadambi said. “In humans, the body acts as our experiential regulator of the world, as a kind of built-in safety system.”

“If you’re uncertain, if you’re depleted, if something conflicts with your survival, your body registers that,” she added. “AI systems right now have no equivalent. They can sound experiential, whether they should be or not, and that’s a real problem for many reasons, especially when these systems are being deployed in consequential settings.”
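To make the idea concrete, here is a toy sketch of how an internal signal might gate behavior. Nothing in it comes from the paper; the class, signal names, and thresholds are all invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class InternalState:
    """Hypothetical internal signals, loosely analogous to bodily states."""
    energy: float = 1.0        # depletion proxy (1.0 = fresh, 0.0 = exhausted)
    uncertainty: float = 0.0   # running estimate of how unsure the system is

    def register(self, task_cost: float, confidence: float) -> None:
        # Each task depletes energy and updates the uncertainty estimate,
        # mimicking how bodily signals accumulate over time.
        self.energy = max(0.0, self.energy - task_cost)
        self.uncertainty = 1.0 - confidence


def act(request: str, state: InternalState) -> str:
    # A crude stand-in for a "built-in safety system": low energy or high
    # uncertainty makes the system defer instead of answering confidently.
    if state.energy < 0.2 or state.uncertainty > 0.7:
        return f"Deferring on '{request}': internal state flags a risk."
    return f"Answering '{request}'."


state = InternalState()
state.register(task_cost=0.9, confidence=0.2)  # a draining, uncertain task
print(act("diagnose this patient", state))     # -> defers
```

In this sketch, a draining and uncertain task pushes the internal state past its thresholds, so the system defers rather than answering: a crude software analogue of the bodily safety system Kadambi describes.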

Kadambi explained that current AI models can produce human-like language without any lived experience behind what those words mean.

The study also highlights experiments with point-light displays: animations that reduce a moving body to a handful of dots placed at its major joints. Several advanced AI systems failed to recognize these patterns, which humans readily identify as human movement, instead interpreting them as unrelated shapes or even astronomical phenomena. When the displays were altered, the systems’ performance deteriorated further, suggesting a fragile mode of perception that contrasts sharply with the stability embodied experience gives human observers.
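For readers unfamiliar with the paradigm, the hypothetical sketch below generates a crude point-light-style frame and a scrambled variant of it. Real stimuli come from motion capture of actual people; the joint list and motion model here are stand-ins, not the displays used in the study.

```python
import math
import random

# Toy point-light-style stimulus: "joint" dots whose positions oscillate
# over time. Invented purely for illustration.
JOINTS = ["head", "shoulder_l", "shoulder_r", "hip_l", "hip_r",
          "knee_l", "knee_r", "ankle_l", "ankle_r"]


def walker_frame(t: float) -> dict[str, tuple[float, float]]:
    """Dot positions at time t: a vertical stack with alternating sway."""
    frame = {}
    for i, joint in enumerate(JOINTS):
        phase = 0.0 if joint.endswith("_l") else math.pi  # limbs out of phase
        x = 0.3 * math.sin(2 * math.pi * t + phase)       # side-to-side sway
        y = 2.0 - 0.25 * i                                # rough body height
        frame[joint] = (x, y)
    return frame


def scramble(frame: dict, seed: int = 0) -> dict:
    """An 'altered display': the same dots, spatial structure destroyed."""
    rng = random.Random(seed)
    return {j: (rng.uniform(-1.0, 1.0), rng.uniform(0.0, 2.0)) for j in frame}


intact = walker_frame(t=0.25)
print(intact["ankle_l"], scramble(intact)["ankle_l"])
```

Humans perceive coherent biological motion even in a handful of such dots; the scrambled version illustrates one kind of altered display of the sort described above.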

The researchers argue that this limitation could have implications for human safety as AI systems are deployed more broadly in everyday settings.

“By contrast, current AI systems have no equivalent mechanism. They process inputs and generate outputs without any persistent internal state that regulates how they behave over time,” said Dr. Marco Iacoboni, professor in the Department of Psychiatry and Biobehavioral Sciences at the David Geffen School of Medicine and a senior author on the recent paper.

“This is not just a performance limitation, but also a safety limitation. Without internal costs or constraints, an AI system has no intrinsic reason to avoid overconfident errors, resist manipulation, or behave consistently.”

To address this issue, the paper proposes a “dual-embodiment framework.” The idea would be to enable AI systems to connect more directly with the external world while also incorporating internal “check-in” processes—loosely analogous to an inner voice—that could help regulate behavior.
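The paper does not spell out an implementation, but a minimal sketch of such a loop might pair an outward perception step with an inward check-in step, as below. All function names and thresholds here are assumptions, not the authors’ proposal.

```python
def perceive(world: dict) -> dict:
    """External embodiment: sample the outside world."""
    return {"obstacle_near": world["distance"] < 0.5}


def check_in(internal: dict, observation: dict) -> bool:
    """Internal embodiment: consult internal signals before committing."""
    return internal["confidence"] > 0.6 and not observation["obstacle_near"]


def step(world: dict, internal: dict) -> str:
    observation = perceive(world)        # the outward loop
    if check_in(internal, observation):  # the inward "check-in"
        return "act"
    internal["confidence"] *= 0.9        # hesitation leaves a lasting trace
    return "pause and reassess"


internal_state = {"confidence": 0.8}
print(step({"distance": 1.2}, internal_state))  # -> act
print(step({"distance": 0.3}, internal_state))  # -> pause and reassess
```

The point of the toy is the persistent internal state: unlike a stateless input-output mapping, the agent’s hesitation carries forward and shapes its later behavior.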

The authors also call for the development of new evaluation benchmarks. Rather than focusing solely on whether AI can identify objects or complete tasks, future assessments should examine whether systems can maintain stability under uncertainty and demonstrate self-regulation.
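The authors leave the design of such benchmarks open. One toy possibility, sketched below, is to perturb the same input repeatedly and score how stable a system’s answers remain; the perturbations and scoring scheme here are illustrative, not from the study.

```python
import random
from collections import Counter


def noisy_variants(prompt: str, n: int = 20, seed: int = 0) -> list[str]:
    """Superficially perturbed copies of one input."""
    rng = random.Random(seed)
    fillers = ["", " please", " right now", " (quick question)"]
    return [prompt + rng.choice(fillers) for _ in range(n)]


def stability_score(model, prompt: str) -> float:
    """Fraction of perturbed inputs yielding the modal answer (1.0 = stable)."""
    answers = [model(v) for v in noisy_variants(prompt)]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / len(answers)


def flaky(prompt: str) -> str:
    # Stand-in "model" whose answer flips with superficial phrasing changes.
    return "yes" if len(prompt) % 2 == 0 else "no"


print(f"stability: {stability_score(flaky, 'Is the door locked?'):.2f}")
```

A system with genuine self-regulation would score near 1.0 under such probing; a brittle one, like the stand-in above, would not.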

Chrissy Newton is a PR professional and the founder of VOCAB Communications. She currently appears on The Discovery Channel and Max and hosts the Rebelliously Curious podcast, which can be found on YouTube and on all audio podcast streaming platforms. Follow her on X: @ChrissyNewton, Instagram: @BeingChrissyNewton, and chrissynewton.com. To contact Chrissy with a story, please email chrissy @ thedebrief.org.