University of Pennsylvania researchers have unveiled HoloRadar, an AI-driven system that uses radio waves to allow robots to ‘see’ around corners without a direct line of sight.
Unlike other non-line-of-sight (NLOS) perception approaches, which rely on visible light, the new system works in variable lighting conditions, including total darkness.
The Penn research team behind HoloRadar suggests their approach could aid the development of safer autonomous driving systems. HoloRadar could also improve the performance of automated robotic platforms used in factories, warehouses, and other congested, high-traffic environments.
“HoloRadar is designed to work in the kinds of environments robots actually operate in,” explained Mingmin Zhao, Assistant Professor in Computer and Information Science (CIS) and senior author of the paper describing the system. “This system is mobile, runs in real time, and doesn’t depend on controlled lighting.”

Before HoloRadar, NLOS Systems Faced Technological Challenges
According to a statement from the Penn researchers, other teams have demonstrated systems capable of visualizing hidden obstacles, but those approaches require visible light to function correctly. That’s because those NLOS systems analyze the patterns made by shadows or other indirect reflections to ‘see’ around corners.
Some efforts have attempted to utilize radio signals, but those approaches relied on slow, bulky scanning equipment. Radio has also often been perceived as a disadvantage, since its longer wavelengths can limit resolution. However, the Penn team saw potential in those longer wavelengths.
“Because radio waves are so much larger than the tiny surface variations in walls, those surfaces effectively become mirrors that reflect radio signals in predictable ways,” study co-author and CIS doctoral student Haowen Lai explained.
In theory, these longer wavelengths should be able to bounce off walls, floors, and ceilings before carrying that information back to a robot that can translate it into a map of the hidden location, allowing it to see around corners.
“It’s similar to how human drivers sometimes rely on mirrors stationed at blind intersections,” says Zitong Lan, a doctoral student in Electrical and Systems Engineering (ESE) and co-author of the paper. “Because HoloRadar uses radio waves, the environment itself becomes full of mirrors, without actually having to change the environment.”
How AI Reverses the Reflection Process to Build a 3D Image of Hidden Objects
Before constructing the final HoloRadar system, the Penn team started by addressing the main limitation of these systems. Specifically, radio waves that bounce around multiple times before returning to the robot create what the team described as a “tangled set of reflections” that can stymie traditional signal-processing approaches.
“In some sense, the challenge is similar to walking into a room full of mirrors,” Lan said. “You see many copies of the same object reflected in different places, and the hard part is figuring out where things really are.”
To address this problem, the team built a hybrid model that combines machine learning with physics-based modeling. The process begins by enhancing the resolution of the raw radio signals. According to the team, this step helps the system identify multiple signal “returns” that correspond to the different reflection paths taken by the radio waves.
Next, HoloRadar’s core intelligence traces those reflected signals backward. The team said this step undoes the “mirror-like effects” of the hidden location and lets HoloRadar reconstruct the actual three-dimensional scene. The result is an AI that can distinguish between direct and indirect radio wave reflections, ultimately determining the correct physical location of objects and people hidden around the corner.
“Our system learns how to reverse that process in a physics-grounded way,” Lan explained.
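The geometric intuition behind this “unfolding” step can be illustrated with a simple sketch. When a radio wave bounces once off a flat wall, the return appears to come from a virtual point behind the wall, exactly as a mirror image would; reflecting that virtual point back across the wall plane recovers the true position. The sketch below is purely illustrative and not the team’s actual pipeline (the wall geometry, point coordinates, and function names are all assumptions for the example):

```python
import numpy as np

def reflect_across_plane(p, plane_point, plane_normal):
    """Mirror a 3D point across a plane given by a point on it and its normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(p - plane_point, n)          # signed distance from the plane
    return p - 2.0 * d * n                  # move twice that distance across it

# Sensor near the origin; a wall occupies the plane x = 2.
wall_point = np.array([2.0, 0.0, 0.0])
wall_normal = np.array([-1.0, 0.0, 0.0])    # normal facing back toward the sensor

# A single-bounce return seems to arrive from a "virtual" point behind the wall.
virtual_pt = np.array([3.0, 1.0, 0.5])      # apparent position (behind the wall)
real_pt = reflect_across_plane(virtual_pt, wall_point, wall_normal)
print(real_pt)  # [1.  1.  0.5] — the hidden object's actual location
```

The real problem is far harder than this toy case, since signals can bounce multiple times and the system must learn which returns are direct and which are mirrored, but each single unfolding step is exactly this kind of plane reflection.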
“Robots Need to See Beyond What’s Directly in Front of Them”
In a series of tests, the Penn team evaluated HoloRadar on a mobile robot. To simulate the environments where such robots are already in use, such as factories and warehouses, the test areas included hallways and corners.
According to the team’s statement, the HoloRadar-equipped robot “successfully reconstructed walls, corridors, and hidden human subjects” that were located outside the robot’s direct line of sight.
When discussing possible applications of their approach, the Penn team said HoloRadar is not designed to replace current options. Instead, they said their approach adds an “additional layer of perception” to robotic platforms already equipped with LIDAR to sense objects in their field of view.
“Robots and autonomous vehicles need to see beyond what’s directly in front of them,” Zhao explained. “This capability is essential to help robots and autonomous vehicles make safer decisions in real time.”
While the current HoloRadar system has been successful indoors, the team plans to explore outdoor environments such as urban streets and intersections. Such environments challenge robotic systems with longer distances and more dynamic conditions, which current approaches struggle to handle.
“This is an important step toward giving robots a more complete understanding of their surroundings,” Zhao said. “Our long-term goal is to enable machines to operate safely and intelligently in the dynamic and complex environments humans navigate every day.”
The study “Non-Line-of-Sight 3D Reconstruction with Radar” was presented at the 39th annual Conference on Neural Information Processing Systems (NeurIPS).
Christopher Plain is a Science Fiction and Fantasy novelist and Head Science Writer at The Debrief. Follow and connect with him on X, learn about his books at plainfiction.com, or email him directly at christopher@thedebrief.org.
