Engineers from the University of Pennsylvania’s School of Engineering and Applied Science have unveiled PanoRadar, an innovative radar-based imaging system designed to equip robots with vision beyond the limitations of traditional cameras and sensors.
PanoRadar enables robots to “see” their surroundings with a level of detail comparable to LiDAR, but using radio frequency (RF) waves rather than light. This approach means robots can operate effectively in conditions where optical sensors typically fail, such as low light, fog, or dusty environments.
The breakthrough could revolutionize industries relying on robotics, from healthcare and warehouse management to search and rescue missions.
In a paper set to be presented at the 2024 International Conference on Mobile Computing and Networking (MobiCom), researchers revealed PanoRadar, emphasizing the system’s ability to bring new robustness and accuracy to autonomous navigation.
“Our field tests across different buildings showed how radio sensing can excel where traditional sensors struggle,” Yifei (Freddy) Liu, an undergraduate research assistant and study co-author, said in a release. “The system maintains precise tracking through smoke and can even map spaces with glass walls.”
Robots equipped with cameras or LiDAR have become familiar in warehouses and factories; however, these sensors have limitations.
Cameras require good lighting and are prone to interference from particles in the air, while LiDAR sensors, which use lasers, can be ineffective in environments with dust, smoke, or extreme lighting conditions.
In contrast, researchers say PanoRadar can harness the unique properties of radio waves to circumvent these issues, offering a resilient and high-resolution solution for robotic vision.
PanoRadar operates with a single-chip millimeter-wave radar that rotates on a motorized platform, building a dense, cylindrical array of virtual antennas as it turns.
As the radar rotates, it emits and collects radio waves, and the system combines these measurements into a comprehensive 3D view of the environment. According to the study, PanoRadar achieves a level of imaging detail that rivals LiDAR, capturing complex features of an environment, such as walls, floors, and human figures, in real time with impressive accuracy.
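For readers curious how a single spinning radar chip can act like a large antenna array, the Python sketch below shows the geometry: each azimuth step of the rotation contributes a vertical column of virtual elements, tracing out a cylinder, and a simple delay-and-sum beamformer can then steer the array toward any 3D point. All parameter values and the beamforming method here are illustrative assumptions made for this article, not specifics from the Penn team’s paper.

```python
import numpy as np

# Hypothetical parameters -- assumptions for illustration, not from the paper
F_C = 77e9                  # common mmWave carrier frequency (Hz)
C = 3e8                     # speed of light (m/s)
WAVELENGTH = C / F_C
RADIUS = 0.05               # assumed rotation radius of the radar chip (m)
N_ROT = 512                 # assumed azimuth positions sampled per rotation
N_VERT = 8                  # assumed physical antennas stacked vertically
V_SPACING = WAVELENGTH / 2  # assumed half-wavelength vertical spacing

# Virtual antenna positions: each azimuth step of the rotation contributes
# one vertical column of elements, so a full turn traces out a cylinder.
az = np.linspace(0, 2 * np.pi, N_ROT, endpoint=False)
z = np.arange(N_VERT) * V_SPACING
azg, zg = np.meshgrid(az, z, indexing="ij")
positions = np.stack(
    [RADIUS * np.cos(azg), RADIUS * np.sin(azg), zg], axis=-1
).reshape(-1, 3)  # (N_ROT * N_VERT, 3)

def delay_and_sum(samples, positions, point):
    """Coherently sum per-element samples toward a 3D point.

    `samples` holds one complex narrowband snapshot per virtual element;
    compensating each element's round-trip phase before summing steers
    the array's sensitivity toward `point` (classic delay-and-sum).
    """
    dists = np.linalg.norm(positions - point, axis=1)
    phase = np.exp(1j * 4 * np.pi * dists / WAVELENGTH)  # round-trip phase
    return np.abs(np.sum(samples * phase))

# Example: beamformed power at a voxel 3 m ahead and 1 m up.
snapshot = np.ones(positions.shape[0], dtype=complex)  # placeholder data
power = delay_and_sum(snapshot, positions, np.array([3.0, 0.0, 1.0]))
```

Sweeping such a beamformer over a grid of voxels is one standard way to turn raw array measurements into the kind of 3D intensity map described above.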
Beyond its hardware design, PanoRadar uses machine learning algorithms to enhance its imaging detail. The models are trained specifically to compensate for limitations inherent in RF sensing, such as lower resolution in the vertical dimension.
These algorithms allow PanoRadar to generate high-resolution 3D images of its surroundings, providing data for complex visual recognition tasks like object detection and semantic segmentation.
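As a rough sketch of what enhancing vertical resolution with machine learning can look like, the toy PyTorch model below upsamples only the elevation axis of a beamformed RF intensity map. Every layer choice here is an assumption made for illustration; the paper’s actual network architecture is not reproduced.

```python
import torch
import torch.nn as nn

class ElevationSR(nn.Module):
    """Toy CNN that upsamples the elevation axis of an RF heatmap.

    Input:  (batch, 1, n_elev, n_azimuth) beamformed RF intensity map.
    Output: (batch, 1, n_elev * scale, n_azimuth) sharpened map.
    Layer sizes are illustrative, not the paper's architecture.
    """
    def __init__(self, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Upsample only the elevation (height) dimension.
            nn.Upsample(scale_factor=(scale, 1), mode="bilinear",
                        align_corners=False),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = ElevationSR()
rf_map = torch.rand(1, 1, 16, 512)  # coarse elevation, fine azimuth
hi_res = model(rf_map)              # -> (1, 1, 64, 512)
```

The key idea is that azimuth resolution comes cheaply from the rotation, so the learned model only needs to recover detail along the vertical axis, where the physical array is small.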
“The key innovation is in how we process these radio wave measurements,” Dr. Mingmin Zhao, an assistant professor in the Computer and Information Science department at the University of Pennsylvania and study co-author, explained. “Our signal processing and machine learning algorithms are able to extract rich 3D information from the environment.”
The development of PanoRadar stems from a clear need: equipping robots with the ability to navigate and operate in environments where visual data may be impaired.
This need is particularly pronounced in fields such as search and rescue, where robots are often deployed in smoke-filled or low-visibility areas. With PanoRadar’s RF-based imaging capabilities, autonomous systems can operate more reliably, accurately distinguishing between objects, obstacles, and even people in challenging conditions.
The innovation is built on years of RF and imaging research and leverages commercially available hardware to keep the system compact and affordable.
As Penn Engineering’s team notes, PanoRadar uses commercially available, off-the-shelf components—such as a single-chip millimeter-wave radar and a standard motor—making it both cost-effective and mobile, ideal for real-world robotic applications.
In tests conducted across 12 buildings on the Penn campus, PanoRadar consistently provided high-accuracy 3D mapping data, showing strong performance even when the robot was in motion. This reliability is a key feature for tasks where mobility is crucial, such as robots assisting in logistics or navigating crowded spaces.
Initial research focused on testing PanoRadar in indoor environments. However, researchers say the system holds promise for other settings, including autonomous driving scenarios. “These applications remain exciting topics for future studies,” they wrote.
One of PanoRadar’s critical achievements is its seamless integration of machine learning, which enhances the precision and reliability of the RF imaging system.
Traditional RF imaging systems have struggled with creating clear, high-resolution images due to the limitations of RF sensors, which generally have lower resolution than optical sensors.
However, by training machine learning models on paired LiDAR and RF data, PanoRadar’s developers found a way to bridge this gap, enabling RF images to achieve near-LiDAR accuracy.
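In code, that training scheme reduces to ordinary supervised learning with LiDAR as the teacher: the RF model’s output is compared against co-registered LiDAR depth. The sketch below uses a stand-in network, an L1 loss, and random tensors in place of a real paired dataset, so all of its particulars are illustrative assumptions rather than the authors’ method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in network; in practice this would be the full RF-to-3D model.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=(4, 1), mode="bilinear", align_corners=False),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(rf_map, lidar_depth):
    """One supervised step: the LiDAR scan acts as the training target."""
    optimizer.zero_grad()
    pred = model(rf_map)                 # RF map -> dense depth-like map
    loss = F.l1_loss(pred, lidar_depth)  # LiDAR serves as ground truth
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for a real loader of paired (RF, LiDAR) frames.
loss = train_step(torch.rand(2, 1, 16, 512), torch.rand(2, 1, 64, 512))
```

Because both sensors can ride on the same robot, collecting such paired data is cheap: the LiDAR supplies labels automatically, with no manual annotation.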
The machine learning aspect of PanoRadar does more than improve image resolution. It also allows the system to interpret its surroundings at a level previously unattainable for RF-based systems.
For instance, the algorithms can recognize the typical structures and textures of indoor spaces, like walls, stairs, and floors, even in environments with little to no light. This advancement opens the door to RF-based visual recognition applications previously achievable only with high-end optical sensors.
PanoRadar’s potential extends far beyond laboratory testing, with a range of applications across diverse industries. For example, in healthcare, robots equipped with PanoRadar could safely navigate hospital corridors at night, delivering supplies without disturbing patients.
In warehouse management, robots could autonomously traverse dusty or cluttered spaces, performing inventory management with higher efficiency than before. Meanwhile, in search and rescue, autonomous systems could enter buildings filled with smoke, locating people or exits when visibility is severely reduced.
In recent years, businesses and educational institutions have begun experimenting with autonomous security robots to enhance safety, surveillance, and operational efficiency. These robots patrol hallways, monitor perimeters, and can respond swiftly to unusual activities.
With PanoRadar, autonomous security robots could perform effectively in well-lit spaces and areas with limited visibility, such as dimly lit parking lots, enhancing their utility in real-world security applications across campuses, offices, and large industrial sites.
Future iterations of PanoRadar could incorporate even higher-resolution RF sensors and optimized machine-learning models, reaching new levels of detail and expanding the system’s applicability further.
“For high-stakes tasks, having multiple ways of sensing the environment is crucial,” Dr. Zhao said. “Each sensor has its strengths and weaknesses, and by combining them intelligently, we can create robots that are better equipped to handle real-world challenges.”
By releasing PanoRadar’s code and dataset, the research team encourages further innovation and development in RF-based vision. They aim to build a foundation for other researchers and developers to create even more refined RF imaging systems.
“We anticipate that this work, along with the released dataset, will encourage further research and development in RF-based imaging technologies, providing a robust yet cost-effective alternative to existing imaging technologies such as LiDAR and cameras,” the researchers concluded.
Tim McMillan is a retired law enforcement executive, investigative reporter and co-founder of The Debrief. His writing typically focuses on defense, national security, the Intelligence Community and topics related to psychology. You can follow Tim on Twitter: @LtTimMcMillan. Tim can be reached by email: tim@thedebrief.org or through encrypted email: LtTimMcMillan@protonmail.com