
Radio Waves Redefine Robotic Perception

Researchers at the University of Pennsylvania School of Engineering and Applied Science (Penn Engineering) have introduced PanoRadar, a novel tool designed to enhance robotic vision by converting basic radio waves into detailed, 3D representations of the environment.

Freddy Liu (EAS’25), Haowen Lai (Gr’28), and Mingmin Zhao, Assistant Professor in CIS, from left, set up a robot equipped with PanoRadar for a test run. Image Credit: Sylvia Zhang.

Developing reliable perception systems for robots has been challenging due to the need to operate in harsh weather and difficult environments. Traditional light-based vision sensors, such as cameras and LiDAR (Light Detection and Ranging), struggle in conditions like dense smoke or fog.

However, nature shows that vision is not necessarily limited by light. Many organisms have evolved to sense their environment without relying on light. For example, bats navigate by interpreting sound wave echoes, and sharks detect electrical fields generated by the movements of their prey.

Unlike light waves, radio waves have much longer wavelengths, enabling them to penetrate smoke, fog, and certain materials—capabilities that surpass human vision. Despite this, robots have typically relied on a limited range of tools: cameras and LiDAR, which provide high-resolution images but fail in challenging conditions, or traditional radar, which can penetrate walls and obstructions but produces lower-resolution images.

A New Way to See

Our initial question was whether we could combine the best of both sensing modalities: the robustness of radio signals, which are resilient to fog and other challenging conditions, and the high resolution of visual sensors.

Mingmin Zhao, Assistant Professor, Computer and Information Science, University of Pennsylvania

In a paper set to be presented at the 2024 International Conference on Mobile Computing and Networking (MobiCom), Zhao and his team from the Wireless, Audio, Vision, and Electronics for Sensing (WAVES) Lab and the Penn Research in Embedded Computing and Integrated Systems Engineering (PRECISE) Center—comprising doctoral student Haowen Lai, recent master’s graduate Gaoxiang Luo, and undergraduate research assistant Yifei (Freddy) Liu—detail how PanoRadar utilizes radio waves and artificial intelligence (AI) to help robots navigate in challenging environments, such as smoke-filled buildings or foggy roads.

Spinning Like a Lighthouse

PanoRadar functions similarly to a lighthouse, sweeping a beam in a circular motion to scan the full horizon. This system features a rotating vertical array of antennas that survey the environment. As these antennas rotate, they emit radio waves and detect reflections from surrounding objects, akin to a lighthouse beam revealing ships and coastal landmarks.

Enhanced by AI, PanoRadar advances beyond basic scanning. Unlike a lighthouse that merely illuminates areas as it rotates, PanoRadar intelligently integrates measurements from every rotation angle to improve imaging resolution. Although the sensor itself is significantly more affordable than conventional LiDAR systems, this rotation technique produces a dense array of virtual measurement points, enabling PanoRadar to achieve an imaging resolution on par with LiDAR.
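
To make the idea of a "dense array of virtual measurement points" concrete, the sketch below simulates a single antenna swept around a circle and coherently combines the echoes with delay-and-sum backprojection, the textbook version of this kind of aperture synthesis. It is a minimal illustration, not the authors' implementation; the 77 GHz carrier, 5 cm rotation radius, and single point reflector are all assumptions.

import numpy as np

c = 3e8                      # speed of light (m/s)
fc = 77e9                    # assumed mmWave carrier frequency (Hz)
wavelength = c / fc          # ~3.9 mm

# Virtual measurement positions: one antenna at radius r sweeping a full circle.
r_array = 0.05               # assumed 5 cm rotation radius
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
positions = np.stack([r_array * np.cos(angles),
                      r_array * np.sin(angles)], axis=1)    # shape (360, 2)

# Simulate the echo phase from a single point reflector at (2 m, 0.5 m).
target = np.array([2.0, 0.5])
dists = np.linalg.norm(positions - target, axis=1)
echoes = np.exp(-1j * 4 * np.pi * dists / wavelength)       # round-trip phase

def backproject(pixel, positions, echoes):
    """Delay-and-sum: re-align each echo's phase for this pixel and sum."""
    d = np.linalg.norm(positions - pixel, axis=1)
    steering = np.exp(1j * 4 * np.pi * d / wavelength)
    return np.abs(np.sum(echoes * steering)) / len(echoes)

# The combined response peaks at the true reflector location and falls off elsewhere.
print(backproject(np.array([2.0, 0.5]), positions, echoes))  # ~1.0
print(backproject(np.array([2.0, 0.8]), positions, echoes))  # much smaller

The more virtual positions the rotation contributes, the sharper this peak becomes, which is why a single inexpensive rotating sensor can approach the angular resolution of a much denser (and costlier) static array.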

The key innovation is in how we process these radio wave measurements. Our signal processing and machine learning algorithms are able to extract rich 3D information from the environment.

Mingmin Zhao, Assistant Professor, Computer and Information Science, University of Pennsylvania

Teaching the AI

A major challenge for Zhao’s team was developing algorithms that could sustain high-resolution imaging while the robot is in motion.

To achieve LiDAR-comparable resolution with radio signals, we needed to combine measurements from many different positions with sub-millimeter accuracy. This becomes particularly challenging when the robot is moving, as even small motion errors can significantly impact the imaging quality.

Haowen Lai, Doctoral Student, University of Pennsylvania
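
Lai's point about sub-millimeter accuracy follows directly from how short millimeter-wave wavelengths are. The back-of-the-envelope sketch below assumes a 77 GHz carrier (a typical mmWave radar frequency, not a figure from the paper) and shows how quickly a small position error corrupts the echo phase that coherent combining relies on.

import numpy as np

c = 3e8
fc = 77e9                    # assumed mmWave carrier frequency
wavelength = c / fc          # ~3.9 mm

def round_trip_phase_error(position_error_m):
    """Phase error (radians) caused by mis-estimating the antenna position."""
    return 4 * np.pi * position_error_m / wavelength

for err_mm in (0.1, 0.5, 1.0):
    phi = round_trip_phase_error(err_mm * 1e-3)
    print(f"{err_mm} mm error -> {phi:.2f} rad ({np.degrees(phi):.0f} deg)")

# Even a 0.5 mm position error shifts the echo phase by roughly 90 degrees,
# enough to wreck coherent combining unless the robot's motion is estimated
# and each echo is corrected, e.g. by multiplying it by exp(-1j * phi).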

The team also tackled the challenge of enabling their system to effectively interpret the visual data it captures.

Luo says, “Indoor environments have consistent patterns and geometries. We leveraged these patterns to help our AI system interpret the radar signals, similar to how humans learn to make sense of what they see.”

In the training phase, the machine learning model used LiDAR data to validate its interpretations against real-world information, allowing it to refine its accuracy over time.
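
The paragraph above describes cross-modal supervision: radar measurements go in, and co-registered LiDAR depth serves as the training target. The following sketch shows that pattern in its simplest form; the toy network, tensor shapes, and L1 loss are illustrative assumptions rather than the authors' architecture.

import torch
import torch.nn as nn

class RadarToDepth(nn.Module):
    def __init__(self):
        super().__init__()
        # Toy encoder over an (elevation x azimuth) radar heatmap.
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, radar):
        return self.net(radar)          # predicted depth, same layout as input

model = RadarToDepth()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Stand-in batch: radar heatmaps and co-registered LiDAR depth maps.
radar_batch = torch.rand(4, 1, 64, 512)
lidar_depth = torch.rand(4, 1, 64, 512)

for step in range(3):                   # a few toy optimization steps
    pred = model(radar_batch)
    loss = loss_fn(pred, lidar_depth)   # LiDAR acts as the ground truth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(step, loss.item())

Once trained this way, the model no longer needs the LiDAR at inference time: the radar input alone is enough to produce the dense 3D output.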

“Our field tests across different buildings showed how radio sensing can excel where traditional sensors struggle,” says Liu. “The system maintains precise tracking through smoke and can even map spaces with glass walls.”

Since radio waves are less obstructed by airborne particles, PanoRadar can detect objects that LiDAR often misses, such as glass surfaces. Its high resolution further enables precise detection of people, a crucial capability for applications like autonomous vehicles and rescue operations in challenging environments.

In future work, the team intends to investigate how PanoRadar could integrate with other sensing technologies, such as cameras and LiDAR, to build more comprehensive, multi-modal perception systems for robots. They are also broadening their testing to encompass different robotic platforms and autonomous vehicles.

Zhao says, “For high-stakes tasks, having multiple ways of sensing the environment is crucial. Each sensor has its strengths and weaknesses, and by combining them intelligently, we can create robots that are better equipped to handle real-world challenges.”

This research was carried out at the University of Pennsylvania School of Engineering and Applied Science, with support from a faculty startup fund.

Giving Robots Superhuman Vision Using Radio Signals

PanoRadar works like a lighthouse, with a rotating sensor that emits radio waves, whose echoes are processed by AI into an accurate, 3D image of the surroundings. Video Credit: Haowen Lai and Freddy Liu.
