We perform a total of 4 case studies on both unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs). Visit our Case Studies page to see more, including time-synchronized videos.
We introduce RadCloud, a novel framework for unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) that integrates radar signal processing with a deep learning model to directly convert lower-resolution radar data into higher-resolution, lidar-like 2D point clouds in real time.
We optimize our framework specifically for the low-cost, resource-constrained processors commonly found on UGVs and UAVs by using a radar configuration with 4x lower resolution and a deep learning model with 2.25x fewer parameters than previous works.
We utilize a novel chirp-based approach that is more resilient to rapid movements, such as sudden spins and turns, commonly experienced by robots exploring unknown environments; a rough illustration is sketched below.
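As a rough illustration of the idea (not the exact RadCloud processing chain), a range-azimuth heatmap can be formed from a single chirp's IQ samples with two FFTs, so the measurement does not rely on coherence across an entire multi-chirp frame and is therefore less sensitive to fast platform motion. The function name, array sizes, and range/angle bin counts below are hypothetical.

```python
import numpy as np

def range_azimuth_from_chirp(chirp_iq, range_bins=64, angle_bins=64):
    """Form a range-azimuth heatmap from a single chirp.

    chirp_iq: complex array of shape (num_virtual_antennas, num_adc_samples),
              i.e. one chirp received across the virtual antenna array.
    Returns a (range_bins, angle_bins) magnitude heatmap.
    """
    # Range FFT along the fast-time (ADC sample) axis.
    range_fft = np.fft.fft(chirp_iq, n=range_bins, axis=1)

    # Angle FFT across the virtual antenna axis, zero-padded to angle_bins.
    angle_fft = np.fft.fftshift(
        np.fft.fft(range_fft, n=angle_bins, axis=0), axes=0
    )

    # Magnitude heatmap with range along rows and azimuth along columns.
    return np.abs(angle_fft).T

# Example with simulated data: 8 virtual antennas, 64 ADC samples per chirp.
chirp = (np.random.randn(8, 64) + 1j * np.random.randn(8, 64)).astype(np.complex64)
heatmap = range_azimuth_from_chirp(chirp)
print(heatmap.shape)  # (64, 64)
```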
We demonstrate the real-world practicality and feasibility of our framework using commercially available drones and ground vehicles.
RadCloud is a complete framework for obtaining high-resolution, lidar-like 2D point clouds from low-resolution radar data. To make this conversion, we utilize the popular U-Net segmentation model, which we optimize for CPU-only devices. Additionally, we implement everything in a ROS-compatible framework to enable real-time streaming of raw radar data and generation of high-resolution point clouds, even on devices with limited computing capability. Check out the Project Overview page to learn more about how we implemented everything.
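For a sense of how such a ROS-compatible inference pipeline can be wired together, here is a minimal sketch of a node that subscribes to radar heatmaps, runs a pretrained model on the CPU, and publishes lidar-like 2D point clouds. The topic names, message types, model file, and range/azimuth resolutions are assumptions for illustration only; the actual RadCloud implementation is in the linked repository.

```python
import numpy as np
import rospy
import torch
from std_msgs.msg import Header
from sensor_msgs.msg import Image, PointCloud2
from sensor_msgs import point_cloud2

class RadCloudStyleNode:
    """Placeholder node: radar heatmap in, lidar-like 2D point cloud out.
    Topic names and message types are hypothetical, not the official interface."""

    def __init__(self, model_path="unet_cpu.pt"):
        # Assumes a TorchScript-exported U-Net-style model optimized for CPU inference.
        self.model = torch.jit.load(model_path, map_location="cpu").eval()
        self.pub = rospy.Publisher("/radcloud/points", PointCloud2, queue_size=1)
        rospy.Subscriber("/radar/range_azimuth", Image, self.callback, queue_size=1)

    def callback(self, msg):
        # Interpret the incoming image as a single-channel float32 range-azimuth heatmap.
        heatmap = np.frombuffer(msg.data, dtype=np.float32).reshape(msg.height, msg.width)
        with torch.no_grad():
            x = torch.from_numpy(heatmap).unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
            mask = torch.sigmoid(self.model(x))[0, 0].numpy() > 0.5

        # Convert occupied range-azimuth cells into 2D Cartesian points (z = 0).
        ranges, angles = np.nonzero(mask)
        r = ranges * 0.1                                    # assumed 0.1 m range bins
        theta = np.deg2rad(angles - mask.shape[1] / 2.0)    # assumed 1 deg azimuth bins
        points = np.stack([r * np.cos(theta), r * np.sin(theta), np.zeros_like(r)], axis=1)

        header = Header(stamp=msg.header.stamp, frame_id="radar")
        self.pub.publish(point_cloud2.create_cloud_xyz32(header, points.tolist()))

if __name__ == "__main__":
    rospy.init_node("radcloud_style_node")
    RadCloudStyleNode()
    rospy.spin()
```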
David Hunt - Duke University
Shaocheng Lou - Duke University
Amir Khazraei - Duke University
Xiao Zhang - Duke University
Spencer Hallyburton - Duke University
Tingjun Chen - Duke University
Miroslav Pajic - Duke University
Our paper can be accessed at this link: RadCloud Paper.
Our dataset and code can be accessed at our GitHub repository or by checking out the Code and Datasets page.
Please see the contact information below for any further questions, clarifications, or inquiries.