RESEARCH STATEMENTS
Minseong Choi*, Seungho Han*, Jeonguk Kang, Seunhoon Yang, Minyoung Lee, Keun Ha Choi and Kyung-Soo Kim. "4D Radar-Camera Based Vector Map SLAM Using Dynamic Object Removal Mask", IEEE Transactions on Intelligent Vehicles, 2024 (* : equal contribution, under review)
[paper link] [youtube]
1 - A. Research Background
Why Vector Map?
The vector map, a simplified representation of the HD map that includes road information (lanes, road marks, ...), is required for autonomous driving for the following reasons:
Global path planning
Lane boundary cost function in potential field
Localization assistance with lane tracking
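As a sketch of the second use above, a lane boundary can enter a potential-field planner as a cost that is minimal at the lane center and grows toward each boundary. The function below is an illustrative assumption of ours (the names and the quadratic form are not from the paper):

```python
import numpy as np

def lane_boundary_cost(y, y_left, y_right, w=1.0):
    """Quadratic potential that is minimal at the lane center and grows
    toward the lane boundaries.

    y: lateral position(s) in the lane frame [m]
    y_left, y_right: lateral positions of the left/right boundaries [m]
    w: weight of the potential term
    """
    center = 0.5 * (y_left + y_right)
    half_width = 0.5 * (y_right - y_left)
    u = (np.asarray(y) - center) / half_width   # 0 at center, +/-1 at boundaries
    return w * u ** 2

# Zero cost at the lane center, exactly w at each boundary.
costs = lane_boundary_cost(np.array([-1.5, 0.0, 1.5]), y_left=-1.5, y_right=1.5)
```

In a full potential field this term would be summed with obstacle and goal potentials before gradient-based planning.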
Why 4D Radar?
The recently emerging high-resolution 4D imaging radar sensor, which has been drawing attention in the academic community, complements cameras for the following reasons:
3D depth information (cheaper than LiDAR sensor)
Doppler velocity information
Robustness in harsh environments
1 - B. Research Objectives
Research on a real-time vector map SLAM for autonomous driving based on camera and 4D radar, composed of
1. Improved 4D Radar-Visual Odometry
with enhanced precision and robustness in dynamic environments such as urban areas
2. Vector Map Generation and Loop Closing via 4D Radar and Camera
with enhanced mapping precision through ground plane estimation
with minimized trajectory drift error during loop closure
1 - C. Visual-4D Radar Odometry Using Dynamic Object Removal Masks (DORM)
I proposed a robust and complementary visual-4D radar odometry for dynamic environments such as urban areas. Monocular visual odometry suffers from metric scale ambiguity and vulnerability to moving features. Dynamic Object Removal Masks (DORM) were proposed to reject moving features by drawing circles around points, identified as dynamic from 4D radar Doppler velocity measurements, projected onto the query image. As shown in the video, the proposed method exhibits better robustness in dynamic environments than conventional 5-point RANSAC visual odometry. Additionally, the scale ambiguity issue was addressed through ego-velocity estimation from the 4D radar. The proposed method was validated to outperform other SLAM techniques in real-world implementation experiments. Please refer to our paper for the detailed methodology and results. [paper link] [youtube]
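The two radar-side ideas above can be sketched as follows. This is a simplified illustration, not the paper's implementation: radar points are assumed to be already expressed in the camera frame (identity extrinsics), and the threshold, circle radius, and function names are our own choices.

```python
import numpy as np

def estimate_ego_velocity(points, doppler):
    """Least-squares ego-velocity from radar Doppler (static-world model).

    For a static target the measured radial velocity is
        v_r = -u^T v_ego,  with u = p / ||p|| the unit direction to the target.
    points: (N, 3) radar points, doppler: (N,) radial velocities [m/s].
    """
    U = points / np.linalg.norm(points, axis=1, keepdims=True)
    v_ego, *_ = np.linalg.lstsq(-U, doppler, rcond=None)
    return v_ego

def dynamic_mask(points, doppler, v_ego, K, image_shape, radius=20, thresh=0.5):
    """DORM-style sketch: a binary image mask with circles drawn around
    projected radar points whose Doppler disagrees with the ego-motion
    prediction (i.e. likely moving targets)."""
    U = points / np.linalg.norm(points, axis=1, keepdims=True)
    residual = np.abs(doppler + U @ v_ego)       # ~0 for static targets
    mask = np.zeros(image_shape, dtype=bool)
    ys, xs = np.mgrid[0:image_shape[0], 0:image_shape[1]]
    for p, r in zip(points, residual):
        if r < thresh or p[2] <= 0:              # static, or behind the camera
            continue
        uvw = K @ p                              # pinhole projection
        u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
        mask |= (xs - u) ** 2 + (ys - v) ** 2 <= radius ** 2
    return mask

# Example: 5 m/s forward motion plus one moving target ahead of the camera.
rng = np.random.default_rng(0)
pts = rng.uniform([-10.0, -2.0, 2.0], [10.0, 2.0, 30.0], size=(50, 3))
dop = -(pts / np.linalg.norm(pts, axis=1, keepdims=True)) @ np.array([5.0, 0.0, 0.0])
v_est = estimate_ego_velocity(pts, dop)

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
all_pts = np.vstack([pts, [[0.0, 0.0, 5.0]]])
all_dop = np.append(dop, 3.0)                    # moving target: inconsistent Doppler
mask = dynamic_mask(all_pts, all_dop, v_est, K, (480, 640))
```

The ego-velocity solve also fixes the monocular scale, since it yields a metric translation per frame.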
1 - D. Vector Mapping through IPM via Vanishing Point Corrected Ground Plane
Most vector mapping studies utilize IPM (Inverse Perspective Mapping), which requires knowledge or estimation of the ground plane. While the ground plane can be estimated from 4D radar data, the estimate is not accurate enough for IPM due to noise caused by multi-path reflections. Therefore, we corrected the ground plane based on vanishing point tracking. Our proposed method showed quantitatively better performance than conventional methods, using LiDAR-based ground plane estimation as a pseudo-ground truth. We performed real-time mapping of a 3D vector map using IPM on lane masks detected in 2D images. As depicted in the figure, the generated vector maps are sufficient for leveraging loop matching, which is detailed in the next section. Please refer to our paper for the detailed methodology and results. [paper link] [youtube]
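The geometric core of IPM, and one simple way a vanishing point can constrain the ground plane, can be sketched as below. This is our own minimal illustration under a pinhole model (camera at the origin, plane n^T X + h = 0), not the paper's correction scheme:

```python
import numpy as np

def ipm_backproject(px, K, n, h):
    """Back-project an image pixel onto the ground plane (IPM core).

    Ground plane in the camera frame: n^T X + h = 0, with unit normal n
    and camera height h above the road. px: (u, v) pixel, K: 3x3 intrinsics.
    Returns the 3D intersection point in the camera frame.
    """
    ray = np.linalg.inv(K) @ np.array([px[0], px[1], 1.0])
    lam = -h / (n @ ray)               # scale at which the ray meets the plane
    return lam * ray

def correct_normal_with_vp(n, K, vp):
    """Single-constraint vanishing-point correction: the lane direction
    d ~ K^-1 [vp, 1] lies in the ground plane, so the normal must satisfy
    n^T d = 0; project that component out of a noisy normal estimate."""
    d = np.linalg.inv(K) @ np.array([vp[0], vp[1], 1.0])
    d /= np.linalg.norm(d)
    n_corr = n - (n @ d) * d
    return n_corr / np.linalg.norm(n_corr)

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
n_true, h = np.array([0.0, -1.0, 0.0]), 1.5    # flat road, 1.5 m camera height
X = ipm_backproject((320.0, 315.0), K, n_true, h)   # a lane pixel below the horizon

n_noisy = np.array([0.05, -1.0, 0.1])
n_noisy /= np.linalg.norm(n_noisy)
n_corr = correct_normal_with_vp(n_noisy, K, (320.0, 240.0))  # straight-ahead VP
```

Applying `ipm_backproject` to every lane-mask pixel, with the corrected normal, yields the metric 3D lane points that are accumulated into the vector map.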
1 - E. Loop Closing Pose Graph Optimization Utilizing Vector Map and 4D Radar Z Projection Image
Loop detection for pose graph optimization is a crucial part of SLAM for minimizing trajectory drift. However, relying solely on vector maps is insufficient for loop detection due to their similar, duplicated shapes. Therefore, additional 4D radar data is utilized for loop detection, given its wider FOV. Because the 4D radar, unlike LiDAR, only looks forward, we adopted a Z projection image for encoding 4D radar data into the database. We introduced double-check loop detection using both the 4D radar (coarse) and the vector map (fine), followed by matching with the vector map for elaborate pose graph optimization. Experimental results validated that the proposed double-check loop detection outperforms conventional strategies. Please refer to our paper for the detailed methodology and results. [paper link] [youtube]
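A minimal sketch of the coarse stage follows, with a BEV encoding and a similarity score of our own choosing (the paper's actual descriptor and matching may differ):

```python
import numpy as np

def z_projection_image(points, grid=64, extent=50.0):
    """Encode a 4D radar scan as a bird's-eye-view "Z projection" image:
    each cell stores the maximum z of the points that fall into it.
    points: (N, 3) in the vehicle frame; extent: half-size of the map [m]."""
    img = np.zeros((grid, grid))
    ij = np.floor((points[:, :2] + extent) / (2 * extent) * grid).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < grid).all(axis=1)
    for (i, j), z in zip(ij[ok], points[ok, 2]):
        img[j, i] = max(img[j, i], z)
    return img

def coarse_loop_score(img_a, img_b):
    """Zero-mean normalized cross-correlation between two Z projection
    images; a high score marks a coarse loop candidate that is then
    verified against the vector map (the fine stage)."""
    a, b = img_a - img_a.mean(), img_b - img_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# A revisited place (slightly shifted scan) scores higher than a new one.
rng = np.random.default_rng(1)
scan_a = rng.uniform([-40.0, -40.0, 0.0], [40.0, 40.0, 3.0], size=(300, 3))
scan_b = scan_a + [0.5, 0.0, 0.0]
img_a, img_b = z_projection_image(scan_a), z_projection_image(scan_b)
```

Only candidates passing both the coarse radar check and the fine vector-map check would feed a loop edge into the pose graph.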
Minseong Choi, Seunhoon Yang, Seungho Han, Minyoung Lee, Keun Ha Choi and Kyung-Soo Kim. "MSC-RAD4R: ROS-Based Automotive Dataset With 4D Radar", IEEE Robotics and Automation Letters, 2023.
2 - A. Research Background
Recently, several 4D radar datasets have been published owing to the sensor's high resolution and robustness in extreme weather conditions. However, most recent 4D radar datasets lack the odometry sensors that are advantageous for 4D radar odometry research. To tackle this problem, we present a 4D radar dataset with various odometry sensors based on the robot operating system (ROS) framework, called MSC-RAD4R, which stands for Motivated for SLAM in City, ROS-based Automotive Dataset With 4D Radar. At a glance, our dataset includes 98,786 pairs of stereo images, 60,562 frames of LiDAR data, 90,864 frames of 4D radar data, 60,570 frames of RTK-GPS data, 60,559 frames of GPS data, 1,211,486 frames of IMU data and 6,057,276 frames of wheel data, covering approximately 51.6 km and 100 minutes of automotive data in various environments including day, night, snow and smoke.
2 - B. Sensor Configuration
2 - C. Sensor Calibration
Multi-sensor calibration is important for sensor fusion algorithms. The intrinsic parameters, distortion coefficients and extrinsic parameters of the stereo camera are obtained from the MATLAB computer vision toolbox. The intrinsic and extrinsic parameters of the IMU are calibrated with the imu tools package and the Kalibr package. For LiDAR-camera calibration, we proposed a method that is applicable to both high- and low-resolution LiDAR using a tilted plate. Also, 4D radar-camera calibration was performed through consecutive triangular reflectors. [paper link] [youtube] [dataset homepage]
2 - D. Various Environments
A vector map SLAM for autonomous driving via camera and 4D radar was suggested, composed of
1. Dynamic Object Removal Masks (DORM) based Visual-4D Radar Odometry,
with enhanced robustness in dynamic environments by generating masks based on dynamic objects detected by the 4D radar
with the scale ambiguity issue of monocular visual odometry addressed through ego-velocity estimation from the 4D radar
2. Camera and 4D Radar Based Vector Map Generation and Loop Closing,
with the vector map generated from a 4D radar ground plane whose accuracy is enhanced by vanishing point correction
with more precise loop closing achieved by utilizing the Z projection image from the 4D radar and the vector map.
About Dynamic Object Removal Masks (DORM) based Visual-4D Radar Odometry,
Optimization techniques such as bundle adjustment are required to further improve performance.
A more precise 4D radar denoising technique is required to address noise from multi-path reflections.
About Camera and 4D Radar Based Vector Map Generation and Loop Closing,
More accurate lane/road marking detection including semantic information (e.g. crosswalks, arrows, stop lines, center lines, ...) is required.
A more accurate loop matching method is required (even for similar vector maps).
Adding an ego-velocity factor to the pose graph optimization is required to enhance performance.
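As a toy illustration of the last item, an ego-velocity factor can be added to pose graph optimization as one more residual per edge. The sketch below is a deliberately simplified linear problem over 2D positions (real pose graphs optimize over SE(3) poses); the weights and names are our own assumptions:

```python
import numpy as np

def optimize_positions(n, odom, loops, vels, dt=0.1, w_o=1.0, w_l=10.0, w_v=1.0):
    """Toy linear pose graph over 2D positions p_0..p_{n-1} (p_0 anchored).

    Factors (all linear, so a single weighted least-squares solve suffices):
      odometry:     p_{i+1} - p_i = odom[i]
      loop closure: p_j  -  p_i  = l        for (i, j, l) in loops
      ego-velocity: p_{i+1} - p_i = v_i*dt  for (i, v_i) in vels
    Returns the (n, 2) optimized positions.
    """
    rows, rhs, wts = [], [], []

    def add(i, j, target, w):
        r = np.zeros(n)
        r[j] = 1.0
        if i is not None:
            r[i] = -1.0
        rows.append(r)
        rhs.append(np.asarray(target, dtype=float))
        wts.append(w)

    add(None, 0, [0.0, 0.0], 1e6)                # anchor p_0 at the origin
    for i, d in enumerate(odom):
        add(i, i + 1, d, w_o)
    for i, j, l in loops:
        add(i, j, l, w_l)
    for i, v in vels:
        add(i, i + 1, np.asarray(v) * dt, w_v)

    sw = np.sqrt(np.array(wts))[:, None]
    sol, *_ = np.linalg.lstsq(np.array(rows) * sw, np.array(rhs) * sw, rcond=None)
    return sol

# Drifting odometry (1.1 m steps) pulled back by a loop closure to 4.0 m,
# with ego-velocity factors (10 m/s at dt = 0.1 s) reinforcing 1.0 m steps.
odom = [np.array([1.1, 0.0])] * 4
loops = [(0, 4, np.array([4.0, 0.0]))]
vels = [(i, np.array([10.0, 0.0])) for i in range(4)]
p = optimize_positions(5, odom, loops, vels)
```

The velocity factors act as an extra odometry-consistency term; in a full system they would come from the radar ego-velocity estimate rather than being given.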