A Drivable Area and Road Anomaly Detection Benchmark for Ground Mobile Robots

This is a drivable area and road anomaly detection benchmark for ground mobile robots, such as robotic wheelchairs and sweeping robots. The details of the benchmark are introduced in our IEEE T-CYB paper. The benchmark is built on our GMRP dataset, which contains 3896 pairs of images with ground-truth labels for drivable areas and road anomalies. The dataset can be downloaded from here, and its details are introduced in our IEEE RA-L paper.

This benchmark also provides a detailed performance comparison between 14 state-of-the-art networks and our DFM-RTFNet across 6 types of training data, including our transformed disparity images. Each single-modal network is trained with 11 setups. Specifically, we first train it on RGB, disparity, normal, elevation, HHA and our transformed disparity images separately (denoted as RGB, Disparity, Normal, Elevation, HHA and T-Disp, respectively). We then train it on the channel-wise concatenation of RGB images with each of the other 5 data types, denoted as RGB+D, RGB+N, RGB+E, RGB+H and RGB+T, respectively. Each data-fusion network is trained with the same five RGB+X setups. The corresponding quantitative performance comparisons are presented in the following figures.
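As a rough illustration of the RGB+X setups, the sketch below shows how an RGB image can be concatenated channel-wise with a single-channel modality (e.g., the transformed disparity image). This is a minimal example, not the benchmark's actual preprocessing code; the function name `make_rgb_plus_x`, the tensor shapes, and the use of PyTorch are assumptions for illustration only.

```python
# Minimal sketch (not the benchmark's code) of forming an RGB+X input by
# channel-wise concatenation, assuming the extra modality (e.g., T-Disp)
# is stored as a single-channel image of the same spatial size as the RGB image.
import torch

def make_rgb_plus_x(rgb, x):
    """Concatenate an RGB tensor (B, 3, H, W) with a single-channel
    modality tensor (B, 1, H, W) along the channel dimension."""
    assert rgb.shape[2:] == x.shape[2:], "spatial sizes must match"
    return torch.cat([rgb, x], dim=1)  # -> (B, 4, H, W)

# Example with dummy data (480x640 resolution is an arbitrary choice here).
rgb = torch.rand(1, 3, 480, 640)     # placeholder RGB image
t_disp = torch.rand(1, 1, 480, 640)  # placeholder transformed disparity map
rgb_t = make_rgb_plus_x(rgb, t_disp)
print(rgb_t.shape)  # torch.Size([1, 4, 480, 640])
```

With such a 4-channel input, the first convolutional layer of a single-modal network would need to accept 4 input channels instead of 3; data-fusion networks instead take the RGB image and the extra modality as two separate inputs.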

The performance comparison among 10 single-modal networks (FCN, SegNet, U-Net, PSPNet, DeepLabv3+, DenseASPP, UPerNet, DUpsampling, ESPNet and GSCNN) with 11 training data setups on our GMRP dataset, where the best result is highlighted in each subfigure.

The performance comparison among 4 data-fusion networks (FuseNet, MFNet, depth-aware CNN and RTFNet) and our DFM-RTFNet with 5 training data setups on our GMRP dataset, where D-A CNN is short for depth-aware CNN, and the best result is highlighted in each subfigure.