A Multimodal-Heterogeneous Dataset for Ground and Aerial Cooperative Localization and Mapping
The dataset provides LiDAR, stereo images, IMU, and GPS data from both ground and aerial perspectives, collected on the Guangzhou campus of Sun Yat-sen University.
All sensors were carefully calibrated and triggered by a self-developed synchronization module, achieving millisecond-level synchronization accuracy.
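With millisecond-level synchronization, cross-sensor messages can be paired by nearest timestamp. Below is a minimal sketch (not part of the dataset tooling) of such an association, assuming sorted timestamps in seconds and a hypothetical 1 ms tolerance chosen to match the reported accuracy:

```python
# Hypothetical nearest-timestamp association between two sensor streams.
import bisect

def associate(ref_stamps, query_stamps, tol=1e-3):
    """Match each reference stamp to the nearest query stamp within tol.

    Both lists must be sorted in ascending order (seconds).
    """
    matches = []
    for t in ref_stamps:
        i = bisect.bisect_left(query_stamps, t)
        # Candidates: the two neighbours straddling t in the sorted list.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(query_stamps)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(query_stamps[k] - t))
        if abs(query_stamps[j] - t) <= tol:
            matches.append((t, query_stamps[j]))
    return matches

# Example: pair LiDAR scan stamps with the closest camera frame stamps.
lidar = [0.0000, 0.1001, 0.2003]
camera = [0.0004, 0.1000, 0.2500]
print(associate(lidar, camera))  # [(0.0, 0.0004), (0.1001, 0.1)]
```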
Centimetre-level RTK GNSS ground truth for localization is provided.
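To compare an estimated trajectory against GNSS truth, the RTK fixes are typically converted to a local metric frame. The following is a minimal sketch, assuming fixes given as (latitude, longitude, altitude) in degrees and metres on WGS-84, converted to an ENU frame anchored at the first fix; the coordinates in the example are hypothetical:

```python
# Hypothetical WGS-84 geodetic -> local ENU conversion for RTK ground truth.
import numpy as np

A, F = 6378137.0, 1 / 298.257223563  # WGS-84 semi-major axis, flattening
E2 = F * (2 - F)                     # first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    lat, lon = np.radians(lat), np.radians(lon)
    n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)  # prime-vertical radius
    x = (n + alt) * np.cos(lat) * np.cos(lon)
    y = (n + alt) * np.cos(lat) * np.sin(lon)
    z = (n * (1 - E2) + alt) * np.sin(lat)
    return np.array([x, y, z])

def ecef_to_enu(p, ref_lla):
    lat0, lon0 = np.radians(ref_lla[0]), np.radians(ref_lla[1])
    ref = geodetic_to_ecef(*ref_lla)
    # Rotation from ECEF into the local tangent (ENU) frame at the reference.
    r = np.array([
        [-np.sin(lon0),                np.cos(lon0),               0.0],
        [-np.sin(lat0) * np.cos(lon0), -np.sin(lat0) * np.sin(lon0), np.cos(lat0)],
        [ np.cos(lat0) * np.cos(lon0),  np.cos(lat0) * np.sin(lon0), np.sin(lat0)],
    ])
    return r @ (p - ref)

fixes = [(23.0701, 113.3985, 25.0), (23.0702, 113.3986, 25.1)]  # hypothetical
enu = [ecef_to_enu(geodetic_to_ecef(*f), fixes[0]) for f in fixes]
print(enu[1])  # offset of the second fix from the first, in metres
```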
Many inter-robot encounters are designed across the spatial dimensions, which facilitates ground-aerial heterogeneous Cooperative Simultaneous Localization and Mapping (C-SLAM).
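Such encounters are the moments when inter-robot loop closures become possible. As a minimal sketch, assuming time-aligned ground-truth positions for the ground robot and the UAV (N x 3 arrays in a shared ENU frame) and a hypothetical encounter radius, one could locate them like this:

```python
# Hypothetical encounter detection from time-aligned ground-truth trajectories.
import numpy as np

def find_encounters(ground_xyz, aerial_xyz, radius=15.0):
    """Return indices where the 3-D inter-robot distance drops below radius."""
    dists = np.linalg.norm(ground_xyz - aerial_xyz, axis=1)
    return np.flatnonzero(dists < radius)

# Synthetic demo data standing in for the two platforms' trajectories.
rng = np.random.default_rng(0)
ground = rng.uniform(0.0, 100.0, size=(500, 3))
aerial = ground + rng.normal(0.0, 20.0, size=(500, 3))
print(find_encounters(ground, aerial)[:10])
```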
The recorded sequences cover a variety of challenging scenarios:
- Ordinary on-campus streetscapes with many dynamic obstacles
- Areas that LiDAR cannot detect and that lack reliable visual features
- Highly repetitive scenes
- Tree canopies that provide unreliable features, and roof tiles that reflect sunlight, making lighting changes more drastic
To learn more about the dataset (acquisition platforms, sensors, etc.), click here.
Authors: Yilin Zhu, Yang Kong, Yingrui Jie, Shiyou Xu and Hui Cheng
Paper: GRACO: A Multimodal Dataset for Ground and Aerial Cooperative Localization and Mapping
BibTeX:
@article{DBLP:journals/ral/ZhuKJXC23,
  author  = {Yilin Zhu and Yang Kong and Yingrui Jie and Shiyou Xu and Hui Cheng},
  title   = {GRACO: A Multimodal Dataset for Ground and Aerial Cooperative Localization and Mapping},
  journal = {{IEEE} Robotics Autom. Lett.},
  volume  = {8},
  number  = {2},
  pages   = {966--973},
  year    = {2023}
}