Large-scale Multi-robot Localization and Mapping Dataset
G. Shimoda^1***, M. Lisondra^2***, I. Mehta, S. Saeedi^1
1 Toronto Metropolitan University 2 University of Toronto
*** Both authors contributed equally to this research
We provide a dataset for distributed heterogeneous multi-robot localization and mapping, as well as distributed pose graph optimization with communication constraints. The dataset is designed to facilitate research and development in multi-robot systems by offering a comprehensive collection of sensor data and ground-truth information. It includes data from 15+ heterogeneous ground robots equipped with sensors such as LiDAR, RGB-D and stereo cameras, AprilTags, wheel encoders, and IMUs. Ground-truth localization is provided by a system of 14 Vicon motion capture cameras, ensuring highly precise reference trajectories. Paths are predefined for each robot during the planning phase, and each experiment consists of driving the robots along these paths. The dataset contains both raw sensor data and pose graph data gathered by observing multiple AprilTags placed in the environment. The availability of this dataset will accelerate progress in the multi-robot domain and enable researchers to overcome the challenges posed by the high cost and physical space constraints of multi-robot experimentation. To our knowledge, a multi-robot dataset with this many robots has not been collected before.
Recent advancements in robotic collaboration have sparked a new wave of novelty in extending traditional robotics problems to multi-robot scenarios. However, innovative research in this space is difficult due to the high cost of assembling a fleet of precise robots to gather data. Similarly, physical space constraints on testing and deployment are a major source of delay when attempting to solve robotics problems in this sphere. We aim to provide researchers, as others have before us, with the tools to conveniently test their algorithms and thereby accelerate problem solving in the multi-robot domain. We develop a dataset for distributed pose graph optimization and heterogeneous localization and mapping for ad-hoc and collaborative applications.
We provide a dataset of 10+ heterogeneous robots with accurate ground-truth positions, LiDAR, RGB-D/stereo cameras, AprilTags, wheel encoders, and IMUs. To our knowledge, this is the first dataset to use more than 8 robots for visual-inertial odometry and SLAM, as well as the first to collect pose graph data from multiple robots.
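Pose graph data of this kind is typically consumed by a pose graph optimizer. As a minimal, hypothetical illustration (not the dataset's own tooling), the sketch below solves a tiny translation-only 2D pose graph with a single loop closure by linear least squares; the poses, measurements, and the choice to omit rotations are all simplifying assumptions for brevity.

```python
import numpy as np

# Minimal translation-only 2D pose graph (rotations omitted for brevity).
# Each edge (i, j, z) encodes the measurement x_j - x_i ≈ z.
edges = [
    (0, 1, np.array([1.0, 0.0])),   # odometry
    (1, 2, np.array([0.0, 1.1])),   # odometry (slightly noisy in y)
    (2, 3, np.array([-1.0, 0.0])),  # odometry
    (3, 0, np.array([0.0, -1.0])),  # loop closure back to the start
]
n = 4  # number of poses, each 2-D

# Build the linear least-squares system A x = b, anchoring x0 at the origin.
A = np.zeros((2 * len(edges) + 2, 2 * n))
b = np.zeros(2 * len(edges) + 2)
for k, (i, j, z) in enumerate(edges):
    A[2 * k:2 * k + 2, 2 * j:2 * j + 2] = np.eye(2)
    A[2 * k:2 * k + 2, 2 * i:2 * i + 2] = -np.eye(2)
    b[2 * k:2 * k + 2] = z
A[-2:, 0:2] = np.eye(2)  # prior: x0 = (0, 0)

x, *_ = np.linalg.lstsq(A, b, rcond=None)
poses = x.reshape(n, 2)
print(poses)  # the 0.1 m loop inconsistency is spread evenly over the edges
```

Because the problem is translation-only, it is linear and one least-squares solve distributes the loop-closure inconsistency over all edges; real pose graphs with rotations require iterative solvers such as Gauss-Newton.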
We use the TurtleBot3 Burger and Waffle models and modify them to record visual-inertial data and capture ground-truth trajectories. The main additions are an Intel RealSense D435i camera, which captures RGB-D and stereo images at 30 fps and IMU data at 400 Hz, and a Jetson Nano, which powers the D435i and stores its data in a rosbag. We attach Vicon markers in a unique arrangement on top of each robot so the Vicon cameras can accurately track the ground-truth trajectory of each robot separately.
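With Vicon ground truth alongside the estimated trajectories, a common use of such a dataset is computing absolute trajectory error (ATE). A minimal sketch with translation-only alignment and entirely synthetic, hypothetical trajectories (real evaluation would load the recorded rosbags and use a full SE(3) alignment):

```python
import numpy as np

# Hypothetical trajectories: a ground-truth circle (as from Vicon) and an
# estimate with a constant offset plus slow linear drift.
t = np.linspace(0, 2 * np.pi, 100)
gt = np.stack([np.cos(t), np.sin(t)], axis=1)
est = gt + np.array([0.5, -0.3]) + 0.01 * np.stack([t, t], axis=1)

# Translation-only alignment: remove the mean offset, then compute RMSE (ATE).
aligned = est - est.mean(axis=0) + gt.mean(axis=0)
ate_rmse = np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
print(f"ATE RMSE: {ate_rmse:.3f} m")
```

The constant offset is removed entirely by the alignment, so only the drift contributes to the reported error.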
In these experiments, we use TurtleBot3 Burger and Waffle models. This image shows the front view of a Burger model fitted with a RealSense D435i camera, a WiFi antenna, an AprilTag bundle, and Vicon markers.
Rear view of a Burger robot, showing the Jetson Nano board used to power the RealSense camera.
An image of a few of the robots used in this experiment.
An image of the Vicon motion capture environment, showing 14 Vicon aero cameras on the ceiling recording a 6 m x 6 m grid environment.
An example trajectory where 3 robots move in concentric circles.
An example trajectory where 4 robots move in squares.
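Scripted trajectories like the concentric circles and squares above can be described as waypoint lists. A hypothetical sketch of how such waypoints might be generated (not the scripts actually used for the dataset; radii and point counts are illustrative):

```python
import math

def circle_waypoints(radius, n=36):
    """Waypoints (x, y) evenly spaced on a circle of the given radius."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

def square_waypoints(side, points_per_edge=10):
    """Waypoints tracing a square of the given side length, centred at the origin."""
    h = side / 2.0
    corners = [(-h, -h), (h, -h), (h, h), (-h, h)]
    pts = []
    for (x0, y0), (x1, y1) in zip(corners, corners[1:] + corners[:1]):
        for k in range(points_per_edge):
            s = k / points_per_edge
            pts.append((x0 + s * (x1 - x0), y0 + s * (y1 - y0)))
    return pts

# Three robots on concentric circles, sized to fit the 6 m x 6 m capture volume.
paths = [circle_waypoints(r) for r in (1.0, 1.8, 2.6)]
```

Keeping the largest radius under 3 m ensures every robot stays inside the motion capture grid.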
If you have any questions, feel free to reach out to us at gshimoda@torontomu.ca
@mastersthesis{shimoda2024,
title = {Large-scale Multi-robot Localization and Mapping Dataset},
author = {Glenn Takashi Shimoda},
year = 2024,
address = {Toronto, Ontario, Canada},
school = {Toronto Metropolitan University},
type = {Master's thesis}
}