Zhiming Chen*, Haozhe Fang*, Jiapeng Chen, Michael Yu Wang, Hongyu Yu
Detecting the moving events produced by moving objects is a crucial task for autonomous driving and mobile robots. Moving objects can leave ghost artifacts in mapped environments and pose risks to autonomous navigation. LiDAR is a vital sensor for autonomous systems because it provides dense and precise range measurements. However, existing LiDAR datasets rarely discuss the motion labeling of moving objects and contain only a limited number of moving entities within a single scene. Furthermore, methodologies for Moving Event Detection (MED) on LiDAR sensors have not been comprehensively explored or evaluated. To address these gaps, this study constructs a diverse LiDAR moving event dataset encompassing multiple scenes with a high density of moving objects. We thoroughly review current MED techniques and establish a performance benchmark by evaluating these methods on our dataset. Additionally, part of the dataset sequences are used to host an online MED competition, aimed at fostering collaboration within the research community and advancing related studies.
We collect 10 sequences from both high-fidelity simulation and the real world.
The dataset is hosted in this GitHub repository. For more details about the dataset and its download, please visit the releases of this repo. You can also visit here to download the datasets.
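For a quick start, below is a minimal Python sketch for loading one scan and its per-point motion labels. It assumes a SemanticKITTI-style layout (`velodyne/*.bin` float32 point clouds, `labels/*.label` uint32 labels, moving classes above ID 250); this layout is an assumption for illustration, so please check the release notes for the exact format.

```python
import numpy as np

def load_scan(bin_path):
    """Load one LiDAR scan stored as a flat float32 array of (x, y, z, intensity)."""
    scan = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    return scan[:, :3]  # keep only the xyz coordinates

def load_labels(label_path):
    """Load per-point labels; the lower 16 bits hold the semantic/motion class."""
    labels = np.fromfile(label_path, dtype=np.uint32)
    return labels & 0xFFFF

# Hypothetical paths following the assumed SemanticKITTI-style layout.
points = load_scan("sequences/00/velodyne/000000.bin")
labels = load_labels("sequences/00/labels/000000.label")
moving_mask = labels > 250  # SemanticKITTI convention: moving classes use IDs > 250
print(f"{moving_mask.sum()} of {len(points)} points labeled as moving")
```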
We test 8 state-of-the-art algorithms on sequences {00, 01, 02} to establish a performance benchmark (a scoring sketch follows the list below):
- Removert, ERASOR, and OctoMap are offline non-learning methods;
- Dynablox, DOD, and M-Detector are online non-learning methods;
- MotionBEV and InsMOS represent the two branches of learning-based methods.
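As a concrete reference for how such a benchmark is typically scored, the snippet below computes the point-wise IoU of the moving class, the standard metric in moving object segmentation evaluations. Whether our benchmark uses exactly this metric is an assumption here; `moving_iou` is an illustrative sketch, not our official evaluation code.

```python
import numpy as np

def moving_iou(pred_mask, gt_mask):
    """Point-wise IoU of the moving class: TP / (TP + FP + FN)."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    fp = np.logical_and(pred_mask, ~gt_mask).sum()
    fn = np.logical_and(~pred_mask, gt_mask).sum()
    return tp / max(tp + fp + fn, 1)  # guard against empty masks

# Toy example: 3 of 4 moving points detected, with 1 false alarm.
pred = np.array([True, True, True, False, True,  False])
gt   = np.array([True, True, True, True,  False, False])
print(f"IoU(moving) = {moving_iou(pred, gt):.3f}")  # 3 / (3 + 1 + 1) = 0.600
```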
We use sequences {05, 06, 07, 08, 09} to host a Moving Event Detection competition on CodaLab. Please click Moving Event Detection Competition on MOE LiDAR Dataset to view the competition page.
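To give a sense of how a submission might be assembled, here is a sketch that zips per-scan prediction files for the test sequences. The directory layout (`sequences/<seq>/predictions/*.label`) and the helper `pack_submission` are hypothetical; please follow the exact format specified on the competition page.

```python
import os
import zipfile

def pack_submission(pred_root, out_zip="submission.zip",
                    seqs=("05", "06", "07", "08", "09")):
    """Zip per-scan prediction files under a hypothetical predictions/ layout."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for seq in seqs:
            pred_dir = os.path.join(pred_root, "sequences", seq, "predictions")
            for fname in sorted(os.listdir(pred_dir)):
                path = os.path.join(pred_dir, fname)
                # Store paths relative to pred_root so the archive root is clean.
                zf.write(path, arcname=os.path.relpath(path, pred_root))

pack_submission("results")
```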