AquaticVision: Benchmarking Visual SLAM in Underwater Environment with Events and Frames
This is the first underwater dataset that contains events, frames, and IMU data, and it also provides ground-truth files for evaluating the performance of underwater visual SLAM or odometry. We hope this dataset enables researchers to validate the localization accuracy and performance of their algorithms in underwater environments.
Intrinsic and extrinsic parameters were acquired using Kalibr.
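For reference, a Kalibr camera-IMU calibration typically produces a camchain YAML in the following shape. This is an illustrative sketch only; the field names follow Kalibr's output format, but all numeric values and topic names below are placeholders, not the actual calibration of this dataset.

```yaml
# Illustrative Kalibr camchain-imucam output (placeholder values, not the dataset's calibration)
cam0:
  camera_model: pinhole
  intrinsics: [600.0, 600.0, 320.0, 240.0]   # fx, fy, cx, cy (placeholders)
  distortion_model: radtan
  distortion_coeffs: [-0.1, 0.05, 0.0, 0.0]  # k1, k2, p1, p2 (placeholders)
  resolution: [640, 480]
  rostopic: /cam0/image_raw                   # placeholder topic name
  T_cam_imu:                                  # 4x4 camera-from-IMU extrinsic (placeholder: identity)
  - [1.0, 0.0, 0.0, 0.0]
  - [0.0, 1.0, 0.0, 0.0]
  - [0.0, 0.0, 1.0, 0.0]
  - [0.0, 0.0, 0.0, 1.0]
  timeshift_cam_imu: 0.0                      # camera-IMU time offset in seconds
```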
Easy sequences:
Clear water, evenly distributed corals, trajectory covering the underwater area
Clear water, centrally distributed corals, cross-pattern trajectory
Hard sequences:
Event cameras have unique advantages for the visual representation of complex underwater scenes. The Time Surface (TS) is a commonly used event representation, which we use to represent the event data in our dataset. The following videos show the TS generated from different data sequences.
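A Time Surface stores, per pixel, an exponentially decayed function of the time since the last event at that pixel, so recently active pixels appear bright and stale pixels fade toward zero. The sketch below is a minimal illustration of this idea, not the dataset's own tooling; the event tuple layout `(t, x, y, polarity)` and the decay constant `tau` are assumptions.

```python
import numpy as np

def time_surface(events, t_now, height, width, tau=0.03):
    """Build an exponentially decayed Time Surface at time t_now.

    events: iterable of (t, x, y, polarity) tuples, sorted by timestamp t
    (seconds). tau is the decay constant in seconds (illustrative value).
    Returns an (height, width) float array in [0, 1].
    """
    # Timestamp of the most recent event at each pixel; -inf means "no event yet"
    last_ts = np.full((height, width), -np.inf)
    for t, x, y, p in events:
        if t > t_now:
            break  # events are sorted, so nothing later is relevant
        last_ts[y, x] = t
    # Exponential decay: a just-fired pixel -> ~1, an old pixel -> ~0,
    # a never-fired pixel -> exp(-inf) = 0
    return np.exp((last_ts - t_now) / tau)
```

Accumulating over a short window and rendering the array as a grayscale image reproduces the kind of TS visualization shown in the videos above.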
We believe that the complementary strengths of event cameras and traditional frame cameras offer a meaningful pathway toward more robust underwater visual SLAM systems.
If you have any questions, please feel free to contact us:
Yifan Peng, effunwayj@njust.edu.cn
Yuze Hong, 224045004@link.cuhk.edu.cn
Prof. Ziyang Hong, hongziyang@cuhk.edu.cn
Prof. Junfeng Wu, junfengwu@cuhk.edu.cn
Yifan Peng*, Yuze Hong*, Ziyang Hong, Apple Pui-Yi Chui, Junfeng Wu. AquaticVision: Benchmarking Visual SLAM in Underwater Environment with Events and Frames. Accepted at the Workshop on Field Robotics, 2025 IEEE International Conference on Robotics and Automation (ICRA). https://arxiv.org/abs/2505.03448
Our Lab: The Laboratory for Intelligent Autonomous Systems (LIAS), School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-SZ): https://lias-cuhksz.github.io/