DDD20: end-to-end DAVIS driving dataset
DAVIS Driving Dataset 2020 (DDD20)
March 2020
Updates
Jan 2021: posted the entire ITSC dataset on Google Drive; see the Downloads section below
DDD20 was developed by the Sensors Group of the Inst. of Neuroinformatics, Univ. of Zurich and ETH Zurich.
Information about other datasets and code is on the Sensors Group webpage.
Yuhuang Hu, Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck
DDD20 is an expanded release (with more than four times as much data as the original DDD17) of the first public end-to-end automotive driving training dataset recorded with a neuromorphic, bioinspired silicon retina event camera: the DAVIS event+frame camera developed in the Sensors Group of the Inst. of Neuroinformatics, UZH-ETH Zurich.
DDD20 includes vehicle data such as steering, throttle, and braking. It can be used to evaluate the fusion of frame and event data for automobile driving assistance applications.
See more Inst. of Neuroinformatics Sensors Group datasets here.
Citing DDD20
Hu, Y., Binas, J., Neil, D., Liu, S.-C., and Delbruck, T. (2020). "DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction". Special session Beyond Traditional Sensing for Intelligent Transportation, The 23rd IEEE International Conference on Intelligent Transportation Systems, September 20-23, 2020, Rhodes, Greece. Available at: http://arxiv.org/abs/2005.08605 (arXiv:2005.08605 [cs.CV])
Result from paper
Explained variance (EVA) of steering prediction from DVS+APS is better than from either DVS or APS alone.
In contrast to previous work, APS frames by themselves provide better prediction than DVS events.
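As a point of reference, explained variance can be computed as EVA = 1 - Var(residual) / Var(ground truth). The sketch below illustrates the metric on toy data; it is not taken from the paper's code.

```python
import numpy as np

def explained_variance(y_true, y_pred):
    """EVA = 1 - Var(y_true - y_pred) / Var(y_true).
    1.0 means perfect prediction; 0.0 means no better than
    always predicting the mean of the ground truth."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 1.0 - np.var(y_true - y_pred) / np.var(y_true)

# Toy steering-angle trace (degrees) and two baseline "predictions"
angles = np.array([0.0, 5.0, 10.0, 5.0, 0.0])
perfect = angles.copy()
mean_only = np.full_like(angles, angles.mean())

print(explained_variance(angles, perfect))    # 1.0
print(explained_variance(angles, mean_only))  # 0.0
```

Unlike mean-squared error, EVA is scale-free, which makes it convenient for comparing steering predictors across recordings with very different steering dynamics.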
See Yuhuang's ITSC 2020 talk about DDD20
DDD20 contents
51h of DAVIS+car data
4000 km of driving
216 recordings (175 from a Ford Focus in the USA; 40 from a Ford Mondeo in Europe)
see DDD20 README for download and software instructions
Download a zip archive of the maps, or click the link in the DDD20 Dataset files spreadsheet
Heat map of DDD20 USA recording locations (direct Google Maps HTML link; this street-level density map of all GPS-recorded routes in DDD20 can be zoomed for high resolution)
DDD20 samples
(click for YouTube version)
Samples of DAVIS and driver control data from DDD20
Top left: APS + steering/throttle/etc. Top right: DVS output. Bottom: History of steering/throttle. Output captured from ddd20-utils view.py
Other citations
The earlier, smaller DDD17 was published as
Binas, J., Neil, D., Liu, S.-C., and Delbruck, T. (2017). "DDD17: End-To-End DAVIS Driving Dataset". In ICML'17 Workshop on Machine Learning for Autonomous Vehicles (MLAV 2017), Sydney, Australia. Available at: http://arxiv.org/abs/1711.01458 (arXiv:1711.01458 [cs])
The sensor used for DDD20 is the DAVIS, described in the original paper below (about a previous-generation sensor IC).
Berner, Raphael, Christian Brandli, Minhao Yang, Shih-Chii Liu, and Tobi Delbruck. 2014. "A 240x180 10 mW 12 µs Latency Sparse-Output Vision Sensor for Mobile Applications". IEEE J. Solid State Circuits 49 (10): pp. 2333-2341. doi:10.1109/JSSC.2014.2342715. Get PDF here.
The DAVIS346 used for DDD17 and DDD20 is published in
Taverni, G., Moeys, D. P., Li, C., Cavaco, C., Motsnyi, V., Bello, D. S. S., et al. (2018). "Front and Back Illuminated Dynamic and Active Pixel Vision Sensors Comparison". IEEE Transactions on Circuits and Systems II: Express Briefs 65, pp. 677-681. Available at: https://ieeexplore.ieee.org/document/8334288/
The DAVIS is based on a seminal neuromorphic event camera Dynamic Vision Sensor (DVS) paper
Lichtsteiner, Patrick, Christoph Posch, and Tobi Delbruck. 2008. "A 128x128 120 dB 15 µs Latency Asynchronous Temporal Contrast Vision Sensor". IEEE Journal of Solid-State Circuits 43 (2): pp. 566-576. doi:10.1109/JSSC.2007.914337. Get PDF here.
DDD20 Code (ddd20-utils)
See https://github.com/SensorsINI/ddd20-utils for utilities to work with DDD20 data and to collect new data from a Ford OpenXC-equipped vehicle.
DDD20 Dataset files and Download instructions
Download the DDD20 dataset (includes DDD17) via Resilio Sync. See the DDD20 README for detailed download and software instructions.
For convenience, one sample recording from DDD20, 800 s of Los Angeles street driving (aug04/rec1501902136.hdf5), is available on Google Drive to try. (Warning: 2 GB 7z-compressed, 5.4 GB uncompressed.) See the spreadsheet below for details.
In Jan 2022, we also put the entire set of recordings used for the ITSC paper into this Google Drive folder.
The "DDD20 Ford Focus and Mondeo Davis Driving dataset file descriptions" spreadsheet (below) includes links to maps. See the "ITSC paper files" sheet for paper dataset files and maps.
See the DDD20 README for detailed download and software instructions. See also this description of the data fields of the HDF5 recordings, contributed by Dalia Hareb.
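Since the recordings are standard HDF5 files, they can be inspected with h5py before reaching for ddd20-utils. The sketch below lists every dataset in a file; the field name `steering_wheel_angle` in the demo is illustrative only — consult the data-field description above for the actual recording layout.

```python
import h5py
import numpy as np

def list_datasets(path):
    """Recursively list every dataset in an HDF5 file
    as (name, shape, dtype) tuples."""
    entries = []
    def visit(name, obj):
        if isinstance(obj, h5py.Dataset):
            entries.append((name, obj.shape, str(obj.dtype)))
    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return entries

# Demo with a tiny synthetic file standing in for a DDD20 recording;
# the field name here is hypothetical, not the dataset's actual schema.
with h5py.File("demo_rec.hdf5", "w") as f:
    f.create_dataset("steering_wheel_angle", data=np.zeros((100, 2)))

for name, shape, dtype in list_datasets("demo_rec.hdf5"):
    print(name, shape, dtype)
```

The same `list_datasets` call on a real recording such as aug04/rec1501902136.hdf5 shows which signals it carries and their sampling dimensions.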
Statistics of recordings
Some data provided and visualized by DDD20
Examples of steering wheel angle prediction, comparing DVS+APS, APS, and DVS with ground truth
Go directly to folder of videos of steering wheel angle predictions on all DDD20 paper dataset files
Collecting DDD20
Flat tire in New Mexico
Lizardhead pass between Telluride and Cortez
Melted camera case
The setup inside Ford Focus
Nighttime python debugging
Acknowledgments
We thank D Rettig, A Stockklauser, G Detorakis, G Burman, and DD Delbruck for co-piloting; and J Anumula for help with data analysis; www.inilabs.com provided device support. We are grateful to the 2017 Telluride Neuromorphic Engineering Workshop for providing the opportunity to collect this dataset.
This work was funded by Samsung via the Neuromorphic Processor Project (NPP), the Swiss National Competence Center in Robotics (NCCR Robotics), and the EU projects SEEBETTER and VISUALISE.