DDD20: end-to-end DAVIS driving dataset
DAVIS Driving Dataset 2020 (DDD20)
March 2020
Updates
Nov 2024: Moved dataset to the ETH Research Collection and reorganized DDD17 and DDD20 into one share
Jan 2021: posted entire ITSC dataset on gdrive; see Downloads section below
DDD20 was developed by the Sensors Group of the Inst. of Neuroinformatics, Univ. of Zurich and ETH Zurich.
Information about other datasets and code are on the Sensors Group webpage.
Yuhuang Hu, Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck
DDD20 is an expanded release of the first public end-to-end training dataset for automotive driving recorded with a neuromorphic, bioinspired silicon-retina event camera: the DAVIS event+frame camera developed in the Sensors Group of the Inst. of Neuroinformatics, UZH-ETH Zurich. It contains more than 4 times as much data as the original DDD17.
DDD20 includes vehicle control data such as steering, throttle, and braking. It can be used to evaluate the fusion of frame and event data for automobile driving assistance applications.
See more Inst. of Neuroinformatics Sensors Group datasets here.
Citing DDD20
Y. Hu, J. Binas, D. Neil, S.-C. Liu, and T. Delbruck, “DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction,” in 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), ieeexplore.ieee.org, Sep. 2020, pp. 1–6. doi: 10.1109/ITSC45102.2020.9294515. Available: http://dx.doi.org/10.1109/ITSC45102.2020.9294515
Also at https://arxiv.org/pdf/2005.08605
Main result from paper
Explained variance (EVA) of steering prediction from DVS+APS is better than from either DVS or APS alone.
In contrast to previous work, APS frames alone predict steering better than DVS events alone, but adding DVS events further improves the prediction.
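The explained-variance metric can be computed directly from ground-truth and predicted steering angles. A minimal sketch (variable names are illustrative, not taken from the paper's code):

```python
def explained_variance(y_true, y_pred):
    """EVA = 1 - Var(y_true - y_pred) / Var(y_true).

    1.0 means perfect prediction; 0.0 means the predictor is no better
    than a constant at the mean of y_true.
    """
    residuals = [t - p for t, p in zip(y_true, y_pred)]

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return 1.0 - var(residuals) / var(y_true)

# Toy steering-angle traces (degrees); a perfect prediction gives EVA = 1.0
truth = [0.0, 5.0, 10.0, 5.0, 0.0]
print(explained_variance(truth, truth))                      # 1.0
print(explained_variance(truth, [2.0, 4.0, 9.0, 6.0, 1.0]))  # close to 1
```

Unlike mean-squared error, EVA is scale-free, which makes it comparable across recordings with very different steering dynamics (e.g. freeway vs. mountain roads).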
Related citations
The earlier, smaller DDD17 was published as
Binas, J., Neil, D., Liu, S.-C., and Delbruck, T. (2017). DDD17: End-To-End DAVIS Driving Dataset. In ICML'17 Workshop on Machine Learning for Autonomous Vehicles (MLAV 2017), Sydney, Australia. Available at: arXiv:1711.01458 [cs], http://arxiv.org/abs/1711.01458
The sensor used for DDD20 is the DAVIS based on the original paper below (about a previous generation sensor IC)
Berner, R., Brandli, C., Yang, M., Liu, S.-C., and Delbruck, T. (2014). "A 240x180 10mW 12us Latency Sparse-Output Vision Sensor for Mobile Applications." IEEE J. Solid-State Circuits 49(10), 2333–2341. doi: 10.1109/JSSC.2014.2342715.
The DAVIS346 used for DDD17 and DDD20 is published in
Taverni, G., Moeys, D. P., Li, C., Cavaco, C., Motsnyi, V., Bello, D. S. S., et al. (2018). Front and Back Illuminated Dynamic and Active Pixel Vision Sensors Comparison. IEEE Transactions on Circuits and Systems II: Express Briefs 65, 677–681. Available at: https://ieeexplore.ieee.org/document/8334288/.
The DAVIS is based on a seminal neuromorphic event camera Dynamic Vision Sensor (DVS) paper
Lichtsteiner, P., Posch, C., and Delbruck, T. (2008). "A 128×128 120 dB 15 µs Latency Asynchronous Temporal Contrast Vision Sensor." IEEE Journal of Solid-State Circuits 43(2), 566–576. doi: 10.1109/JSSC.2007.914337.
DDD20 Download instructions
Download DDD20 using this link (ETH Research Collection) and follow the "Access the files" link. See the DDD20 README below for detailed dataset instructions.
For your convenience, in Jan 2022 we also put the entire set of recordings used for the DDD20 ITSC paper into this Google Drive.
DDD20 contents
51h of DAVIS+car data
4000 km of driving
216 recordings (175 in a Ford Focus, USA; 40 in a Ford Mondeo, Europe)
See DDD20 README for download and software instructions
Download a zip archive of maps, click a link in the DDD20 Dataset files spreadsheet, or view an example map of driving down the Pasadena freeway.
Heat map of DDD20 USA recording locations (direct Google Maps HTML link; can be zoomed for a high-resolution, street-level density map of all GPS-recorded routes in DDD20)
DDD20 samples
(click for YouTube version)
Samples of DAVIS and driver control data from DDD20
Top left: APS + steering/throttle/etc. Top right: DVS output. Bottom: History of steering/throttle. Output captured from ddd20-utils view.py
DDD20 Dataset files
The "DDD20 Ford Focus and Mondeo Davis Driving dataset file descriptions" spreadsheet link (below) includes links to maps. See the "ITSC paper files" sheet for paper dataset files and maps.
DDD17+ Ford Focus and Mondeo Davis Driving dataset file descriptions
DDD20 Code (ddd20-utils)
See https://github.com/SensorsINI/ddd20-utils for utilities to work with DDD20 data and to collect new data from a Ford OpenXC-equipped vehicle.
DDD20 README
See the DDD20 README for more detailed instructions. See also this description of the data fields in the HDF5 recordings, contributed by Dalia Hareb.
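The HDF5 recordings can be explored with standard tools such as h5py before reaching for ddd20-utils. The sketch below builds a tiny synthetic file and walks its tree; the group and field names used here (e.g. steering_wheel_angle) are illustrative assumptions only, and the authoritative field list is in the README and the data-field description above.

```python
import h5py
import numpy as np

# Build a tiny synthetic file mimicking an *assumed* DDD20-style layout
# (real group/field names are documented in the DDD20 README; these are
# illustrative only).
with h5py.File("toy_recording.hdf5", "w") as f:
    f.create_dataset("steering_wheel_angle/data", data=np.array([0.0, 2.5, 5.0]))
    f.create_dataset("steering_wheel_angle/timestamp", data=np.array([0, 10, 20]))

def summarize(path):
    """Walk any HDF5 file and collect (name, shape-or-'group') pairs."""
    entries = []
    with h5py.File(path, "r") as f:
        f.visititems(lambda name, obj: entries.append(
            (name, obj.shape) if isinstance(obj, h5py.Dataset) else (name, "group")))
    return entries

for name, shape in summarize("toy_recording.hdf5"):
    print(name, shape)
```

The same summarize() call works on a downloaded DDD20 recording, which is a quick way to see which vehicle channels a given file actually contains.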
Some data provided and visualized by DDD20
Examples of steering wheel angle prediction, comparing DVS+APS, APS, and DVS with ground truth
Go directly to folder of videos of steering wheel angle predictions on all DDD20 paper dataset files
Collecting DDD20
Flat tire in New Mexico
Lizardhead pass between Telluride and Cortez
Melted camera case
The setup inside Ford Focus
Nighttime python debugging
Acknowledgments
We thank D. Rettig, A. Stockklauser, G. Detorakis, G. Burman, and D. D. Delbruck for co-piloting and J. Anumula for help with data analysis; iniLabs (www.inilabs.com) provided device support. We are grateful to the 2017 Telluride Neuromorphic Engineering Workshop for providing the opportunity to collect this dataset.
This work was funded by Samsung via the Neuromorphic Processor Project (NPP), the Swiss National Competence Center in Robotics (NCCR Robotics), and the EU projects SEEBETTER and VISUALISE.