DDD20: end-to-end DAVIS driving dataset

DAVIS Driving Dataset 2020 (DDD20) 

March 2020

Updates

DDD20 was developed by the Sensors Group of the Inst. of Neuroinformatics, Univ. of Zurich and ETH Zurich.

Information about other datasets and code is available on the Sensors Group webpage.

Yuhuang Hu, Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck

DDD20 is an expanded release, with more than four times as much data as the original DDD17, of the first public end-to-end automotive driving training dataset recorded with a neuromorphic, bio-inspired silicon-retina event camera: the DAVIS event+frame camera developed in the Sensors Group of the Inst. of Neuroinformatics, UZH-ETH Zurich.

DDD20 includes vehicle data such as steering, throttle, and braking. It can be used to evaluate the fusion of frame and event data for automotive driving assistance applications.
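
As an illustration of such frame+event fusion, here is a minimal sketch (not the paper's pipeline; the accumulation window, array shapes, and variable names are assumptions) that accumulates DVS events into a frame and stacks it with the APS frame as channels of a network input:

    import numpy as np

    H, W = 260, 346  # DAVIS346 pixel array is 346 x 260

    def events_to_frame(events, height=H, width=W):
        # Sum signed polarities of (x, y, polarity) events into a 2D frame.
        frame = np.zeros((height, width), dtype=np.float32)
        for x, y, p in events:
            frame[y, x] += p
        return frame

    # Hypothetical inputs: one APS grayscale frame and the DVS events
    # recorded during the same time window.
    aps_frame = np.random.rand(H, W).astype(np.float32)
    events = [(10, 20, +1), (10, 20, +1), (200, 100, -1)]

    dvs_frame = events_to_frame(events)
    fused = np.stack([aps_frame, dvs_frame], axis=0)  # (2, H, W) CNN input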


Citing DDD20

Hu, Y., Binas, J., Neil, D., Liu, S.-C., and Delbruck, T. (2020). "DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction." Special session on Beyond Traditional Sensing for Intelligent Transportation, 23rd IEEE International Conference on Intelligent Transportation Systems (ITSC), September 20–23, 2020, Rhodes, Greece. arXiv: http://arxiv.org/abs/2005.08605

Results from the paper

Steering prediction from fused DVS+APS input achieves higher explained variance (EVA) than prediction from either DVS or APS alone.

In contrast to previous work, APS frames by themselves provide better predictions than DVS frames.
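
For reference, explained variance is EVA = 1 - Var(y_true - y_pred) / Var(y_true); a minimal NumPy sketch (variable names and values are illustrative):

    import numpy as np

    def explained_variance(y_true, y_pred):
        # EVA = 1 - Var(prediction error) / Var(ground truth).
        # 1.0 means perfect prediction; 0.0 is no better than
        # always predicting the mean steering angle.
        return 1.0 - np.var(y_true - y_pred) / np.var(y_true)

    # Illustrative steering angles in degrees.
    y_true = np.array([0.0, 5.0, -3.0, 10.0])
    y_pred = np.array([0.5, 4.0, -2.5, 9.0])
    print(explained_variance(y_true, y_pred))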

See Yuhuang Hu's ITSC 2020 talk about DDD20.

DDD20 contents

DDD20 samples


Samples of DAVIS and driver control data from DDD20

Top left: APS frames with steering/throttle and other vehicle data overlaid. Top right: DVS output. Bottom: history of steering and throttle. Output captured from ddd20-utils view.py.

Other citations

The earlier, smaller DDD17 was published as: Binas, J., Neil, D., Liu, S.-C., and Delbruck, T. (2017). "DDD17: End-to-End DAVIS Driving Dataset." ICML 2017 Workshop on Machine Learning for Autonomous Vehicles. http://arxiv.org/abs/1711.01458

The sensor used for DDD20 is the DAVIS, based on the original paper below (describing a previous-generation sensor IC): Brandli, C., Berner, R., Yang, M., Liu, S.-C., and Delbruck, T. (2014). "A 240×180 130 dB 3 µs Latency Global Shutter Spatiotemporal Vision Sensor." IEEE Journal of Solid-State Circuits, 49(10), 2333–2341.

The DAVIS346 used for DDD17 and DDD20 is published in: Taverni, G., Moeys, D. P., Li, C., Cavaco, C., Motsnyi, V., San Segundo Bello, D., and Delbruck, T. (2018). "Front and Back Illuminated Dynamic and Active Pixel Vision Sensors Comparison." IEEE Transactions on Circuits and Systems II: Express Briefs, 65(5), 677–681.

The DAVIS is based on the seminal neuromorphic event camera Dynamic Vision Sensor (DVS) paper: Lichtsteiner, P., Posch, C., and Delbruck, T. (2008). "A 128×128 120 dB 15 µs Latency Asynchronous Temporal Contrast Vision Sensor." IEEE Journal of Solid-State Circuits, 43(2), 566–576.

DDD20 Code (ddd20-utils)

See https://github.com/SensorsINI/ddd20-utils for utilities to work with DDD20 data and to collect new data from a Ford vehicle via the OpenXC interface.

DDD20 Dataset files and Download instructions

DDD17+ Ford Focus and Mondeo DAVIS driving dataset file descriptions

See the DDD17+ README below for detailed download and software instructions. See also this description of the HDF5 recording data fields, contributed by Dalia Hareb; a minimal reading sketch follows the links below.

DDD17+ (DVS Driving Dataset 2017 Plus) README

Statistics of recordings
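
As a starting point for working with the recordings, here is a minimal h5py sketch (the filename is a placeholder, and the per-signal group layout with parallel 'timestamp'/'data' arrays is an assumption; consult the data field description above for the actual names):

    import h5py

    # Placeholder filename for one downloaded DDD20 recording.
    with h5py.File('recording.hdf5', 'r') as f:
        # List everything the file actually contains before
        # assuming any field names.
        f.visit(print)

        # Assumed layout: one group per OpenXC signal, each holding
        # parallel 'timestamp' and 'data' arrays.
        steering = f['steering_wheel_angle']
        t = steering['timestamp'][:]
        angle = steering['data'][:]
        print(len(angle), 'steering samples, range',
              angle.min(), 'to', angle.max())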

Some data provided and visualized by DDD20

Examples of steering wheel angle prediction, comparing DVS+APS, APS, and DVS with ground truth

Go directly to the folder of videos of steering wheel angle predictions for all DDD20 paper dataset files

J. Binas's presentation at ICML 2017 on the original DDD17 dataset

ICML17 / DDD17

T. Delbruck's DDD20 summary slides

DDD20 DAVIS driving dataset

Collecting DDD20

Photos from data collection: flat tire in New Mexico; Lizard Head Pass between Telluride and Cortez; melted camera case; the recording setup inside the Ford Focus; nighttime Python debugging.

Acknowledgments

We thank D. Rettig, A. Stockklauser, G. Detorakis, G. Burman, and D. D. Delbruck for co-piloting, and J. Anumula for help with data analysis; www.inilabs.com provided device support. We are grateful to the 2017 Telluride Neuromorphic Engineering Workshop for providing the opportunity to collect this dataset.

This work was funded by Samsung via the Neuromorphic Processor Project (NPP), the Swiss National Competence Center in Robotics (NCCR Robotics), and the EU projects SEEBETTER and VISUALISE.