EventZoom: Learning to Denoise and Super Resolve Neuromorphic Events

Peiqi Duan, Zihao W. Wang, Xinyu Zhou, Yi Ma, Boxin Shi*

(Contact us: duanqi0001@pku.edu.cn)

We implemented a display-camera system to study event formation and degradation. The display is divided into five segments: two at 1× resolution, two at 2×, and one at 4×.


We address the problem of jointly denoising and super resolving neuromorphic events, a novel visual signal that represents thresholded temporal gradients in a space-time window. The challenge for event signal processing is that events are asynchronously generated and do not carry absolute intensity, only binary signs indicating temporal variations. To study event signal formation and degradation, we implement a display-camera system which enables multi-resolution event recording. We further propose EventZoom, a deep neural framework with a 3D U-Net backbone. EventZoom is trained in a noise-to-noise fashion, where both ends of the network are unfiltered noisy events, enforcing noise-free event restoration. For resolution enhancement, EventZoom incorporates an event-to-image module supervised by high-resolution images. Our results show that EventZoom achieves at least 40× temporal efficiency compared to state-of-the-art event denoisers. Additionally, we demonstrate that EventZoom improves performance on downstream applications including event-based visual object tracking and image reconstruction. EventZoom achieves state-of-the-art super-resolved image reconstruction results while being 10× faster.
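To make "thresholded temporal gradients" concrete, the sketch below simulates an idealized event camera from a sequence of intensity frames: a pixel emits an event whenever its log-intensity has changed by more than a contrast threshold since that pixel last fired, and the event carries only the sign (polarity) of the change. This is a generic simplified event-generation model for illustration; `frames_to_events` and its `threshold` parameter are hypothetical names, not the paper's simulator or the EventNFS pipeline.

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Idealized event-camera model (illustrative sketch):
    a pixel fires when its log-intensity change since its last event
    exceeds `threshold`; the event stores only the sign of the change."""
    eps = 1e-6                                  # avoid log(0)
    log_ref = np.log(frames[0] + eps)           # per-pixel reference level
    events = []                                 # (t, y, x, polarity)
    for t, frame in enumerate(frames[1:], start=1):
        log_cur = np.log(frame + eps)
        diff = log_cur - log_ref
        fired = np.abs(diff) >= threshold       # which pixels cross the threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            events.append((t, y, x, int(np.sign(diff[y, x]))))
        log_ref[fired] = log_cur[fired]         # reset reference where events fired
    return events

# Toy example: a uniformly brightening 2x2 patch emits only positive events.
frames = [np.full((2, 2), v, dtype=float) for v in (0.1, 0.2, 0.4)]
evs = frames_to_events(frames, threshold=0.2)
```

Note that absolute intensity never appears in the output stream, only signed threshold crossings, which is why denoising and super-resolving events cannot rely on conventional image statistics.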

EventZoom results. Blue/red: positive/negative events.




@InProceedings{Duan_2021_CVPR,
    author    = {Duan, Peiqi and Wang, Zihao W. and Zhou, Xinyu and Ma, Yi and Shi, Boxin},
    title     = {EventZoom: Learning To Denoise and Super Resolve Neuromorphic Events},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {12824-12833}
}

















Visual object tracking results. Red boxes mark ground-truth bounding boxes; green boxes mark predictions.

Comparison of event-based image reconstruction performance on our EventNFS dataset.