Our paper supplies raw DVS noise data recorded under bright and dim lighting, and uses these recordings (as in v2e) to form a practical runtime model of DVS noise generation.
The leak noise rate is about 0.2 Hz/pixel, while the shot noise rate is about 10 Hz/pixel, roughly 50X higher. Under low illumination, shot noise events dominate the background activity (BA).
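To make the model concrete, here is a minimal sketch of such a runtime noise generator, assuming homogeneous Poisson shot noise with random polarity and ON-only leak events; the function name, sensor size, and defaults are illustrative, not our released code:

    import numpy as np

    def sample_noise_events(width=346, height=260, duration_s=0.1,
                            shot_rate_hz=10.0, leak_rate_hz=0.2, seed=0):
        # Shot noise: per-pixel Poisson process with random ON/OFF polarity.
        # Leak noise: ON-only events at a much lower fixed per-pixel rate.
        rng = np.random.default_rng(seed)
        n_pix = width * height
        chunks = []
        for rate_hz, polarities in ((shot_rate_hz, (1, -1)), (leak_rate_hz, (1,))):
            n = rng.poisson(rate_hz * duration_s * n_pix)  # total event count
            t = rng.uniform(0.0, duration_s, n)            # Poisson => uniform times
            addr = rng.integers(0, n_pix, n)               # uniform pixel addresses
            p = rng.choice(polarities, n)
            chunks.append(np.stack([t, addr % width, addr // width, p], axis=1))
        ev = np.concatenate(chunks)                        # columns: t, x, y, polarity
        return ev[np.argsort(ev[:, 0])]                    # time-sorted event stream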
We provide theory and code for 3 concrete families of DVS denoising algorithms:
FWF and DWF are a pair of very cheap filters that require only a few hundred bytes of memory and a small amount of logic. FWF uses a finite window of past event locations to decide whether a new event is signal or noise. DWF splits FWF's window evenly into two windows that store signal and noise events separately. Both are very effective for sparse scenes like those encountered in surveillance.
The key insight of DWF is that the events in the windows dynamically cluster around localized activity (such as moving people in a surveillance scene), allowing accurate classification of foreground and background activity. This way, DWF removes nearly all the noise events from the scene background without dropping the signal events from the moving objects.
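A minimal sketch of the double-window idea follows, assuming L-infinity distance matching and a single-event support threshold; the exact matching rule and default window length are illustrative, not the paper's precise algorithm:

    from collections import deque

    class DoubleWindowFilter:
        def __init__(self, window_len=8, sigma=9, support=1):
            self.signal = deque(maxlen=window_len)  # recent events judged signal
            self.noise = deque(maxlen=window_len)   # recent events judged noise
            self.sigma = sigma                      # neighborhood size in pixels
            self.support = support                  # nearby events needed for signal

        def __call__(self, x, y):
            # Count stored events (from both windows) within sigma pixels
            # of the new event, using L-infinity distance.
            near = sum(max(abs(x - px), abs(y - py)) <= self.sigma
                       for px, py in list(self.signal) + list(self.noise))
            is_signal = near >= self.support
            # Store the event in the window matching its classification, so
            # clustered foreground activity accumulates in the signal window
            # while scattered noise cycles quickly through the noise window.
            (self.signal if is_signal else self.noise).append((x, y))
            return is_signal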
The STCF is a generalization of the most widely used denoiser, the Background Activity Filter (BAF). Instead of requiring only a single correlated past event in the spatiotemporal neighborhood, it requires k events; typically k=2 or 3 is very effective for many types of applications. Its memory requirement scales exactly with the number of pixels. It is already included in the camera logic circuits of some iniVation DAVIS cameras and is a standard filter in iniVation's DV software.
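A minimal sketch of the STCF rule, assuming an 8-neighbor correlation window and per-pixel last-timestamp storage; parameter defaults are illustrative:

    import numpy as np

    class SpatioTemporalCorrelationFilter:
        def __init__(self, width=346, height=260, tau_s=0.01, k=2):
            self.t_last = np.full((height, width), -np.inf)  # last event time per pixel
            self.tau_s, self.k = tau_s, k

        def __call__(self, x, y, t):
            # Look at the 3x3 neighborhood (clipped at sensor borders).
            y0, y1 = max(0, y - 1), min(self.t_last.shape[0], y + 2)
            x0, x1 = max(0, x - 1), min(self.t_last.shape[1], x + 2)
            recent = (t - self.t_last[y0:y1, x0:x1]) <= self.tau_s
            # Exclude the pixel's own previous event from the support count.
            support = recent.sum() - ((t - self.t_last[y, x]) <= self.tau_s)
            self.t_last[y, x] = t
            return support >= self.k  # signal iff k correlated neighbors

With k=1 this rule reduces to the classical BAF.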
The MLPF is a very powerful but economical neural-network denoiser. Its memory requirement scales with the number of DVS output pixels, plus a few kB for the weights of the 2-layer multilayer perceptron. It is driven by small patches of past events centered on the current event, as illustrated.
The MLPF has nearly the same space complexity as the STCF, but is better at preserving signal because it learns structural cues in the local spatiotemporal window of past events. It is also more than 10,000X cheaper than prior machine-learning denoising methods.
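A minimal sketch of one plausible MLPF inference step, assuming a 7x7 patch, a clipped-linear age encoding, and externally trained weights w1, b1, w2, b2; the feature encoding and layer shapes here are our assumptions for illustration, not the exact published network:

    import numpy as np

    def mlpf_score(t_last, pol_last, x, y, t, w1, b1, w2, b2, r=3, tau_s=0.1):
        # Assumes the event lies at least r pixels from the sensor border
        # (border handling omitted for brevity).
        patch_t = t_last[y - r:y + r + 1, x - r:x + r + 1]
        patch_p = pol_last[y - r:y + r + 1, x - r:x + r + 1]
        # Encode event age as a value decaying linearly from 1 (just fired)
        # to 0 (older than tau_s), then flatten age + polarity channels.
        age = np.clip(1.0 - (t - patch_t) / tau_s, 0.0, 1.0)
        feat = np.concatenate([age.ravel(), patch_p.ravel()])
        hidden = np.maximum(0.0, feat @ w1 + b1)           # ReLU hidden layer
        return 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))   # P(event is signal)

Thresholding the score (e.g. at 0.5) gives the binary signal/noise decision; the weights come from offline training on clean data with injected, labeled noise.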
Our paper derives accurate theory for how well the DWF and STCF filters reject noise events, and experimentally demonstrates its validity.
The FWF false positive rate (noise falsely classified as signal) as a function of the FWF window length L in events and the neighborhood size sigma in pixels: larger L or sigma results in a higher false positive rate.
The STCF false positive rate (FPR) as a function of correlation time tau and event count k: the STCF reduces the FPR by a factor of 1000 compared to the prior-art background activity filter (BAF) by using k=4 events.
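To see why the FPR falls so steeply with k, a back-of-the-envelope model helps (our simplification for illustration, not the paper's full derivation): treat shot noise as a homogeneous Poisson process at rate R per pixel over a neighborhood of N pixels, so the expected number of correlated noise events within the correlation time is

    \lambda = N R \tau, \qquad
    \mathrm{FPR} \;\approx\; P(X \ge k)
      \;=\; 1 - \sum_{i=0}^{k-1} \frac{\lambda^{i} e^{-\lambda}}{i!},
    \qquad X \sim \mathrm{Poisson}(\lambda)

For small \lambda the FPR shrinks roughly as \lambda^k / k!, so each additional required event buys orders of magnitude of noise rejection, consistent with the 1000X figure above.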
Our paper supplies several useful datasets for studying DVS denoising. The main datasets are a hotel bar scene and a driving scene, both of which are nearly noise-free, so we can add accurately modeled DVS noise to them.
We supply several additional datasets including recordings of actual DVS noise under dim and bright lighting.
Our accurate DVS noise models allow us to generate synthetic DVS responses and add accurately modeled DVS noise to these ideal events. That way, we can for the first time quantitatively characterize how well a denoising method discriminates signal from noise events. Our ROC curves clearly demonstrate that our MLPF and STCF are better than other methods at achieving a superior combination of true positives (signal classified as signal) and true negatives (noise classified as noise), and that DWF provides powerful denoising at extremely low cost for sparse scenes. Moreover, the Area Under the Curve (AUC) plot shows that both MLPF and STCF withstand increased noise much better than previously reported methods.
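Because the injected noise carries ground-truth labels, the evaluation reduces to simple counting. A minimal sketch of the bookkeeping, with function names that are ours rather than the released code's:

    import numpy as np

    def tpr_fpr(pred_signal, is_signal):
        # One filter setting: fraction of signal kept (TPR), noise kept (FPR).
        pred = np.asarray(pred_signal, bool)
        true = np.asarray(is_signal, bool)
        tpr = (pred & true).sum() / max(true.sum(), 1)
        fpr = (pred & ~true).sum() / max((~true).sum(), 1)
        return tpr, fpr

    def roc_auc(points):
        # points: list of (FPR, TPR) pairs from sweeping a filter parameter,
        # e.g. the STCF correlation time tau; anchor the curve at the corners.
        pts = sorted(points + [(0.0, 0.0), (1.0, 1.0)])
        xs, ys = zip(*pts)
        return np.trapz(ys, xs)  # trapezoidal Area Under the Curve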
The jAER project for event sensors includes AbstractNoiseFilter and concrete subclasses for all our reported denoising methods (BAF, STCF, etc.). Users can interactively visualize and measure denoising accuracy (TPR and FPR) for each method, and inject synthetic or recorded noise at runtime. See our Benchmarking page for more details.