These videos show the EDFLOW camera and compare the EDFLOW Adaptive Block Matching Optical Flow + Slice-Based FAST corner detector (ABMOF+SFAST) and vanilla dense ABMOF with EV-FlowNet and the local planes (LP) method.
This page includes more sample sequences, not reported in the paper, to show EDFLOW output on more real-world inputs.
This video shows the DAVIS346Zynq camera and its output, both live and from the SD card.
This sequence of a spinning dot that accelerates from zero to high speed over a short time would be very challenging for conventional frame-based optical flow. It shows EDFLOW's adaptive sample rate varying from 0.5 Hz to nearly 1 kHz and the adaptive area-event-count slice exposure number.
The slice accumulation time decreases from 500 ms to about 1.5 ms as the dot speeds up from 0 to 3.5k px/s.
The area-event-count number decreases from 1000 to about 500 to hold the average matching distance at its target value.
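As a rough illustration of this feedback, the area event count can be nudged down whenever the average matching distance overshoots its target, and up when it undershoots. This is a minimal sketch, not the EDFLOW FPGA logic; the target, gain, and clamp values here are illustrative assumptions.

```python
# Minimal sketch of the area-event-count feedback described above.
# TARGET and GAIN are illustrative assumptions, not EDFLOW's settings.
TARGET_MATCH_DIST_PX = 3.0   # desired average block-matching distance (assumed)
GAIN = 0.1                   # proportional feedback gain (assumed)

def adapt_area_event_count(area_event_count: float, avg_match_dist_px: float) -> float:
    """Shrink the slice 'exposure' (events per area) when blocks move too far
    between slices, so slices rotate faster; grow it when they move too little."""
    rel_err = (avg_match_dist_px - TARGET_MATCH_DIST_PX) / TARGET_MATCH_DIST_PX
    area_event_count *= 1.0 - GAIN * rel_err
    return min(max(area_event_count, 100.0), 10_000.0)  # clamp (assumed limits)
```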
Data is from:
S.-C. Liu, B. Rueckauer, E. Ceolini, A. Huber, and T. Delbruck, “Event-Driven Sensing for Efficient Perception: Vision and Audition Algorithms,” IEEE Signal Process. Mag., vol. 36, no. 6, pp. 29–37, Nov. 2019, doi: 10.1109/MSP.2019.2928127. [Online]. Available: http://dx.doi.org/10.1109/MSP.2019.2928127
and is available in our spinningDot folder as an AEDAT-2.0 file. Note that the sample shown here is not the same one used for the paper figure, but it is from the same series of recordings.
The comparisons are on the slider_hdr_far sequence from Elias Mueggler et al. and three sequences from the MVSEC dataset from Alex Zhu and colleagues in the Daniilidis lab at UPenn.
The EDFLOW and LP methods in these videos include running accuracy metrics compared with ground-truth flow, computed over a window of the past N=10k events.
See the Benchmarking page for how to measure accuracy.
AEE is Average Endpoint Error, AREE is Average Relative Endpoint Error, and AAE is Average Angular Error. These statistics are measured over the past N events.
Note: By contrast with EV-FlowNet, we use what we believe is a more sensible metric of AEE in px/s, not px/frame, since the frame rate is arbitrary for DVS cameras.
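For concreteness, the three statistics can be computed over the past N per-event flow estimates roughly as follows. This is a minimal numpy sketch assuming per-event 2D flow vectors in px/s; the actual jAER implementation may differ in detail.

```python
import numpy as np

def flow_metrics(v_est, v_gt):
    """Accuracy statistics over the past N per-event flow estimates.

    v_est, v_gt: (N, 2) arrays of flow vectors in px/s (not px/frame).
    Returns (AEE, AREE, AAE_deg).
    """
    err = np.linalg.norm(v_est - v_gt, axis=1)        # endpoint error per event, px/s
    gt_speed = np.linalg.norm(v_gt, axis=1)
    aee = err.mean()                                  # Average Endpoint Error
    nonzero = gt_speed > 0                            # relative error needs moving GT
    aree = (err[nonzero] / gt_speed[nonzero]).mean()  # Average Relative Endpoint Error
    cos = (v_est * v_gt).sum(axis=1) / (np.linalg.norm(v_est, axis=1) * gt_speed + 1e-9)
    aae = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()  # Average Angular Error
    return aee, aree, aae
```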
Running accuracy stats for slider_hdr_far
See the slider_hdr_far folder for the sequence in rosbag and AEDAT-2.0 formats, along with our generated GT flow numpy files and EV-FlowNet output.
Source data files for these sequences are available in this folder.
Compares EDFLOW's ABMOF and ABMOF+SFAST optical flow output.
This INI run data was recorded by Tobi Delbruck ca. 2017 with a DAVIS346 camera.
Shows flow from EDFLOW algorithms on fast down-looking DAVIS346 outdoor drone flight data recorded in collaboration with RPG.
The flight test data was recorded in 2018 by Elias Mueggler, Henri Rebecq and Tobi Delbruck.
Shows EDFLOW methods on data collected during the 2017 CapoCaccia Neuromorphic workshop, at the Hotel dei Pini in Alghero, Sardinia.
This hotel-bar data was recorded by Tobi Delbruck.
The car was driving at 120 km/h, producing an image flow of 40,000 px/s. A conventional camera running at a 100 Hz frame rate would have a completely different image for each frame: at 40,000 px/s the scene shifts 400 px between frames, more than the full 346×260 pixel array of the sensor.
The EDFLOW ABMOF algorithm measures this flow using slice durations of about 150 µs/slice.
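These numbers are consistent: to keep the per-slice displacement near the block-matching search distance of a few pixels, the slice time must scale inversely with flow speed. A quick back-of-the-envelope check (the 6 px figure is borrowed from the accelerating-car sequence below, so it is an assumption here):

```python
flow_px_per_s = 40_000            # image flow at 120 km/h (from the text)
target_match_px = 6               # target block-matching distance (assumed, see below)
slice_s = target_match_px / flow_px_per_s
print(f"{slice_s * 1e6:.0f} us/slice")   # -> 150 us/slice, as quoted above
```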
This 120km/h DVS data was recorded by ETH undergraduates Tomasz Zaluska and Luis Jira working with Min Liu and Tobi Delbruck from a down-looking DVS mounted outside a car aimed at the pavement.
For this sequence, the car accelerated from a stop up to a flow of over 4000 px/s. Using ConstantDuration slices with slice adaptation turned on, the slice duration reduced itself from an initial value of about 20 ms down to less than 2 ms, holding the average block-matching distance at about 6 pixels.
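A toy simulation of this ConstantDuration adaptation reproduces the reported behavior, with the duration falling from 20 ms toward 1.5 ms as the flow ramps up. The gain, settling iterations, and clamp limits are assumptions; only the 6 px target and 20 ms starting value come from the sequence above.

```python
# Toy simulation of ConstantDuration slice adaptation (illustrative only):
# servo the slice duration so that speed * duration stays near 6 px.
TARGET_PX, GAIN = 6.0, 0.2                 # target match distance (from text), assumed gain
dur_s = 0.020                              # initial slice duration of ~20 ms (from text)
for speed_px_s in range(500, 4001, 500):   # car accelerates toward 4000 px/s
    for _ in range(50):                    # let the feedback settle at this speed
        match_px = speed_px_s * dur_s      # displacement during one slice
        dur_s *= 1.0 - GAIN * (match_px - TARGET_PX) / TARGET_PX
        dur_s = min(max(dur_s, 0.0015), 0.020)   # clamp (assumed limits)
    print(f"{speed_px_s:5d} px/s -> {dur_s * 1e3:5.2f} ms/slice")
```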
This accelerating car data was recorded by ETH undergraduates Tomasz Zaluska and Luis Jira working with Min Liu and Tobi Delbruck from a down-looking DVS mounted outside a car aimed at the pavement.