EDFLOW21 is the dataset and codebase associated with the paper:

Liu, Min, and Tobi Delbruck. 2022. “EDFLOW: Event Driven Optical Flow Camera with Keypoint Detection and Adaptive Block Matching.” IEEE Transactions on Circuits and Systems for Video Technology (early access): 1–1. https://doi.org/10.1109/tcsvt.2022.3156653.
See the preprint PDF.
We would appreciate citations to this paper.
Developed by the Sensors Group of the Inst. of Neuroinformatics, Univ. of Zurich and ETH Zurich.
Information about other datasets and tools is available on the Sensors Group webpage.
Optical flow and keypoint tracking are cornerstones of visual navigation for robotics. Our EDFLOW camera computes accurate flow at keypoints detected from the brightness-change events of a neuromorphic event camera, so it works well even with fast motion under bad lighting; see the EDFLOW videos. EDFLOW computes flow with the same block-matching technology used for the high-quality motion estimation in MPEG video compression. Most computer vision researchers avoid block matching because it is very expensive to compute on conventional computers, but our EDFLOW logic circuits perform more than 1200 math operations per clock cycle to compute the multi-scale block matching. The resulting flow measurements are as accurate as those of an expensive deep convolutional network. EDFLOW completely offloads the expensive flow computation to the camera, leaving the robot's CPU free for general-purpose computing and its GPU free for AI inference.
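To make the block-matching idea concrete, here is a minimal single-scale sketch in Python/NumPy (the function and argument names are ours, not from the EDFLOW codebase). It does serially what the FPGA does in parallel: for a keypoint in the current event slice, it searches the past slice for the displacement whose 25x25 block minimizes the sum of absolute differences (SAD).

```python
import numpy as np

def block_match(curr_slice, past_slice, x, y, block=25, search=5):
    """Find the flow vector (dx, dy) at keypoint (x, y) by SAD block matching.

    curr_slice, past_slice: 2D event-count slices of the same shape.
    block: block size in pixels (25x25 in EDFLOW).
    search: maximum match distance in pixels at this scale.
    Assumes the keypoint is at least block//2 + search pixels from the border.
    """
    r = block // 2
    ref = curr_slice[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
    best_sad, best = np.inf, (0, 0)
    for dy in range(-search, search + 1):   # candidate displacements
        for dx in range(-search, search + 1):
            cand = past_slice[y + dy - r:y + dy + r + 1,
                              x + dx - r:x + dx + r + 1].astype(np.int32)
            sad = np.abs(ref - cand).sum()  # sum of absolute differences
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best                             # displacement with minimum SAD
```

On the FPGA the candidate SADs at each scale are evaluated concurrently rather than in nested loops, which is how the hardware sustains more than 1200 operations per clock cycle.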
Figure: (a) 50 ms 3D space-time Dynamic Vision Sensor (DVS) event cloud from a car dashboard. (b) Automatically exposed slice of brightness change events, with magnification of one corner; the grid shows the areas used for event slice exposure control. (c) The corner shape on the event count slice for the keypoint. (d) The slice with all detected keypoints overlaid. (e) Illustration of the basic (single-scale) block-matching optical flow. (f) Snapshot of 50 ms of final optical flow vector output from the camera, computed at the detected keypoints. (g) Feedback control of slice exposure duration.
Event cameras such as the Dynamic Vision Sensor (DVS) are useful because of their low latency, sparse output, and high dynamic range. This paper proposes a DVS+FPGA camera platform and uses it to demonstrate the hardware implementation of event-based corner keypoint detection and adaptive block-matching optical flow. To adapt the sample rate dynamically, events are accumulated into event slices using the area event count slice method. The area event count is feedback-controlled by the average optical flow matching distance. Corners are detected by streaks of accumulated events on event slice rings of 1 and 2 pixels. Corner detection takes only 6 clock cycles (16 MHz event rate at the 100 MHz clock frequency). At the corners, flow vectors are computed in 100 clock cycles (1 MHz event rate). The multi-scale block match size is 25x25 pixels and the flow vectors span up to a 30-pixel match distance over the 3 matching scales. The FPGA processes the sum-of-absolute-differences (SAD) block matching at 123 GOp/s, the equivalent of 1230 Op/clock cycle. EDFLOW is much more accurate on the MVSEC drone and driving optical flow benchmarking sequences than the previous best DVS FPGA optical flow implementation, and achieves accuracy similar to that of the CNN-based EV-FlowNet.
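As a rough illustration of the adaptive slicing described above, the sketch below accumulates events into the current slice, rotates the slices when any grid area reaches the area event count, and servos that count with a simple proportional feedback on the average match distance. The class and parameter names, the three-slice rotation, and the proportional controller are our assumptions for illustration; the actual controller lives in the camera's logic and is specified in the paper.

```python
import numpy as np

class AdaptiveSlicer:
    """Accumulate DVS events into rotating event slices using the area event
    count method, with the count servoed by the average match distance."""

    def __init__(self, width, height, area=64, area_count=1000,
                 target_dist=8.0, gain=0.05):
        # three slices (assumed): [0] collecting, [1] and [2] past slices
        self.slices = [np.zeros((height, width), np.uint16) for _ in range(3)]
        self.area = area              # grid cell size in pixels (see figure (b))
        self.area_count = area_count  # events per cell that trigger rotation
        self.tallies = np.zeros((height // area + 1, width // area + 1),
                                np.uint32)  # +1 covers partial edge cells
        self.target_dist = target_dist  # desired average match distance (px)
        self.gain = gain                # proportional feedback gain (assumed)

    def add_event(self, x, y):
        self.slices[0][y, x] += 1
        ax, ay = x // self.area, y // self.area
        self.tallies[ay, ax] += 1
        if self.tallies[ay, ax] >= self.area_count:  # any cell full -> rotate
            self.rotate()

    def rotate(self):
        # the oldest slice is cleared and reused as the new collecting slice
        self.slices = [self.slices[-1]] + self.slices[:-1]
        self.slices[0][:] = 0
        self.tallies[:] = 0

    def update_area_count(self, avg_match_dist):
        # Short match distances mean slices rotate too quickly: raise the
        # count so more motion accumulates per slice; long distances lower it.
        err = self.target_dist - avg_match_dist
        self.area_count = max(1, round(self.area_count * (1 + self.gain * err)))
```

The effect is a scene-dependent "exposure": fast motion fills areas quickly, so slices shorten and match distances stay within the 30-pixel search span, while slow motion lengthens slices so there is enough displacement to measure.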
Min Liu <minliu@ini.uzh.ch> (Sensors Group, INI, UZH-ETH Zurich)
Tobi Delbruck <tobi@ini.uzh.ch> (Sensors Group, INI, UZH-ETH Zurich)