SLAM with event-based camera for drone navigation

Objective:

Develop a SLAM algorithm that uses an event-based camera for high-speed object detection.

Summary:

Drones have many applications in several areas where the control and sensing systems are essential. A vision system acts like eyes, providing visual information about the environment. However, a drone can fly at high speed and needs to obtain this information quickly. The main drawback of conventional cameras is their limited frame rate; when the drone moves fast, the captured images suffer from motion blur, which causes a loss of scene information.

Figure 1a. An image captured by the event-based camera.

Figure 1b. Events from the event-based camera, marked in red and blue.

To address this, a new camera technology called the event-based camera [1] detects per-pixel brightness changes (events) with low latency, allowing fast, asynchronous detection. That is, the camera detects a brightness change at a pixel and captures data only at that specific pixel; if several pixels are activated at the same time, all of that information is captured as a set of events (see Figure 1). Because only the data from activated pixels is stored, acquisition is faster; and because pixels fire only when a brightness change occurs, the acquisition is asynchronous.
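To make the event data format concrete, the sketch below shows one common way to represent the event stream, where each event carries a pixel location, a timestamp, and a polarity, and how a time window of events can be accumulated into an image similar to Figure 1b. The field names and the 240x180 resolution are illustrative assumptions, not the interface of any particular camera driver.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

# Assumed sensor resolution for illustration (typical of early DVS sensors).
WIDTH, HEIGHT = 240, 180

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds (microsecond resolution is common)
    polarity: int   # +1 brightness increase, -1 brightness decrease

def accumulate(events: List[Event], t_start: float, t_end: float) -> np.ndarray:
    """Accumulate events inside a time window into a signed image,
    similar to the red/blue visualization in Figure 1b."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
    for e in events:
        if t_start <= e.t < t_end:
            frame[e.y, e.x] += e.polarity
    return frame
```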

Figure 2. A 2D SLAM using an event-based camera.

For the problem of Simultaneous Localization and Mapping (SLAM), it is important to share information from frame to frame, since these data link the image sequence and allow the pose and the map to be estimated. Several works have proposed event-based cameras for SLAM systems. Weikersdorfer et al. [2] proposed a 2D SLAM system in which the robot is in constant motion; the pose and map are generated from features observed on the ceiling, whose depth is assumed constant (Figure 2). On the other hand, Weikersdorfer et al. [3] proposed EB-SLAM-3D (see Figure 3), which combines an event-based camera with an RGB-D sensor to build a sparse 3D point map; they apply a modified particle filter to estimate the current pose and build the map.
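As a rough illustration of the particle-filter idea used in [2] and [3], the following sketch updates a set of 2D pose particles event by event against a precomputed likelihood map: each particle is diffused with motion noise, re-weighted by how well the event projects onto the map, and resampled. The map scoring, noise parameters, and pose parameterization are simplifying assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES = 200
MAP_SIZE = 100                                      # assumed square map, in cells
likelihood_map = rng.random((MAP_SIZE, MAP_SIZE))   # placeholder map scores

# Particles: each row is a pose (x, y, theta) with an associated weight.
particles = np.zeros((N_PARTICLES, 3))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)

def predict(particles, sigma_xy=0.05, sigma_theta=0.01):
    """Diffuse particles with random-walk motion noise (no odometry assumed)."""
    noise = rng.normal(0.0, [sigma_xy, sigma_xy, sigma_theta], particles.shape)
    return particles + noise

def update(particles, weights, event_xy):
    """Re-weight each particle by how well the event, transformed through the
    particle's pose, lands on a high-likelihood map cell."""
    for i, (px, py, theta) in enumerate(particles):
        c, s = np.cos(theta), np.sin(theta)
        wx = px + c * event_xy[0] - s * event_xy[1]
        wy = py + s * event_xy[0] + c * event_xy[1]
        ix = int(np.clip(wx, 0, MAP_SIZE - 1))
        iy = int(np.clip(wy, 0, MAP_SIZE - 1))
        weights[i] *= likelihood_map[iy, ix] + 1e-9
    return weights / weights.sum()

def resample(particles, weights):
    """Draw particles proportionally to their weights and reset the weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```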

Figure 3. EB-SLAM-3D map and trajectory from [3].

The proposal is to develop an event-based camera algorithm and integrate it into a SLAM system, taking advantage of the camera's qualities to achieve low computational cost and short processing time. It must be implemented on a quadrotor UAV available at the LAB.

References

[1] J. Conradt, R. Berner, M. Cook and T. Delbruck, "An embedded AER dynamic vision sensor for low-latency pole balancing," 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, Kyoto, 2009, pp. 780-785.

[2] D. Weikersdorfer, R. Hoffmann and J. Conradt, "Simultaneous Localization and Mapping for Event-Based Vision Systems," Lecture Notes in Computer Science, vol. 7963, 2013, pp. 133-142, doi: 10.1007/978-3-642-39402-7_14.

[3] D. Weikersdorfer, D. B. Adrian, D. Cremers and J. Conradt, "Event-based 3D SLAM with a depth-augmented dynamic vision sensor," 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014, pp. 359-364.

State of the project

The project started in 2018 and is currently under development.

Students involved in the project:

At present (October 2019), one PhD candidate is involved in this project.

Publications from the team:

  1. A Stereo Vision and Semantic Segmentation Approach for SLAM in Dynamic Outdoor Environments. Daniela Esparza and Gerardo Flores; in preparation.

Financial support:

CIO

This thesis is developed in collaboration with PhD, master's, and undergraduate students who are LAB members.