WP coleader
The VORTEX project proposes a new approach for exploring unknown indoor environments using a fleet of autonomous drones (UAVs). We propose to define a strategy based on swarm intelligence that exploits only vision-based behaviors. The fleet will deploy as a dynamic graph, self-reconfiguring according to events and discovered areas. Without requiring any mapping or wireless communication, the drones will coordinate through mutual perception and communicate by visual signs. This approach will be developed with RGB and event cameras to achieve fast, low-energy navigation. Performance, swarm properties, and robustness will be evaluated by building a demonstrator that extends a quadrotor prototype developed in the consortium.
WP leader
This project aims to increase the navigation autonomy of search-and-rescue drones while preserving their energy autonomy. This requires improving the Simultaneous Localization And Mapping (SLAM) and obstacle-avoidance algorithms already employed on drones. Towards this goal, we advocate enhancing the sensing and processing tasks through low-energy hardware, such as event cameras and Field-Programmable Gate Arrays (FPGAs), and designing SLAM and obstacle-avoidance algorithms that capitalize on deep neural network (DNN) architectures adapted to this new hardware. The project will prototype such an integrated system and make it available to the scientific community to allow further investigation of the opportunities brought by this novel concept of drone architecture.
Supervisor
Project NOETIC aims to foster research and development based on an emerging, bio-inspired computer vision paradigm known as neuromorphic vision, which relies on event cameras. These sensors surpass the conventional sensing modalities used in computer vision in terms of energy efficiency and sensing rate, making them particularly suited for embedded processing on board air and ground vehicles. These unique properties, which result from the asynchronous and independent capture of brightness changes at each pixel, are also advantageous for privacy-preserving computer vision tasks. Consequently, the unconventional, asynchronous, and sparse output of event cameras has prompted interest in its exploitation within the state-of-the-art machine learning paradigm, namely deep neural networks. Given the recency of event cameras, however, the full potential of deep learning, which is notoriously data-demanding, has not yet been unlocked, as new dedicated algorithms and models need to be developed. To mitigate this bottleneck, the goal of NOETIC is to leverage conventional camera imagery and models as a proxy for deep neural networks operating on event data. In particular, we advocate the use of transfer learning methods, which can largely compensate for the lack of data in a new domain by exploiting abundant data from an existing one. To address the domain mismatch between frames and events, we will investigate adaptation schemes that leverage prior sensor knowledge, for example via feature alignment, so as to attain space invariance and enable the use of a single architecture for both modalities. Finally, we will explore the complementarity of the frame-based and event-based modalities via early or late fusion schemes. In terms of application scope, NOETIC aims at the integration of neuromorphic vision in outdoor and indoor environments, such as autonomous vehicles and smart-living technology, respectively.
Co-supervisor
The project aims to develop a computer-vision-based approach to functional capacity evaluation (FCE), namely the assessment of a person's ability to perform activities of daily living and work tasks. Such a system can be generalized to automatically evaluate physical rehabilitation exercises, notably for supervising patients recovering from surgery or being treated for various musculoskeletal disorders.