iSUMA (Improving automatic scene understanding through multiple sensor modalities and active perception).
The problem of automatically interpreting the surroundings of intelligent systems, known as scene understanding, is widely studied and encompasses all kinds of recognition tasks over different sensor data. A significant research gap must still be filled before solutions to this problem reach many real-world application domains. This project focuses on two relevant challenges: improving existing techniques for scene understanding from a single sensor, considering different sensor modalities and limited availability of training data; and presenting novel solutions for multi-sensor setups, with a strong focus on active perception approaches.
Efficient Scene Understanding from different sensor modalities. State-of-the-art recognition methods achieve impressive results but often require large sets of labeled training data, which are not always available for certain sensor modalities or specific application domains. iSUMA targets new models and algorithms to perform different recognition tasks in challenging scenarios, emphasizing efficiency in several respects.
Efficient recognition from event camera data. EvT+. [Paper][Code]
Activity recognition in dark scenarios (no light). EventSleep. More details here.
Underwater scene understanding with limited training. [Paper]
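Event cameras such as those used in EvT+ and EventSleep produce asynchronous streams of events (pixel coordinates, timestamp, polarity) rather than conventional frames, so recognition models typically first aggregate the stream into a dense tensor. A minimal sketch of one common aggregation, temporal binning into per-polarity event frames (the function name and binning scheme are illustrative assumptions, not the project's actual pipeline):

```python
import numpy as np

def events_to_frames(events, height, width, num_bins):
    """Accumulate a stream of events (x, y, t, polarity) into a stack of
    2D count frames, one per temporal bin and polarity channel."""
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3].astype(int)  # assumed 0/1 (negative/positive polarity)
    # Normalize timestamps into [0, 1] and assign each event a temporal bin.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    bins = np.clip((t_norm * num_bins).astype(int), 0, num_bins - 1)
    frames = np.zeros((num_bins, 2, height, width), dtype=np.float32)
    # Unbuffered scatter-add: events hitting the same cell all count.
    np.add.at(frames, (bins, p, y, x), 1.0)
    return frames
```

The resulting `(num_bins, 2, H, W)` tensor can then be fed to a standard image or transformer backbone; real event pipelines often use richer representations (e.g. time surfaces or voxel grids with bilinear temporal weighting).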
Multi-camera and multi-modal scene understanding. By developing new perception and learning methods, iSUMA exploits the complementary strengths of different sensor modalities in novel ways, using each modality more effectively than existing solutions. Another goal is to design efficient distributed sensor configurations that enable low-cost information sharing in large-scale scenarios, across time, space and data.
Semantic segmentation with Hyperspectral images. SpectralWaste. More details here.
Understanding scenes with people using multi-camera systems. More details here.
Certifiable Optimal and Distributed Estimation Algorithms. More details here.
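A core building block behind distributed estimation with low-cost information sharing is average consensus: each sensor repeatedly nudges its local estimate toward its neighbors' until the network agrees on the global average, without any central node. A minimal sketch under assumed names and a toy ring topology (not the project's certifiable algorithms, which solve much richer estimation problems):

```python
import numpy as np

def consensus_step(estimates, neighbors, step_size=0.3):
    """One round of distributed averaging: every sensor moves its local
    estimate toward those of its graph neighbors."""
    updated = estimates.copy()
    for i, est in enumerate(estimates):
        for j in neighbors[i]:
            updated[i] += step_size * (estimates[j] - est)
    return updated

# Ring of 4 sensors, each starting from a different local measurement.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = np.array([1.0, 2.0, 3.0, 4.0])
for _ in range(50):
    x = consensus_step(x, neighbors)
# Because the graph is connected and each symmetric update preserves the
# mean, all estimates converge to the global average, 2.5.
```

The same iteration pattern extends to vector-valued states (poses, landmark positions), which is where optimality certificates for the distributed solution become relevant.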
Active perception to improve the capabilities of scene understanding systems. New planning algorithms that enable one or more sensors to improve the overall understanding of dynamic and possibly unknown environments. iSUMA intends to go beyond state-of-the-art methods, which rely on low-level information such as sparse 3D features, toward richer and more advanced scene representations.
Active perception for autonomous cinematography with drones. More details here.
Active perception and Robust semantic fusion. More details here.
Learning solutions for multi-robot problems. More details here.
Multi-robot path planning based on Petri Net systems. [pdf] [pdf]
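A common baseline that active-perception planners build on is greedy next-best-view selection: among candidate viewpoints, pick the one whose visible region carries the most remaining uncertainty. A minimal sketch with binary occupancy beliefs per map cell (the view model and all names are illustrative assumptions, far simpler than planning over full scene specifications):

```python
import math

def binary_entropy(p):
    """Entropy in bits of a Bernoulli belief p (0 = certain)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_information_gain(visible_cells, cell_probs):
    """Entropy removed if the cells visible from a viewpoint become
    fully observed (an idealized, noise-free sensor model)."""
    return sum(binary_entropy(cell_probs[c]) for c in visible_cells)

def next_best_view(candidate_views, cell_probs):
    """Greedy active-perception step: choose the viewpoint whose
    visible cells carry the most remaining uncertainty."""
    return max(candidate_views,
               key=lambda v: expected_information_gain(candidate_views[v],
                                                       cell_probs))

# Occupancy beliefs for 5 map cells (0.5 = maximally uncertain).
cell_probs = {0: 0.5, 1: 0.9, 2: 0.5, 3: 0.1, 4: 0.5}
views = {"left": [0, 1], "center": [1, 3], "right": [2, 4]}
# "right" sees two maximally uncertain cells, so it is selected.
```

Going beyond this baseline means reasoning about semantics and dynamics rather than raw cell uncertainty, and coordinating several sensors or robots instead of a single greedy choice.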