Active Vision for Human-Robot Collaboration

Topics: Deep Reinforcement Learning, Robotics, Computer Vision, HRI, Multi-Agent Systems

The aim of this project is to create adaptive artificial systems with exploratory skills and active perception capabilities flexible enough to tackle complex social environments (physical or virtual).

Unstructured social environments, e.g. building sites, produce an overwhelming amount of information, yet behaviorally relevant variables may not be directly accessible because of occlusions or other sensor limitations.

Adaptive control of the sensors is a key solution that nature has found for such problems, as shown by the foveal anatomy of the eye and its high mobility and precise control.

In this project we are using and developing Machine Learning methodologies, such as Deep Reinforcement Learning, to endow robots with similar active perception capabilities and enable them to collaborate with humans in complex environments.
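To make the idea of active perception as a reinforcement learning problem concrete, the sketch below defines a toy environment in which an agent observes only a small foveal crop of a larger scene and must move its gaze to bring a hidden target into view. This is a minimal illustration under assumed names and parameters (the `FovealSearchEnv` class, grid and fovea sizes, and reward values are all hypothetical), not the project's actual code.

```python
import numpy as np

class FovealSearchEnv:
    """Toy active-vision task: the agent sees only a foveal crop of a
    larger scene and shifts its gaze to locate a hidden target.
    All names and parameters here are illustrative assumptions."""

    def __init__(self, grid=8, fovea=2, seed=0):
        self.grid, self.fovea = grid, fovea
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Place the target at a random cell; start gaze at the center.
        self.target = self.rng.integers(0, self.grid, size=2)
        self.gaze = np.array([self.grid // 2, self.grid // 2])
        return self._observe()

    def _observe(self):
        # Partial observability: only cells inside the fovea are visible.
        scene = np.zeros((self.grid, self.grid))
        scene[tuple(self.target)] = 1.0
        lo = np.clip(self.gaze - self.fovea, 0, self.grid)
        hi = np.clip(self.gaze + self.fovea + 1, 0, self.grid)
        return scene[lo[0]:hi[0], lo[1]:hi[1]]

    def step(self, action):
        # Actions 0-3 shift the gaze: up, down, left, right.
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        self.gaze = np.clip(self.gaze + moves[action], 0, self.grid - 1)
        obs = self._observe()
        done = bool(obs.sum() > 0)      # target entered the fovea
        reward = 1.0 if done else -0.05  # step cost rewards efficient search
        return obs, reward, done
```

A deep RL agent (e.g. a recurrent policy trained with a standard policy-gradient method) would map the sequence of foveal observations to gaze movements, learning where to look next rather than processing the full scene at once.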