Planning and navigation in highly dynamic environments
ROBOCOMPLEX
Deep reinforcement learning will be the main underlying technique for all the sub-objectives. The ROS platform will be used for the implementation, and experiments will be performed with real robots. The specific tasks are:
Cooperative reinforcement learning for collaborative and adversarial behaviours.
Multi-robot cooperative collision avoidance.
Uncertainty-based deep reinforcement learning robot navigation.
Integrating 3D-DOVS rule-based and DRL planners for dynamic environments.
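The tasks above share a common DRL core. As a minimal illustration of how such navigation policies are typically trained, the sketch below shows a shaped step reward that rewards progress toward the goal and penalizes collisions and near-obstacle states; the constants and the function itself are illustrative assumptions, not the project's actual formulation.

```python
# Hypothetical reward shaping for DRL robot navigation (illustrative,
# not the project's exact design): reward goal progress, penalize
# collisions and proximity to obstacles.
GOAL_REWARD = 10.0        # assumed terminal bonus on reaching the goal
COLLISION_PENALTY = -10.0 # assumed terminal penalty on collision
PROGRESS_WEIGHT = 1.0     # assumed weight on distance-to-goal progress
SAFETY_RADIUS = 0.5       # assumed clearance threshold in metres

def navigation_reward(prev_dist, dist, min_obstacle_dist, goal_tol=0.2):
    """Return (reward, done) for one step of a navigation episode."""
    if min_obstacle_dist <= 0.0:          # collision: terminate with penalty
        return COLLISION_PENALTY, True
    if dist < goal_tol:                   # goal reached: terminate with bonus
        return GOAL_REWARD, True
    reward = PROGRESS_WEIGHT * (prev_dist - dist)   # progress toward the goal
    if min_obstacle_dist < SAFETY_RADIUS:           # penalize unsafe clearance
        reward -= 0.5 * (SAFETY_RADIUS - min_obstacle_dist)
    return reward, False
```

In a ROS setup, `prev_dist`, `dist` and `min_obstacle_dist` would be computed from odometry and laser scans at each control step; the agent then maximizes the discounted sum of these rewards.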
A set of novel autonomous navigation techniques for highly dynamic environments, in the context of human-centred and social navigation: (i) RUMOR, based on deep reinforcement learning, which uses a descriptive robocentric velocity-space model to extract dynamic environment information, enhancing training effectiveness and scenario interpretation; (ii) AVOCADO, which poses an adaptive control problem aimed at adapting in real time to the degree of cooperation of other robots and agents; adaptation is achieved through a novel nonlinear opinion dynamics design that relies solely on sensor observations; (iii) SHINE, a novel motion planner that selects the topology class that best imitates human behaviour using a deep neural network.
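To give intuition for the opinion-dynamics mechanism behind AVOCADO, the following is a deliberately simplified, scalar nonlinear opinion dynamics update; the parameter names and values (`d`, `u`, `alpha`, `b`) are illustrative assumptions and do not reproduce AVOCADO's actual multi-agent design.

```python
import math

# Minimal scalar nonlinear opinion dynamics, integrated with Euler steps.
# z encodes the robot's "opinion" about which side to pass on; the small
# bias b stands in for sensor-driven input from observed agents. All
# parameters here are illustrative assumptions.
def opinion_step(z, d=1.0, u=2.0, alpha=1.2, b=0.1, dt=0.05):
    """One Euler step of dz/dt = -d*z + u*tanh(alpha*z + b)."""
    return z + dt * (-d * z + u * math.tanh(alpha * z + b))

# With attention u above the bifurcation threshold, a near-neutral
# opinion commits to one side instead of deadlocking at z = 0.
z = 0.0
for _ in range(200):
    z = opinion_step(z)
```

The key property, rapid commitment away from the indecisive state `z = 0`, is what allows an agent to break symmetric deadlocks when the other agent does not cooperate.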
An occupation-aware UGV trajectory planner for monitoring missions in dynamic environments (MADA). The robot identifies areas with moving obstacles and computes paths that avoid densely occupied dynamic regions based on their occupation.
A safe, robust and highly manoeuvrable 3D navigation technique (DWA-3D) for UAVs in a priori unknown, cluttered environments. We provide a theoretical-empirical method for adjusting the parameters of the objective function to be optimized, easing the classical difficulty of tuning them.
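The tuning problem concerns the weighted terms of the Dynamic Window Approach objective. The sketch below uses the standard generic DWA objective (heading, clearance and speed terms) to show what those parameters are; the weights, term definitions and candidate values are illustrative assumptions, not the paper's exact 3D formulation.

```python
import math

# Generic DWA objective (illustrative): higher is better. ALPHA, BETA,
# GAMMA are the weights whose tuning DWA-3D's method addresses; the
# values here are assumed for demonstration only.
ALPHA, BETA, GAMMA = 0.8, 0.1, 0.1   # heading, clearance, speed weights

def dwa_objective(heading_err, clearance, speed, max_clear=2.0, max_speed=1.0):
    """Score one sampled velocity command from the dynamic window."""
    heading_term = 1.0 - abs(heading_err) / math.pi      # alignment in [0, 1]
    clear_term = min(clearance, max_clear) / max_clear   # saturated clearance
    speed_term = speed / max_speed                       # prefer faster motion
    return ALPHA * heading_term + BETA * clear_term + GAMMA * speed_term

# Pick the best command among sampled (heading error rad, clearance m, speed m/s)
candidates = [
    (0.1, 2.0, 0.2),
    (1.5, 2.0, 1.0),
    (0.1, 0.2, 1.0),
]
best = max(candidates, key=lambda c: dwa_objective(*c))
```

The behaviour of the planner hinges on the relative magnitudes of `ALPHA`, `BETA` and `GAMMA`, which is why a principled theoretical-empirical adjustment method is valuable: a 3D version adds vertical motion to the sampled window but faces the same weight-tuning difficulty.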