Efficient multi-robot navigation with lightweight policy via deep reinforcement learning
Xingrong Diao and Jiankun Wang
Abstract
In this article, we present an end-to-end collision avoidance policy based on deep reinforcement learning (DRL) for multi-agent systems, demonstrating encouraging results in real-world applications. In particular, our policy computes the agent's control commands directly from raw LiDAR observations. To further bridge the gap between simulation and the real world, we propose a multi-agent training platform built on a physics-based simulator. The policy is trained with a policy-gradient-based RL algorithm in a dense and cluttered training environment. A new reward function is introduced to address the issue of agents choosing suboptimal actions in some common scenarios. Although the training data comes exclusively from the simulation platform, the policy can be successfully deployed on real-world robots. Finally, our policy responds effectively to intentional obstructions and avoids collisions.
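For illustration only, below is a minimal sketch of what an end-to-end LiDAR-to-command policy of this kind might look like, assuming a PyTorch implementation with a small 1D-convolutional scan encoder and a Gaussian head over linear and angular velocity. The architecture, layer sizes, beam count, and action parameterization are assumptions for the sketch, not the authors' actual design.

# Minimal sketch of a lightweight LiDAR-to-command policy (illustrative only).
# The architecture, layer sizes, and two-dimensional velocity action space
# (linear and angular velocity) are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class LidarPolicy(nn.Module):
    def __init__(self, num_beams: int = 360, hidden_dim: int = 128):
        super().__init__()
        # 1D convolutions compress the raw range scan into a compact feature vector.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 1, num_beams)).shape[1]
        # Small MLP head outputs the mean of a Gaussian over (linear, angular) velocity.
        self.head = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2), nn.Tanh(),
        )
        self.log_std = nn.Parameter(torch.zeros(2))

    def forward(self, scan: torch.Tensor) -> torch.distributions.Normal:
        # scan: (batch, num_beams) raw ranges, normalized to [0, 1].
        mean = self.head(self.encoder(scan.unsqueeze(1)))
        return torch.distributions.Normal(mean, self.log_std.exp())

# Sample a control command from a single (random) scan.
policy = LidarPolicy()
action = policy(torch.rand(1, 360)).sample()  # (linear, angular) velocity

A stochastic Gaussian head is used here because it fits the policy-gradient training mentioned in the abstract; at deployment time the mean action would typically be executed instead of a sample.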
BibTeX
@article{Xingrong2024Efficient,
  title={Efficient multi-robot navigation with lightweight policy via deep reinforcement learning},
  author={Xingrong Diao and Jiankun Wang},
  journal={},
  year={2024}
}
Acknowledgements
This work is supported by the Shenzhen Science and Technology Program under Grant RCBS20221008093305007 and Grant 20231115141459001, and by the Young Elite Scientists Sponsorship Program by CAST under Grant 2023QNRC001.