Abstract: Due to a lack of safety considerations, the application of Multi-Agent Reinforcement Learning (MARL) to real-world environments remains limited. Ensuring safety in MARL is therefore both essential and urgent. However, only a few studies consider the safe MARL problem, and real-world applications of safe MARL algorithms remain largely unexplored. To fill this gap, we provide a framework based on soft constrained policy optimization, within which we develop practical algorithms for the cooperative game setting. First, we introduce the problem formulation of safe MARL. Second, we analyze safe policy optimization based on soft constrained optimization and propose a safe learning framework for MARL; the framework can be plugged into existing MARL algorithms without manually fine-tuning safety bounds. Third, we investigate sim-to-real problems and conduct both simulation and real-world experiments to evaluate the effectiveness of our algorithms. Finally, the comprehensive experimental results indicate that our method achieves a favorable balance between reward and safety performance and outperforms several strong baselines.
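As a rough illustration only (the paper's exact update rule is not reproduced here), the sketch below shows one common way to realise a soft constraint that does not require hand-tuned safety bounds: a Lagrangian-style penalty whose multiplier is learned alongside the policy. All names (`soft_constrained_policy_loss`, `lam`, `cost_limit`, and so on) are hypothetical and do not correspond to the paper's implementation.

```python
import torch

# Hypothetical soft-constrained policy update (generic Lagrangian-style sketch;
# the function and argument names below are illustrative, not the paper's API).

def soft_constrained_policy_loss(log_prob_ratio, advantages, cost_advantages,
                                 episode_cost, cost_limit, lam):
    """Combine the reward objective with a soft safety-cost penalty.

    log_prob_ratio:  pi_new(a|s) / pi_old(a|s) for sampled actions
    advantages:      reward advantage estimates
    cost_advantages: cost (safety) advantage estimates
    episode_cost:    average accumulated cost over recent episodes
    cost_limit:      safety bound d
    lam:             Lagrange multiplier (learned rather than hand-tuned)
    """
    reward_term = -(log_prob_ratio * advantages).mean()
    cost_term = (log_prob_ratio * cost_advantages).mean()
    # Soft constraint: penalise expected cost in proportion to the multiplier.
    policy_loss = reward_term + lam.detach() * cost_term
    # Multiplier loss: gradient descent on this increases lam when the
    # constraint is violated and decreases it otherwise (lam is clamped >= 0
    # after each step).
    lam_loss = -lam * (episode_cost - cost_limit)
    return policy_loss, lam_loss
```

In a cooperative MARL setting, each agent (or a shared learner) could optimize `policy_loss` with its own advantage estimates while `lam` is updated from `lam_loss`, so the strength of the safety penalty adapts automatically instead of being fixed in advance.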
The robot follows the simulation and performs a touch task while ensuring safety.
The two robots work together to complete Peg-in-Hole tasks while ensuring safety (the red areas denote unsafe regions).
Comparison of our method (with action smoothness) against a baseline (without action smoothness).