1Shenzhen Key Laboratory of Robotics Perception and Intelligence, Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China.
2Jiaxing Research Institute, Southern University of Science and Technology, Jiaxing, China.
Abstract: Navigation in crowded environments shared by humans and robots remains challenging, as robots are expected to move efficiently while respecting human motion conventions. However, many existing approaches emphasize safety or efficiency while overlooking social compliance. This article proposes Learning-Risk Model Predictive Control (LR-MPC), a data-driven navigation framework that balances efficiency, safety, and social compliance. LR-MPC consists of two stages: an offline risk learning phase, in which a Probabilistic Ensemble Neural Network (PENN) is trained on risk data from a heuristic MPC-based baseline (HR-MPC), and an online adaptive inference phase, in which local waypoints are sampled under global guidance from a Multi-RRT planner. Each candidate waypoint is evaluated for risk by the PENN, and predictions are filtered by their epistemic and aleatoric uncertainty to ensure robust decision-making. The safest waypoint is then selected as the MPC input for real-time navigation. Extensive experiments demonstrate that LR-MPC outperforms baseline methods in success rate and social compliance, enabling robots to navigate complex crowds with high adaptability and low disruption.
NAMR-RRT Framework
Baseline
LR-MPC
Real-world Experiment Video