1Shenzhen Key Laboratory of Robotics Perception and Intelligence, Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China.
2Jiaxing Research Institute, Southern University of Science and Technology, Jiaxing, China.
Abstract: Navigation in human-robot shared crowded environments remains challenging, as robots are expected to move efficiently while respecting human motion conventions. However, many existing approaches emphasize safety or efficiency while overlooking social awareness. This article proposes Learning-Risk Model Predictive Control (LR-MPC), a data-driven navigation algorithm that balances efficiency, safety, and social awareness. LR-MPC consists of two phases: an offline risk learning phase, in which a Probabilistic Ensemble Neural Network (PENN) is trained on risk data from a heuristic MPC-based baseline (HR-MPC), and an online adaptive inference phase, in which local waypoints are sampled and globally guided by a Multi-RRT planner. Each candidate waypoint is evaluated by the PENN, which predicts a risk value reflecting both safety and human comfort, since social cues are embedded in the learned risk signal during training. Predictions are further refined by filtering out those with high epistemic or aleatoric uncertainty, ensuring reliable decision-making. The most suitable waypoint is then passed to the MPC controller for real-time execution. Extensive experiments demonstrate that LR-MPC outperforms baseline methods in success rate and social awareness, enabling robots to navigate complex crowds with high adaptability and low disruption.
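The online inference step described above — ensemble risk prediction, uncertainty filtering, and waypoint selection — can be illustrated with a minimal sketch. This is not the paper's implementation: the ensemble members here are random linear models standing in for trained networks, and all names (`ProbEnsemble`, `select_waypoint`) and thresholds are hypothetical. It shows the standard decomposition used by probabilistic ensembles: epistemic uncertainty as disagreement across members, aleatoric uncertainty as the average predicted noise.

```python
import numpy as np

rng = np.random.default_rng(0)

class ProbEnsemble:
    """Toy stand-in for a PENN: each member maps a waypoint feature
    vector to a (mean, variance) risk prediction. Members here are
    random linear models in place of trained networks."""
    def __init__(self, n_members=5, dim=2):
        self.W = rng.normal(size=(n_members, dim))
        self.b = rng.normal(size=n_members)

    def predict(self, x):
        means = self.W @ x + self.b            # per-member risk mean
        ale_vars = np.full(len(means), 0.05)   # per-member predicted noise (fixed here)
        mu = means.mean()                      # ensemble risk estimate
        epistemic = means.var()                # disagreement across members
        aleatoric = ale_vars.mean()            # average predicted noise
        return mu, epistemic, aleatoric

def select_waypoint(ensemble, candidates, eps_max=0.5, ale_max=0.2):
    """Discard candidates whose uncertainty exceeds the thresholds,
    then return the remaining waypoint with the lowest predicted risk."""
    best, best_risk = None, np.inf
    for wp in candidates:
        mu, eps, ale = ensemble.predict(wp)
        if eps > eps_max or ale > ale_max:
            continue  # prediction deemed unreliable: skip this waypoint
        if mu < best_risk:
            best, best_risk = wp, mu
    return best, best_risk
```

In LR-MPC the surviving lowest-risk waypoint would then be handed to the MPC controller as the local goal; here the thresholds simply control how conservative the filter is.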
LR-MPC Framework
Baseline
Real-world Experiment Video