Intention Aware Robot Crowd Navigation
with Attention-Based Interaction Graph
Shuijing Liu*, Peixin Chang*, Zhe Huang, Neeloy Chakraborty, Kaiwen Hong, Weihang Liang,
D. Livingston McPherson, Junyi Geng, and Katherine Driggs-Campbell
University of Illinois, Urbana-Champaign
In ICRA 2023
Simulation Demo
The yellow circle is the robot; the other circles are humans
Real-world Demo
(tutorial and code here)
Abstract
We study the problem of safe and intention-aware robot navigation in dense and interactive crowds. Most previous reinforcement learning (RL) based methods fail to consider different types of interactions among all agents or ignore the intentions of people, which results in performance degradation. In this paper, we propose a novel recurrent graph neural network with attention mechanisms to capture heterogeneous interactions among agents through space and time. To encourage longsighted robot behaviors, we infer the intentions of dynamic agents by predicting their future trajectories for several timesteps. The predictions are incorporated into a model-free RL framework to prevent the robot from intruding into the intended paths of other agents. We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios. We successfully transfer the policy learned in simulation to a real-world TurtleBot 2i.
Method
Previous works in crowd navigation usually suffer from at least one of the following problems:
They fail to consider all types of observed interactions in the crowd, which leads to performance degradation in dense and highly interactive crowds.
They only consider the past and current state of humans without explicitly predicting their future trajectories. As a result, the robot sometimes exhibits unsafe or shortsighted behaviors.
To address these two problems, we propose
A novel graph neural network that uses attention mechanisms to effectively capture the spatial and temporal interactions among heterogeneous agents.
A novel method to incorporate the predicted intentions of other agents into a model-free RL framework. Our method is compatible with any trajectory predictor.
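To illustrate the first point, the core of an attention-based interaction graph can be sketched as scaled dot-product attention in which the robot's state queries the human states. The sketch below is a minimal NumPy illustration, not the authors' implementation: the state layouts, dimensions, and random projection matrices (standing in for learned parameters) are assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def robot_human_attention(robot_state, human_states, d_k=16, rng=None):
    """Scaled dot-product attention: the robot's state queries the human
    states, producing one attention weight per human and an aggregated
    crowd feature. The projection matrices are random placeholders for
    learned parameters."""
    rng = np.random.default_rng(0) if rng is None else rng
    dr, dh = robot_state.shape[-1], human_states.shape[-1]
    Wq = rng.standard_normal((dr, d_k))   # robot -> query
    Wk = rng.standard_normal((dh, d_k))   # humans -> keys
    Wv = rng.standard_normal((dh, d_k))   # humans -> values
    q = robot_state @ Wq                  # (d_k,)
    K = human_states @ Wk                 # (n_humans, d_k)
    V = human_states @ Wv                 # (n_humans, d_k)
    scores = (K @ q) / np.sqrt(d_k)       # one score per human
    weights = softmax(scores)             # attention over humans
    crowd_feature = weights @ V           # weighted aggregation
    return weights, crowd_feature

# Illustrative states: robot (px, py, vx, vy, gx, gy, radius),
# humans (px, py, vx, vy, radius) -- layout assumed for the example.
robot = np.array([0.0, 0.0, 0.5, 0.0, 5.0, 5.0, 0.3])
humans = np.array([[ 1.0, 1.0, -0.2, 0.0, 0.3],
                   [ 3.0, 0.0,  0.0, 0.4, 0.3],
                   [-2.0, 2.0,  0.1, 0.1, 0.3]])
weights, feature = robot_human_attention(robot, humans)
```

In the full model, modules like this are applied to each edge type (robot-human and human-human) and unrolled through time with a recurrent network; the sketch shows only a single spatial attention step.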
For more details, please refer to our paper.
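One way to make the robot respect the predicted intentions of others, in the spirit of the second point above, is a reward term that penalizes the robot for entering predicted future positions of a human, with intrusions at nearer prediction steps penalized more. The function below is a hedged sketch: the penalty shape, radius, and discounting constants are illustrative assumptions, not the paper's exact reward.

```python
import numpy as np

def intrusion_penalty(robot_pos, predicted_traj, danger_radius=0.5,
                      coef=2.0, discount=0.9):
    """Penalty for intruding into one human's predicted path.

    robot_pos:      (2,) current robot position.
    predicted_traj: (K, 2) predicted positions of a human over the
                    next K timesteps (from any trajectory predictor).
    Intrusions at nearer prediction steps matter more, so each step k
    is discounted by discount**k. Constants are illustrative.
    """
    penalty = 0.0
    for k, pos in enumerate(predicted_traj):
        dist = np.linalg.norm(robot_pos - pos)
        if dist < danger_radius:
            # deeper intrusion and nearer-term intrusion cost more
            penalty -= coef * (discount ** k) * (danger_radius - dist)
    return penalty

# Intruding on the human's next predicted position is penalized...
near = intrusion_penalty(np.array([0.0, 0.0]),
                         np.array([[0.1, 0.0], [3.0, 3.0]]))
# ...while staying clear of all predicted positions costs nothing.
clear = intrusion_penalty(np.array([0.0, 0.0]),
                          np.array([[3.0, 3.0]]))
```

Because the penalty only consumes predicted waypoints, any trajectory predictor can be plugged in upstream, which matches the method's claim of predictor-agnostic training.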
Results in Simulation
Here are some example episodes of our method compared with baselines and ablations.
(Notes: The robot is the yellow disk, and the robot’s goal is the red star. We outline the borders of the robot sensor range with dashed lines. Represented as empty circles, the humans in the robot’s field of view are blue and those outside are red. The ground truth future trajectories and personal zones are in gray and are only used to visualize intrusions, and the predicted trajectories are in orange.)
Results of baseline methods that do not use predictions and only consider robot-human interactions, ignoring human-human interactions:
Results of our method without a predictor, and with different types of predictors:
Demo Video (Simulation + Real World)
ICRA 2023 Presentation
Try It Yourself
Our code for training our method and testing pretrained models is publicly available at https://github.com/Shuijing725/CrowdNav_Prediction_AttnGraph.
Citation
@inproceedings{liu2023intention,
author={Liu, Shuijing and Chang, Peixin and Huang, Zhe and Chakraborty, Neeloy and Hong, Kaiwen and Liang, Weihang and McPherson, D. Livingston and Geng, Junyi and Driggs-Campbell, Katherine},
booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
title={Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph},
year={2023},
pages={12015-12021}
}