ViNG: Learning Open-World Navigation with Visual Goals

Berkeley Artificial Intelligence Research

International Conference on Robotics and Automation (ICRA) 2021

Xi'an, China

Abstract

We propose a learning-based navigation system for reaching visually indicated goals and demonstrate this system on a real mobile robot platform. Learning provides an appealing alternative to conventional methods for robotic navigation: instead of reasoning about environments in terms of geometry and maps, learning can enable a robot to discover navigational affordances, understand which types of obstacles are traversable (e.g., tall grass) and which are not (e.g., walls), and generalize over patterns in the environment. However, unlike with conventional planning algorithms, changing the goal of a learned policy during deployment is difficult. We propose a method for learning to navigate towards a goal image of the desired destination. By combining a learned policy with a topological graph constructed from previously observed data, our system can determine how to reach this visually indicated goal even in the presence of variable appearance and lighting. Three key insights (waypoint proposal, graph pruning, and negative mining) enable our method to learn to navigate in real-world environments using only offline data, a setting where prior methods struggle. We instantiate our method on a real outdoor ground robot and show that our system, which we call ViNG, outperforms previously proposed methods for goal-conditioned reinforcement learning, including other methods that combine reinforcement learning and search. We also study how ViNG generalizes to unseen environments and evaluate its ability to adapt to such environments with growing experience. Finally, we demonstrate ViNG on a number of real-world applications, such as contactless last-mile delivery and autonomous inspection.
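The graph-plus-policy recipe in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: ViNG's learned distance predictor operates on pairs of camera images, but here it is stubbed with Euclidean distance between 2D points, and the pruning threshold `max_edge` is an arbitrary illustrative choice. The structure, however, matches the described pipeline: connect stored observations whose predicted distance is small (graph pruning), plan a shortest path to the goal, and hand the first node on that path to the low-level controller as the next waypoint.

```python
import heapq
import math

def predicted_distance(a, b):
    # Stand-in for ViNG's learned distance network, which estimates
    # traversal effort between two *images*. Here: plain Euclidean
    # distance between 2D points, purely for illustration.
    return math.dist(a, b)

def build_graph(observations, max_edge=2.0):
    """Connect observation pairs whose predicted distance is below a
    threshold; dropping long, unreliable edges is the pruning step."""
    graph = {i: [] for i in range(len(observations))}
    for i, oi in enumerate(observations):
        for j, oj in enumerate(observations):
            if i != j:
                d = predicted_distance(oi, oj)
                if d <= max_edge:  # prune overconfident long edges
                    graph[i].append((j, d))
    return graph

def shortest_path(graph, start, goal):
    """Plain Dijkstra over the pruned topological graph."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None  # goal unreachable in the pruned graph
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy "observations" standing in for stored images along past trajectories.
obs = [(0, 0), (1.5, 0), (3.0, 0), (3.0, 1.5), (3.0, 3.0)]
graph = build_graph(obs)
path = shortest_path(graph, start=0, goal=4)
next_waypoint = path[1]  # subgoal handed to the low-level policy
```

In the real system, the edge over which the robot is currently traveling is re-planned continually, so the waypoint advances as new observations arrive.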


Video

Real-world Demonstrations with ViNG

Last-mile Mail Delivery

Contactless Pizza Delivery

Autonomous Inspection

Qualitative Results from Generalization Experiments

We evaluate ViNG in four new outdoor environments. For each, we collect a few dozen minutes of experience to adapt the distance function and relative pose predictor. Then, given a goal image (last column; checkerboard location in the aerial view) and an initial observation (third column), the robot attempts to navigate to the goal. Columns 4--7 show the robot successfully reaching the goal. Light blue lines trace the actions taken by ViNG.

Fast Adaptation to Novel Environments

After training ViNG in one environment, we deploy the system in a novel environment, shown above. By practicing reaching self-proposed goals and using that experience to finetune the controller, ViNG quickly gains competence at reaching distant goals in this new environment, using just 60 minutes of experience. Example rollouts towards a goal 35m away (marked by the checkerboard circle) demonstrate ViNG self-improving from interactions in the barracks environment.

Quantitative Results


Simulation Experiments

ViNG is substantially more successful at reaching distant goals than all offline baselines, while performing competitively with SoRB, a popular online baseline combining Q-learning and topological graphs. We emphasize that SoRB and PPO require online data collection, making them prohibitively expensive in the real world. Further, whereas ViNG requires 40 hours of offline data, SoRB requires 200 hours of online data and must recollect this data for every experiment.


Real-robot Experiments

While all non-random methods successfully reach nearby goals, only ViNG reaches goals over 40 meters away.

Ablations

We investigate design choices for the parametrization of the controller. Using waypoints as a mid-level action space is key to the performance of ViNG, an effect that is especially pronounced for distant goals. We also find that ViNG can be trained with either supervised learning or TD learning, with no significant difference in performance. Finally, we show that the two key ideas presented -- graph pruning and negative mining -- are indeed essential for the performance of ViNG in the real world.
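The negative mining idea referenced above can be sketched as a data-labeling routine for the distance function. This is an illustrative sketch, not the paper's exact procedure or hyperparameters: the ceiling label `dmax` and the negatives-per-trajectory ratio are assumptions. The key point is that pairs sampled from the same trajectory carry their true timestep separation, while pairs mined from different trajectories are labeled with the ceiling value, teaching the model that such pairs are far apart rather than leaving it uncalibrated on them.

```python
import random

def make_distance_pairs(trajectories, dmax=20, negatives_per_traj=5, rng=None):
    """Build (obs_a, obs_b, distance_label) tuples for distance learning.

    Positives: pairs from the same trajectory, labeled by how many
    timesteps apart they occur. Negatives: pairs mined from *different*
    trajectories, labeled with the ceiling value `dmax`.
    """
    rng = rng or random.Random(0)
    pairs = []
    # Positives: sample a future observation within dmax steps.
    for traj in trajectories:
        for t in range(len(traj)):
            h = rng.randrange(t, min(t + dmax, len(traj)))
            pairs.append((traj[t], traj[h], h - t))
    # Negatives: observations from two distinct trajectories.
    for _ in range(negatives_per_traj * len(trajectories)):
        ta, tb = rng.sample(trajectories, 2)
        pairs.append((rng.choice(ta), rng.choice(tb), dmax))
    return pairs

# Toy trajectories of integer "observations" standing in for images.
trajs = [[0, 1, 2, 3], [10, 11, 12]]
pairs = make_distance_pairs(trajs)
```

A regression or classification model fit on these tuples then supplies the edge weights for the topological graph.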

BibTeX

@inproceedings{shah2021ving,
  title={{ViNG: Learning Open-World Navigation with Visual Goals}},
  author={Dhruv Shah and Benjamin Eysenbach and Gregory Kahn and Nicholas Rhinehart and Sergey Levine},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021},
  url={https://arxiv.org/abs/2012.09812}
}