GNM: A General Navigation Model to Drive Any Robot

Dhruv Shah*, Ajay Sridhar*, Arjun Bhorkar, Noriaki Hirose, Sergey Levine

UC Berkeley

International Conference on Robotics and Automation (ICRA) 2023

London, UK

Check out General Navigation Models for the latest updates and follow-up work on GNM!

Learning provides a powerful tool for vision-based navigation, but the capabilities of learning-based policies are constrained by limited training data. If we could combine data from all available sources, including multiple kinds of robots, we could train more powerful navigation models. In this paper, we study how a general goal-conditioned model for vision-based navigation can be trained on data obtained from many distinct but structurally similar robots, and enable broad generalization across environments and embodiments. We analyze the necessary design decisions for effective data sharing across robots, including the use of temporal context and standardized action spaces, and demonstrate that an omnipolicy trained from heterogeneous datasets outperforms policies trained on any single dataset. We curate 60 hours of navigation trajectories from 6 distinct robots, and deploy the trained GNM on a range of new robots, including an underactuated quadrotor. We find that training on diverse data leads to robustness against degradation in sensing and actuation. Using a pre-trained navigation model with broad generalization capabilities can bootstrap applications on novel robots going forward, and we hope that the GNM represents a step in that direction.
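The shared action space mentioned above can be illustrated with a small sketch. Below is a minimal, hypothetical example of normalizing robot-specific relative waypoints by each platform's top speed so that fast and slow robots produce comparable action labels; the function names, the choice of a (dx, dy, yaw) parameterization, and the scaling scheme are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def to_shared_action(waypoint_xy, yaw, max_speed, dt=1.0):
    # Scale metric displacements by the distance the robot can cover
    # in one timestep, so a 0.5 m step means "half speed" for every
    # platform regardless of its absolute velocity limits.
    # (Illustrative sketch, not the paper's exact normalization.)
    scale = max_speed * dt
    return np.array([waypoint_xy[0] / scale, waypoint_xy[1] / scale, yaw])

def from_shared_action(action, max_speed, dt=1.0):
    # Invert the normalization to recover a metric waypoint for a
    # specific robot at deployment time.
    scale = max_speed * dt
    xy = np.array([action[0] * scale, action[1] * scale])
    return xy, action[2]

# A slow indoor robot and a fast outdoor robot commanding "full speed
# ahead" map to the same normalized action.
slow = to_shared_action(np.array([0.5, 0.0]), 0.0, max_speed=0.5)
fast = to_shared_action(np.array([2.0, 0.0]), 0.0, max_speed=2.0)
```

Because the policy only ever sees normalized actions, a single set of weights can be decoded into platform-appropriate commands on a new robot simply by supplying that robot's speed limit.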

Method Overview








GNM Deployed in the Real-World

Note that the GNM has not seen training data from any aerial robot (top left) or the Vizbot (bottom left), yet manages to control them successfully!

For more experiment videos, click here!

Robustness to Perturbations

We find that the GNM policy is robust to perturbations in sensor placement that lead to different viewpoints.


As shown on the right, we deploy the same model on robots with different camera heights and find that it drives the robot to the goal successfully in all cases.


Robustness to Physical Degradation

We also test GNM's robustness to physical degradation, which is often encountered over the life of the robot's deployment.


In this situation, a piece of bark gets lodged in the tire, affecting its ability to track a simple straight-line action and leading to collision. GNM is robust to such failure modes and can compensate for this, successfully driving the robot to the goal.


BibTeX

@inproceedings{shah2022gnm,

    author    = {Dhruv Shah and Ajay Sridhar and Arjun Bhorkar and Noriaki Hirose and Sergey Levine}, 

    title     = {{GNM: A General Navigation Model to Drive Any Robot}}, 

    booktitle = {International Conference on Robotics and Automation (ICRA)}, 

    year      = {2023},

    url       = {https://arxiv.org/abs/2210.03370} 

}