Learning Exploration Policies for Navigation

Abstract

Numerous past works have tackled the problem of task-driven navigation, but how to effectively explore a new environment to enable a variety of downstream tasks has received much less attention. In this work, we study how agents can autonomously explore realistic and complex 3D environments without the context of task rewards. We propose a learning-based approach and investigate different policy architectures, reward functions, and training paradigms. We find that policies with spatial memory, bootstrapped with imitation learning and then fine-tuned with coverage rewards derived purely from on-board sensors, are effective at exploring novel environments. We show that our learned exploration policies explore better than both classical approaches based on geometry alone and generic learning-based exploration techniques. Finally, we also show how such task-agnostic exploration can be used for downstream tasks.
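To make the coverage-reward idea concrete, here is a minimal sketch (not the paper's implementation) of one common formulation: the agent maintains a grid of cells it has already observed, and each step is rewarded by the number of cells seen for the first time. The grid size and cell indices below are illustrative assumptions.

```python
import numpy as np

def coverage_reward(visited, seen_cells):
    """Return the count of newly observed cells, updating `visited` in place.

    visited: boolean H x W grid marking cells already observed.
    seen_cells: (row, col) indices observed at the current step.
    """
    reward = 0
    for r, c in seen_cells:
        if not visited[r, c]:
            visited[r, c] = True  # mark cell as covered
            reward += 1           # reward only first-time observations
    return reward

# Toy usage: an 8x8 map, two steps of observations.
visited = np.zeros((8, 8), dtype=bool)
r1 = coverage_reward(visited, [(0, 0), (0, 1)])  # both cells are new
r2 = coverage_reward(visited, [(0, 1), (0, 2)])  # one repeat, one new
```

Because the reward depends only on what the agent's own sensors have revealed, it requires no task labels or privileged map of the environment.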

Demo

Without Pose Estimation Error

rgb_map.mp4

With Noisy Pose Estimation

rgb_map_noise.mp4

Note: In all experiments (both training and testing), depth sensor readings are filtered with maximum_depth = 3 m.
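One plausible reading of this filtering step, sketched below under the assumption that out-of-range readings are marked invalid (set to 0) rather than clipped, is:

```python
import numpy as np

def filter_depth(depth, max_depth=3.0):
    """Zero out depth readings beyond max_depth (in meters).

    Assumes 0 denotes an invalid/missing reading; the 3 m cutoff
    mirrors the value stated in the note above.
    """
    out = np.asarray(depth, dtype=float).copy()
    out[out > max_depth] = 0.0  # treat far readings as missing
    return out

# Toy usage: readings at 1.5 m, 2.9 m, and 4.2 m.
filtered = filter_depth([1.5, 2.9, 4.2])
```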

Bibtex

@inproceedings{chen2018learning,
author = "Chen, Tao and Gupta, Saurabh and Gupta, Abhinav",
title = "Learning Exploration Policies for Navigation",
booktitle = "International Conference on Learning Representations",
year = "2019",
url = "https://openreview.net/forum?id=SyMWn05F7"
}