Learning to Follow Directions in Street View

Karl Moritz Hermann, Mateusz Malinowski, Piotr Mirowski, Andras Banki-Horvath, Keith Anderson, Raia Hadsell

London, United Kingdom

Example of task and trajectory (oracle agent)


AAAI 2020 paper

Navigating and understanding the real world remains a key challenge in machine learning and inspires a great variety of research in areas such as language grounding, planning, navigation and computer vision. We propose an instruction-following task that requires all of the above, and which combines the practicality of simulated environments with the challenges of ambiguous, noisy real-world data. StreetNav is built on top of Google Street View and provides visually accurate environments representing real places. Agents are given driving instructions which they must learn to interpret in order to navigate successfully in this environment. Since humans equipped with driving instructions can readily navigate in previously unseen cities, we set a high bar and test our trained agents for similar cognitive capabilities. Although deep reinforcement learning (RL) methods are frequently evaluated only on data that closely follow the training distribution, our dataset extends to multiple cities and has a clean train/test separation, allowing for thorough testing of generalisation ability. This paper presents the StreetNav environment and tasks, models that establish strong baselines, and extensive analysis of the task and the trained agents.

As part of our publication, we are releasing the environment and game code for the StreetNav / StreetLearn environment, along with the StreetLearn dataset augmented with navigation instructions. The environment is based on Google Street View images of New York and Pittsburgh. The StreetLearn dataset is a limited set of Google Street View images approved for use with the StreetLearn project and academic research.