Conditional Affordance Learning

Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, which build an extensive model of the environment, and imitation learning approaches, which map images directly to control outputs. A recently proposed third paradigm, direct perception, aims to combine the advantages of both by using a neural network to learn appropriate low-dimensional intermediate representations. However, existing direct perception approaches are restricted to simple highway situations and lack the ability to navigate intersections, stop at traffic lights, or respect speed limits. In this work, we propose a direct perception approach that maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs. Compared to state-of-the-art reinforcement and conditional imitation learning approaches, we achieve an improvement of up to 68% in goal-directed navigation on the challenging CARLA simulation benchmark. In addition, our approach is the first to handle traffic lights and speed signs using image-level labels only, as well as to perform smooth car-following, resulting in a significant reduction of traffic accidents in simulation.
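To make the notion of a low-dimensional intermediate representation concrete, the sketch below shows one possible affordance interface and a toy rule-based longitudinal controller operating on it. The affordance names, units, and thresholds are illustrative assumptions for this sketch, not the exact set or control law used by CAL.

```python
from dataclasses import dataclass

# Hypothetical affordance set; the quantities, names, and units are assumptions
# chosen to illustrate a low-dimensional intermediate representation.
@dataclass
class Affordances:
    red_light: bool          # traffic light ahead is red
    hazard_stop: bool        # obstacle requires an emergency stop
    speed_limit: float       # current speed limit in km/h
    vehicle_distance: float  # distance to the lead vehicle in m
    relative_angle: float    # heading error w.r.t. the lane in rad
    center_distance: float   # lateral offset from the lane center in m


def longitudinal_target_speed(a: Affordances, cruise_speed: float = 30.0) -> float:
    """Toy rule-based longitudinal controller operating on affordances only."""
    if a.hazard_stop or a.red_light:
        return 0.0
    target = min(cruise_speed, a.speed_limit)
    # Simple car-following: slow down proportionally as the gap closes.
    if a.vehicle_distance < 15.0:
        target = min(target, 2.0 * a.vehicle_distance)
    return target


if __name__ == "__main__":
    a = Affordances(False, False, 30.0, 40.0, 0.02, -0.1)
    print(longitudinal_target_speed(a))  # -> 30.0
```

The point of such an interface is that the controller never sees raw pixels: all perception is compressed into a handful of interpretable quantities that a conventional controller can act on.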


System Overview

The CAL agent (top) receives the current camera image and a directional command ("straight", "left", "right") from CARLA [29]. The feature extractor converts the image into a feature map. The agent stores the last N feature maps in memory, where N is the length of the input sequence required by the perception stack. This sequence of feature maps, together with the directional commands from the planner, is used by the task blocks to predict affordances. Different tasks use different temporal receptive fields and temporal dilation factors. The control commands computed by the controller are sent back to CARLA, which updates the environment and provides the next observation and directional command.
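The following is a minimal sketch of such a perception stack in PyTorch, assuming a generic convolutional feature extractor and GRU-based task blocks; the layer sizes, memory length N, receptive fields, and dilation factors are illustrative placeholders rather than the paper's configuration.

```python
import collections
import torch
import torch.nn as nn

# Minimal sketch of a CAL-style perception stack. All sizes below are assumed
# for illustration and do not reproduce the paper's architecture.
N = 14          # length of the feature-map memory (assumed)
FEAT_DIM = 128  # flattened feature dimension per frame (assumed)

feature_extractor = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(2), nn.Flatten(), nn.Linear(32 * 4, FEAT_DIM),
)

class TaskBlock(nn.Module):
    """Predicts one affordance from a temporally sub-sampled feature sequence."""
    def __init__(self, receptive_field: int, dilation: int, out_dim: int):
        super().__init__()
        self.receptive_field, self.dilation = receptive_field, dilation
        # Directional command ("straight"/"left"/"right") enters as a one-hot vector.
        self.rnn = nn.GRU(FEAT_DIM + 3, 64, batch_first=True)
        self.head = nn.Linear(64, out_dim)

    def forward(self, memory: torch.Tensor, command: torch.Tensor) -> torch.Tensor:
        # Keep every `dilation`-th of the last `receptive_field` frames.
        seq = memory[:, -self.receptive_field::self.dilation]
        cmd = command.unsqueeze(1).expand(-1, seq.size(1), -1)
        out, _ = self.rnn(torch.cat([seq, cmd], dim=-1))
        return self.head(out[:, -1])

memory = collections.deque(maxlen=N)             # rolling feature-map memory
red_light_block = TaskBlock(receptive_field=8, dilation=2, out_dim=2)
angle_block = TaskBlock(receptive_field=4, dilation=1, out_dim=1)

image = torch.rand(1, 3, 88, 200)                # dummy camera frame
command = torch.tensor([[1.0, 0.0, 0.0]])        # one-hot: "straight"
memory.append(feature_extractor(image))
while len(memory) < N:                           # pad until the memory is full
    memory.append(memory[-1])
mem = torch.stack(list(memory), dim=1)           # (batch, N, FEAT_DIM)
print(red_light_block(mem, command).shape, angle_block(mem, command).shape)
```

Sub-sampling the shared memory with a per-task receptive field and dilation lets slowly varying affordances look further back in time without lengthening the sequence each task block has to process, while fast-changing affordances can focus on the most recent frames.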


Video


Benchmark Results