2019 ICML Workshop on AI for Autonomous Driving

June 15, 2019

Conference room 101


A diverse set of methods has been devised to develop autonomous driving platforms. They range from modular systems, in which the problem is manually decomposed, components are optimized independently, and large numbers of rules are programmed by hand, to end-to-end deep-learning frameworks. Today's systems rely on a subset of the following inputs: camera images, HD maps, inertial measurement units, wheel encoders, and active 3D sensors (LiDAR, radar). There is general agreement that, whichever of these approaches is taken, much of the future self-driving software stack will continue to incorporate some form of machine learning.

Self-driving cars present one of today's greatest challenges and opportunities for Artificial Intelligence (AI). Despite substantial investments, existing methods for building autonomous vehicles have not yet fully succeeded: there are no driverless cars on public roads today without human safety drivers. Nevertheless, a few groups have started working on extending the idea of learned tasks to larger functions of autonomous driving. Initial results on learned road following are very promising.

The goal of this workshop is to explore ways to create a framework that is capable of learning autonomous driving capabilities beyond road following, towards fully driverless cars. The workshop will consider the current state of learning applied to autonomous vehicles and will explore how learning may be used in future systems. The workshop will span both theoretical frameworks and practical issues especially in the area of deep learning.

The workshop will include invited speakers, panels, and poster presentations of accepted papers. We invite papers in the form of short abstracts and full papers to address the core challenges mentioned above and below. We encourage researchers and practitioners on self-driving cars, transportation systems and ride-sharing platforms to participate.

We seek to address the following questions:

Morning Session:

  • How can we perform real-time perception and prediction of traffic scenes?
  • How can we make perception accurate and robust enough for safe autonomous driving?
  • How can we reliably track cars, pedestrians, and cyclists? How can we make pedestrian detection and pedestrian intent detection accurate and efficient?
  • How can we employ unsupervised learning, few-shot learning, transfer learning leveraging simulators, and other techniques that reduce human labeling effort?
  • How can we learn long-term driving strategies (driving policies) with deep reinforcement learning?

Afternoon Session:

  • To what extent should we strive for modular systems versus end-to-end learned systems?
  • How can reinforcement and imitation learning be used to the best advantage in autonomous driving?
  • How can we create accurate models of human driver behavior?
  • How can we determine the error probability of a learned system and guarantee its safety, i.e., when can we trust such systems? How can we achieve near-zero fatality rates? How can we understand uncertainty propagation in deep networks?
  • Can machines learn to drive as well as humans simply by acquiring more data? If so, how much data is required?

More broadly, we note that in the US there is only about one fatal crash for every 100 million miles driven, and only about one serious crash for every million miles driven. We ask: can autonomous systems reach this level of safety without attaining nearly human-level intelligence ("General AI")? Can a machine deal with the enormous range of behaviors and conditions required for safe autonomy?

Invited Speakers:

German Ros

Intel Labs

Dorsa Sadigh

Stanford

Fisher Yu

UC Berkeley

Sponsors: