Workshop on AI for Autonomous Driving

July 17, 2020 (noon - 10 pm UTC)

Workshop at ICML 2020 (https://icml.cc/virtual/2020/workshop/5733)

Abstract

According to WHO statistics, every year more than a million people die in traffic crashes worldwide. Self-driving cars and advanced safety features present one of today’s greatest challenges and opportunities for Artificial Intelligence (AI). Despite billions of dollars of investment and encouraging progress under certain operational constraints, there are no driverless cars on public roads today without human safety drivers, and the aforementioned fatalities remain a tragic reality. Autonomous Driving research spans a wide spectrum, from modular architectures -- composed of hardcoded or independently learned sub-systems -- to end-to-end deep networks with a single model from sensors to controls. In any such system, Machine Learning is a key component. However, there are formidable learning challenges due to safety constraints, the need for large-scale manual labeling, and the complex, high-dimensional structure of driving data, whether inputs (from cameras, HD maps, inertial measurement units, wheel encoders, LiDAR, radar, etc.) or predictions (e.g., world-state representations, behavior models, trajectory forecasts, plans, controls). The goal of this workshop is to explore the frontier of learning approaches for safe, robust, and efficient Autonomous Driving (AD) at scale. The workshop will span both theoretical frameworks and practical issues.


We seek to address the following questions:

  • How to make perception real-time, accurate, robust, and uncertainty-aware for safe AD?

  • How to reduce uncertainty about the future by making probabilistic predictions (e.g., trajectory forecasting, intent estimation)?

  • How to employ self-supervised learning, meta-learning, transfer learning (e.g., few-shot learning, sim2real), domain adaptation, and other techniques that reduce the need for manual labeling?

  • How to learn long-term driving strategies (driving policies) with deep reinforcement learning?

  • To what extent should we strive for modular systems vs end-to-end learned systems?

  • How can reinforcement and imitation learning be effectively used for AD?

  • How can we create accurate models of human driver behavior?

  • How can we determine the error probability of a learned system and guarantee its safety, i.e., when can we trust such systems?

  • How to achieve near-zero fatality rates? How can we model uncertainty propagation in deep networks?

  • Can machines learn to drive better simply by acquiring more data? If so, how much data is needed?

More broadly, we note that in the US there is only about one fatal crash for every 100 million miles driven and only about one serious crash for every million miles driven. We ask: can autonomous systems reach this level of safety without attaining nearly human-level intelligence? Can a machine deal with the enormous range of behaviors and conditions required for safe autonomy?

The workshop will offer a timely collection of information to benefit researchers and practitioners working in the broad research fields of AI, computer vision, machine learning, robotics, and autonomous driving. All the aforementioned issues are well covered by the Topics of Interest of ICML 2020.

Program highlights

  • 9 invited talks + Live Q&A

  • 2 Panel discussions (15:00 - 15:30 UTC & 20:40 - 21:10 UTC)

  • 14 accepted papers (papers + slides on our website & 3-minute videos on the ICML website)

  • 2 Live Q&A sessions (15:35 - 16:40 UTC & 21:10 - 22:00 UTC)

  • Best paper award (sponsored by NVIDIA)

Speakers

University of Toronto, Uber ATG

Argo AI, Georgia Tech

University of Oxford

UC Berkeley

Sponsors