Reinforcement Learning for Transportation

November 4-7, 2018

Maui, Hawaii, USA

Sponsored by the IEEE Intelligent Transportation Systems Society

Confirmed Speakers

Prof. Mykel Kochenderfer

Stanford University

Prof. Dan Work

Vanderbilt University

Dr. Guni Sharon

The University of Texas at Austin

Statement of Purpose

Recent years have seen a steady stream of reinforcement learning (RL) highlights, with tremendous progress in the control of complex dynamical systems establishing RL as a highly promising framework for control. At the same time, RL has yet to push the boundaries of real-world domains and applications. Transportation is itself a rich, complex dynamical system, so we as a community have an opportunity to explore and push the theory, methodology, and practice of RL on practical transportation problems. In this workshop, we ask: How can RL help build toward safer, more reliable, smarter transportation systems? How can the structure inherent in transportation problems help overcome challenges in RL such as sample efficiency, scaling, heterogeneity, data limitations, and communication constraints? The goal of this workshop is to bring together researchers and practitioners from transportation, reinforcement learning, and control to address core challenges in intelligent transportation systems.

Call for Papers:

Prediction and modeling

    • Models and algorithms which support the interaction between humans and AI systems
    • Modeling complex human behaviors and preferences (e.g. inverse reinforcement learning)
    • Forecast of demand and mobility patterns in large scale urban transportation systems
    • Predictive modeling of risk and accidents through telematics, modeling, simulation


Control

    • Reliable, safe, and/or interpretable controllers in the presence of complex, multi-agent, heterogeneous traffic dynamics
    • Adaptive algorithms and controllers suitable across networks of varying characteristics
    • Simulator to real-world transfer learning
    • Control and coordination of traffic leveraging V2V and V2X infrastructures


Challenges

    • Data-efficient methods appropriate for the data limitations of real-world settings (e.g. sample-efficient reinforcement learning)
    • Multi-objective optimization (e.g. fuel efficiency, travel time, comfort, safety)
    • Standard benchmarks for evaluating the performance of reinforcement learning algorithms in intelligent transportation systems

Paper Submission:

  • Deadline: 02 May 2018, 05:00 Pacific Daylight Time
  • Rules of submission
    • All papers should be submitted without author names or affiliations
    • Papers are up to 6 pages excluding references
    • Extended abstracts are up to 4 pages excluding references
    • Please submit on both the ITS website and OpenReview:


Organizers:

  1. Prof. Alexandre Bayen, Director, Institute for Transportation Studies, UC Berkeley; Dept. of Electrical Engineering and Computer Science; Dept. of Civil and Environmental Engineering; Liao-Cho Professor of Engineering, UC Berkeley
  2. Cathy Wu, Dept. of Electrical Engineering and Computer Science, UC Berkeley
  3. Eugene Vinitsky, Dept. of Mechanical Engineering, UC Berkeley


If you have questions, the contact email is