ICML / IJCAI / AAMAS 2018 Workshop on Planning and Learning (PAL-18)


Planning and learning are both core areas of Artificial Intelligence. The reinforcement learning community has mostly relied on approximate dynamic programming and Monte-Carlo tree search as its workhorses for planning, while the field of planning has developed a diverse set of representational formalisms and scalable algorithms that are currently underexplored in learning approaches.  Further, the planning community could benefit from the tools and algorithms developed by the machine learning community, for instance to automate the generation of planning problem descriptions.

The purpose of this workshop is to encourage discussion and collaboration between the planning and learning communities. We also expect researchers in agents and general AI, particularly those focused on intelligent decision making, to be interested in the intersection of planning and learning. As such, the joint workshop program is an excellent opportunity to gather a large and diverse group of interested researchers.

Organizing Committee

Scott Sanner, University of Toronto
Matthijs Spaan, TU Delft
Timothy Mann, Google DeepMind
Aviv Tamar, UC Berkeley

Invited Speakers

Emma Brunskill, Stanford University
Craig Boutilier, Google Mountain View
Thore Graepel, Google DeepMind
Sergey Levine, UC Berkeley

Schedule (Sunday, July 15, Room C7)

8:30-10:00: Morning Session 1

Invited Talk (8:30-9:15): Emma Brunskill, Stanford University, "Planning to Learn"

Contributed Paper Talks (9:15-10:00) "Safety and Robustness":

- "Safe Reduced Models for Probabilistic Planning", Sandhya Saisubramanian and Shlomo Zilberstein.

- "An Empirical Evaluation of Safe Policy Improvement in Factored Environments", Thiago D. Simão and Matthijs T. J. Spaan.

- "Policy-Conditioned Uncertainty Sets for Robust Markov Decision Processes", Andrea Tirinzoni, Xiangli Chen, Marek Petrik and Brian Ziebart.

10:30-12:45: Morning Session 2

Invited Talk (10:30-11:15): Craig Boutilier, Google Mountain View, "RL and MDPs in Recommender Systems: Modeling and Computational Challenges"

Contributed Paper Talks (11:15-12:15) "Learning and Planning I":

- "Planning to Give Information in Partially Observed Domains with a Learned Weighted Entropy Model", Rohan Chitnis, Leslie Kaelbling and Tomás Lozano-Pérez.

- "Learning to Plan with Portable Symbols", Steven James, Benjamin Rosman and George Konidaris.

- "Learning Plannable Representations with Causal InfoGAN", Thanard Kurutach, Aviv Tamar, Ge Yang, Stuart Russell and Pieter Abbeel.

- "Improving Width-Based Planning with Compact Policies", Miquel Junyent, Anders Jonsson and Vicenç Gómez.

1st Poster Session (12:15-12:45): All Papers (posters: 61 cm x 91 cm in portrait orientation)

12:45-14:00: Lunch

14:00-15:30: Afternoon Session 1

Invited Talk (14:00-14:45): Sergey Levine, UC Berkeley, "Off-Policy Learning with Model-Based and Model-Free RL"

Contributed Paper Talks (14:45-15:30) "Learning and Planning II":

- "Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models", Kurtland Chua, Roberto Calandra, Rowan McAllister and Sergey Levine.

- "Recognizing Plans by Learning Embeddings from Observed Action Distributions", Yantian Zha, Yikang Li, Sriram Gopalakrishnan, Baoxin Li and Subbarao Kambhampati.

- "Extracting Action Sequences from Texts Based on Deep Reinforcement Learning", Wenfeng Feng, Hankz Hankui Zhuo and Subbarao Kambhampati.

16:00-18:00: Afternoon Session 2

Invited Talk (16:00-16:45): Thore Graepel, Google DeepMind, "The Role of Multi-Agent Learning in Artificial Intelligence Research"

Contributed Paper Talks (16:45-17:30) "MCTS methods, Constrained (PO)MDPs":

- "A0C: Alpha Zero in Continuous Action Space", Thomas Moerland, Joost Broekens, Aske Plaat and Catholijn Jonker.

- "Monte-Carlo Tree Search for Constrained MDPs", Jongmin Lee, Geon-Hyeong Kim, Pascal Poupart and Kee-Eung Kim.

- "Column Generation Algorithms for Constrained POMDPs", Erwin Walraven and Matthijs T. J. Spaan.

2nd Poster Session (17:30-18:00): All Papers (posters: 61 cm x 91 cm in portrait orientation)

Accepted Papers

Call for Submissions (closed)

The Planning and Learning workshop solicits work at the intersection of the fields of machine learning and planning. We also solicit work solely in one area that can influence advances in the other, provided the connections are clearly articulated in the submission. Submissions are invited on topics including, but not limited to:
  • Multi-agent planning and learning
  • Robust planning in uncertain (learned) models
  • Adaptive Monte Carlo planning
  • Learning search heuristics for planner guidance
  • Reinforcement learning (model-based, Bayesian, deep, etc.)
  • Model representation and learning for planning
  • Theoretical aspects of planning and learning
  • Learning and planning competition(s)
  • Applications of planning and learning

Important Dates

  • Submission deadline: May 23, 2018 (11:59pm Hawaii Time)
  • Notification date: May 31, 2018
  • Camera-ready deadline: Wednesday, June 13, 2018
  • Workshop date: Sunday, July 15, 2018 (full day)

Submission Procedure (closed)

We solicit workshop paper submissions relevant to the above call in the following formats:
  • Long papers -- up to 8 pages + unlimited references / appendices 
  • Short papers -- up to 4 pages + unlimited references / appendices
  • Extended abstracts -- up to 2 pages + unlimited references / appendices 
We will accept papers in any of the IJCAI, ICML, AAMAS, or NIPS formats.  Submissions are not anonymous and should include author information.

Some accepted papers will be selected for contributed talks. All other papers will be given a slot in the poster presentation session. Extended abstracts are intended as brief summaries of already published papers, challenge or position papers, or preliminary work.

Paper submissions and updates should be made through EasyChair: https://easychair.org/conferences/?conf=pal18