Significant progress has been made in reinforcement learning, enabling agents to accomplish complex tasks such as playing Atari games, robotic manipulation, simulated locomotion, and Go. These successes have stemmed from the core reinforcement learning formulation of learning a single policy or value function from scratch. However, reinforcement learning has proven challenging to scale to many practical real-world problems due to difficulties in learning efficiency and objective specification, among others. Recently, there has been growing interest and research in leveraging structure and information across multiple reinforcement learning tasks to learn complex behaviors more efficiently and effectively.
Multi-task and lifelong reinforcement learning has the potential to change the paradigm of traditional reinforcement learning, providing more practical and diverse sources of supervision while helping to overcome many of its challenges, such as exploration, sample efficiency, and credit assignment. However, the field of multi-task and lifelong reinforcement learning is still young, and many developments are needed in problem formulation, algorithmic and theoretical advances, and better benchmarking and evaluation.
The focus of this workshop will be on both the algorithmic and theoretical foundations of multi-task and lifelong reinforcement learning and on the practical challenges of building multi-tasking agents and lifelong learning benchmarks. Our goal is to bring together researchers who study different problem domains (such as games, robotics, and language), different optimization approaches (deep learning, evolutionary algorithms, model-based control, etc.), and different formalisms to discuss the frontiers, open problems, and meaningful next steps in multi-task and lifelong reinforcement learning.
Submission deadline: May 3, 2019 (AOE) [extended from April 28, 2019]
Notifications: May 20, 2019 (AOE) [extended from May 10, 2019]
**Late-breaking submission deadline**: Friday, May 31, 2019 (AOE)
Camera Ready: June 10, 2019 (AOE)
Workshop: June 15, 2019
UPDATE: We welcome late-breaking submissions, following the formatting guidelines below. Accepted late-breaking works will be presented as posters. These submissions should be made by emailing the submission PDF to mtlrl@googlegroups.com by Friday, May 31, 2019 (AOE).
The submitted work should be an extended abstract of 4–8 pages (including references). The submission should be in PDF format and should follow the ICML 2019 style guidelines (found here). The review process is double-blind, and work should be submitted by May 3, 2019 (Anywhere on Earth) at the latest. Submissions should *not* have been previously published or have appeared in the ICML main conference. Work currently under submission to another conference is welcome. There will be no formal publication of workshop proceedings; however, accepted papers will be made available on the workshop website.
Full (non-late-breaking) submissions must be made through EasyChair: https://easychair.org/conferences/?conf=mtlrl2019
We welcome submissions on a broad range of topics in multi-task and lifelong reinforcement learning.
Schedule (subject to change)
08:45 – 09:00 Opening remarks
09:00 – 09:25 Invited talk #1: Sergey Levine - Unsupervised Reinforcement Learning and Meta-Learning
09:25 – 09:50 Spotlights (all ~25 papers that don’t have a contributed talk)
09:50 – 10:15 Invited talk #2: Peter Stone - Learning Curricula for Transfer Learning in RL
10:15 – 10:30 Contributed talks (7 min each + 1 min for questions & transition)
10:30 – 11:00 Poster session and coffee break
-----------------
11:00 – 11:25 Invited talk #3: Jacob Andreas - Linguistic scaffolds for policy learning
11:25 – 11:50 Invited talk #4: Karol Hausman - Skill Representation and Supervision in Multi-Task Reinforcement Learning
11:50 – 12:20 Contributed talks (7 min each + 1 min for questions & transition)
12:20 – 02:00 Poster session and lunch break
-----------------
02:00 – 02:25 Invited talk #5: Martha White - Learning Representations for Continual Learning
02:25 – 02:50 Invited talk #6: Natalia Diaz-Rodriguez - Continual Learning and Robotics: an overview
02:50 – 03:30 Afternoon coffee break and poster session
-----------------
03:30 – 03:55 Invited talk #7: Jeff Clune - Towards Solving Catastrophic Forgetting with Neuromodulation & Learning Curricula by Generating Environments
03:55 – 04:15 Contributed talks (7 min each + 1 min for questions & transition)
04:15 – 04:40 Invited talk #8: Nicolas Heess - Talk TBA
04:40 – 05:05 Invited talk #9: Benjamin Rosman - Exploiting Structure For Accelerating Reinforcement Learning
05:05 – 06:00 Panel Discussion
We thank our sponsors for making this workshop possible.