Exploration in RL

Workshop @ ICML 2019 on June 14-15 (Time TBD)

Contact: erl-leads@google.com

What is ERL?

  1. ERL is an ICML 2019 workshop on exploration in reinforcement learning. ERL is designed to broadly appeal to diverse groups within the machine learning community. We encourage researchers from related fields such as unsupervised learning, causal inference, generative models, and Bayesian modeling to attend and explore connections between these fields and exploration in RL.
  2. Erl is a small town in North-Western Austria, near the border with Germany. To the best of our knowledge, the town of Erl is in no way affiliated with this workshop.

Call for Papers

Exploration is a key component of reinforcement learning (RL). While RL has begun to solve relatively simple tasks, current algorithms cannot yet solve complex ones. Existing algorithms often dither endlessly, failing to meaningfully explore their environments in search of high-reward states. If we hope to have agents autonomously learn increasingly complex tasks, they must be equipped with mechanisms for efficient exploration.

The goal of this workshop is to present and discuss exploration in RL, including deep RL, evolutionary algorithms, real-world applications, and developmental robotics. Invited speakers will share their perspectives on efficient exploration, and researchers will share recent work in spotlight presentations and poster sessions. These perspectives include (but are not limited to):

  • Bayesian approaches to exploration (e.g. Thompson sampling)
  • Exploration in real-world RL problems (e.g. education, healthcare, robotics)
  • Quantitative evaluation of exploration
  • Safety and risk awareness in exploration
  • Curiosity, intrinsic motivation, unsupervised exploration
  • Hierarchical exploration (e.g. with options, skills or goals)
  • Novelty search, quality diversity algorithms, and open-endedness
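As a concrete illustration of the first topic above, here is a minimal sketch of Thompson sampling on a Bernoulli multi-armed bandit. The function name, parameters, and arm probabilities are illustrative choices, not material from the workshop: each arm keeps a Beta posterior over its success rate, and the agent explores by sampling from those posteriors rather than acting greedily on point estimates.

```python
import random

def thompson_sampling(true_probs, n_steps=2000, seed=0):
    """Bernoulli bandit with Thompson sampling via Beta posteriors (illustrative)."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    # Beta(1, 1) uniform prior per arm: alpha counts successes + 1, beta counts failures + 1.
    alpha = [1] * n_arms
    beta = [1] * n_arms
    total_reward = 0
    for _ in range(n_steps):
        # Sample a plausible success rate for each arm from its posterior...
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        # ...and pull the arm whose sampled rate is highest (exploration via posterior randomness).
        arm = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return alpha, beta, total_reward
```

Running this on arms with success rates [0.2, 0.5, 0.8], the posterior for the best arm concentrates and that arm receives the bulk of the pulls, while worse arms are still tried occasionally early on.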

Submissions

We invite all researchers working on problems related to exploration to submit a 4 page paper (not including references or appendix) in PDF format using the following template and style guidelines. The template points to an Overleaf project containing an updated style file whose footnote indicates the paper was published at our workshop -- please use this style sheet when submitting the camera-ready version. You may download the whole template or just the icml2019.sty file to use the updated footnote.

We allow papers that are currently under review to be submitted to the workshop, as we are not publishing workshop proceedings. This means that, for instance, papers that were submitted to NeurIPS can be submitted to this workshop.

Please upload your anonymized submission to CMT. The review process will be double-blind. Accepted papers will be presented as posters or as spotlight talks. We highly encourage authors to release open-source implementations of their ideas, and we will provide links to those implementations on our website.

Important Dates:

  • Paper submission opens: April, 12:00PM PST
  • Deadline for paper submission: April 26, 12:00PM PST
  • Review decisions released: May 23, 12:00PM PST
  • Deadline for camera ready: June 3, 12:00PM PST
  • Workshop: June 15, 2019

Frequently Asked Questions

  • Can I submit a paper that is currently under review at another venue (e.g., NeurIPS)? Yes! As a reminder, make sure the submission is anonymized and uses the ICML template.
  • Can I submit a paper that has already been accepted at another venue? No.

Audience

The exploration workshop is designed to broadly appeal to diverse groups within the machine learning community. The sheer volume of work related to exploration in RL published in the last few years (see Recent Papers) demonstrates the large interest in this area. Most immediately, the workshop targets researchers studying algorithms for efficient exploration and the impact of those algorithms on real-world applications. This includes, but is not limited to, research on deep RL, developmental robotics, and evolutionary algorithms. More broadly, we encourage researchers from related fields such as unsupervised learning, causal inference, generative models, and Bayesian modeling to attend and explore connections between these fields and exploration in RL.

Archive

ERL 2019 is the second iteration of this workshop, following the inaugural Exploration in RL workshop at ICML 2018. We will upload all videos and slides from ERL 2019 and post them on this website after the workshop! The archive of videos and slides from previous years can be found below:

The following YouTube playlist has all the talks from the workshop:

https://www.youtube.com/playlist?list=PLbSAfmOMweH3YkhlH0d5KaRvFTyhcr30b

Slides for all contributed talks are available here:

https://docs.google.com/presentation/d/1zkqtsM-GywKN9kzX4r0j-C1SUF5I0N0mgsxpfvJyl7s