Exploration in RL

Workshop @ ICML 2018 on July 15

Contact: erl-leads@google.com

What is ERL?

  1. ERL is an ICML 2018 workshop on exploration in reinforcement learning, designed to appeal broadly to diverse groups within the machine learning community. We encourage researchers from related fields such as unsupervised learning, causal inference, generative models, and Bayesian modeling to attend and explore connections between these fields and exploration in RL.
  2. Erl is a small town in the Austrian state of Tyrol, near the border with Germany. To the best of our knowledge, the town of Erl is in no way affiliated with this workshop.


Videos and Slides from ERL 2018

The following YouTube playlist has all the talks from the workshop:

https://www.youtube.com/playlist?list=PLbSAfmOMweH3YkhlH0d5KaRvFTyhcr30b

Slides for all contributed talks are available here:

https://docs.google.com/presentation/d/1zkqtsM-GywKN9kzX4r0j-C1SUF5I0N0mgsxpfvJyl7s

Call for Papers

Exploration is a key component of reinforcement learning (RL). As RL scales up to increasingly complex tasks, efficient exploration becomes ever more important. Not only is exploration itself difficult, it is difficult even to determine whether an agent is doing “good, intelligent exploration.”

The goal of this workshop is to present and discuss exploration in RL and related fields. Invited speakers will share their perspectives on what it means to do “good, intelligent exploration” and researchers will share recent work in spotlight presentations and poster sessions. These perspectives on exploration include, but are not limited to:

  • Exploration in RL/bandits theory
  • Exploration in RL/bandits applications (e.g. education, healthcare, robotics)
  • Quantitative evaluation of exploration
  • Safety and risk awareness in exploration
  • Bayesian perspectives on exploration (e.g. exploration as information gain)
  • Curiosity, intrinsic motivation, intuitive physics, and cognitive neuroscience
  • Exploration as experimental design
  • Connections between causality and exploration
  • Meta-learning for learning to explore
  • Exploration as unsupervised/semi-supervised learning
  • Constrained exploration (e.g. robots with physical constraints)
  • Hierarchical exploration (e.g. with options)

Submissions

We invite all researchers working on problems related to exploration to submit a 4-page paper (not including references or appendix) in PDF format, using the ICML 2018 template and style guidelines. The template links to an Overleaf project whose style file includes an updated footnote indicating that the paper was published at our workshop; please use this style file when preparing the camera-ready version. You may download the whole template or just the icml2018.sty file to pick up the updated footnote.

Because we are not publishing workshop proceedings, papers that are currently under review elsewhere may be submitted to the workshop. For instance, papers that were submitted to NIPS can also be submitted here.

Please upload your anonymized submission to EasyChair. The review process will be double-blind. Accepted papers will be presented as posters or as spotlight talks. We highly encourage authors to release open-source implementations of their ideas, and we will provide links to those implementations on our website.

Important Dates:

  • Paper submission opens: May 1, 12:00 PM PDT
  • Deadline for paper submission: May 25, 12:00 PM PDT
  • Review decisions released: June 24, 12:00 PM PDT
  • Deadline for camera-ready: July 2, 12:00 PM PDT
  • Workshop: July 15

Frequently Asked Questions

  • Can I submit a paper that is currently under review at another venue (e.g., NIPS)? Yes! As a reminder, make sure the submission is anonymized and uses the ICML template.
  • Can I submit a paper that has already been accepted at another venue? No.

Audience

The exploration workshop is designed to appeal broadly to diverse groups within the machine learning community, including attendees of ICML, IJCAI-ECAI, and AAMAS. The sheer volume of work on exploration in RL published in the last few years (see Recent Papers on Exploration in RL) demonstrates the strong interest in this area. Most immediately, the workshop targets researchers studying both the theory and the applications of reinforcement learning. More broadly, we encourage researchers from related fields such as unsupervised learning, causal inference, generative models, and Bayesian modeling to attend and explore connections between these fields and exploration in RL.

Sponsors

We gratefully acknowledge Google AI and Google Cloud for generously providing travel grants and funding for the best paper awards.