Ecological Theory of RL

Submission Instructions

Tuesday, December 14th, 2021 @ NeurIPS 2021 (Virtual)

08:00 - 17:30 (ET)


We invite either extended abstracts (4 pages) or full papers (6-9 pages). Papers should be formatted using the NeurIPS 2021 LaTeX template (available here). References and supplementary materials do not count towards the page total.


The reviewing process will be double-blind; authors are responsible for anonymizing their submissions. See the NeurIPS 2021 guidelines (here) for more information.


SUBMISSION PLATFORM

Please submit your paper and supplementary material through CMT3 at https://cmt3.research.microsoft.com/EcoRL2021 before the submission deadline listed below (23:59 AOE).

IMPORTANT DATES

  • Submissions Open: Aug 1, 2021 00:00 AOE

  • Submission Deadline: October 8th, 2021 23:59 AOE

  • Author Notification: Oct 26, 2021

  • Camera Ready: Dec 1, 2021

WORKSHOP AREAS

This workshop builds connections between different areas of RL, centered on understanding algorithms and the contexts in which they operate. We are interested in questions such as (but not limited to):

  1. How to gauge the complexity of an RL problem.

  2. Which classes of algorithms can tackle which classes of problems.

  3. How to develop practically applicable guidelines for formulating RL tasks that are tractable to solve.

We expect submissions that address these and other related questions through an ecological and data-centric view, pushing forward the limits of our comprehension of the RL problem. In particular, we encourage submissions that investigate the following topics:

Properties and taxonomies of MDPs, tasks, or environments, and their connection to:

  • Curriculum, continual, and multi-task learning.

  • Novelty search, diversity algorithms, and open-endedness.

  • Representation learning.

  • MDP homomorphisms, bisimulation, inductive biases, and equivalences.

  • PAC analysis of MDPs.

  • Dynamical systems and control theory.

  • Information-theoretic perspectives on MDPs.

  • Reinforcement Learning benchmarks and their meta-analyses.

  • Real-world applications of RL (robotics, recommendation, etc.).

Properties of agents' experiences and their connection to:

  • Offline Reinforcement Learning.

  • Exploration.

  • Curiosity and intrinsic motivation.

  • Skill discovery and hierarchical reinforcement learning.

  • Unsupervised objectives for reinforcement learning.