Call for Papers

Workshop on Reinforcement Learning under Partial Observability, NeurIPS 2018

Saturday, December 8, 2018

Palais des Congrès de Montréal, Montréal, Canada


Reinforcement learning (RL) has succeeded in many challenging tasks, such as Atari games, Go, and chess, and even in high-dimensional continuous domains such as robotics. The most impressive successes are in tasks where the agent observes the environment fully. However, in real-world problems the agent can usually rely only on partial observations. In real-time games the agent makes only local observations; in robotics the agent has to cope with noisy sensors, occlusions, and unknown dynamics. Even more fundamentally, any agent without a full a priori world model, or without full access to the system state, has to make decisions based on partial knowledge of the environment and its dynamics.

In this workshop, we ask, among others, the following questions:

  • For decision-making under partial observability, is reinforcement learning the most suitable and effective approach?
  • How can we extend deep RL methods to robustly solve partially observable problems?
  • Can we learn concise abstractions of history that are sufficient for high-quality decision-making?
  • Despite the inherent challenges, there have been several successes in decision-making under partial observability. Can we characterize the problems for which computing good policies is feasible?
  • Since decision-making under partial observability is hard, should we use more complex models and solve them approximately, use simpler (inaccurate) models and solve them exactly, or dispense with models altogether?
  • How can we use control theory together with reinforcement learning to advance decision-making under partial observability?
  • Can we combine the strengths of model-based and model-free methods under partial observability?
  • Can recent methodological improvements in general RL already tackle partially observable applications that were previously out of reach?
  • How do we scale up reinforcement learning in multi-agent systems with partial observability?
  • Do hierarchical models and temporal abstraction improve the efficiency of RL under partial observability?

Website: https://sites.google.com/site/rlponips2018

Invited Speakers

Joelle Pineau (McGill University / Facebook), Pieter Abbeel (UC Berkeley), Leslie Kaelbling (MIT), Anca Dragan (UC Berkeley), David Silver (Google DeepMind / University College London), Peter Stone (University of Texas at Austin), Jilles Dibangoye (INSA Lyon)

Important Dates

  • Submission deadline: Friday, October 26, 2018, 23:00 UTC
  • Author notification: Monday, November 5, 2018
  • Final paper posted online: Monday, December 3, 2018
  • Workshop: Saturday, December 8, 2018

Submission

  • Extended abstracts are solicited on reinforcement learning under partial observability and related fields, covering both theory and practice
  • We encourage both novel research and preliminary results
  • Submissions should be 2 pages long, excluding references and appendices, in the NeurIPS style
  • Non-anonymized submissions should be sent through EasyChair at https://easychair.org/conferences/?conf=rlpo2018

Accepted Papers

  • All accepted papers will be presented as spotlights and during two poster sessions. Authors of the top accepted papers will be invited to give a short contributed talk
  • Accepted papers will be made publicly available as non-archival reports, allowing future and concurrent submissions to archival conferences and journals
  • For each accepted paper, one author will receive, if needed, a registration reservation that allows them to register for the workshop
  • Authors are requested to specify their NeurIPS workshop registration email address on the accepted paper

Organizers

Joni Pajarinen (TU Darmstadt), Christopher Amato (Northeastern University), Pascal Poupart (University of Waterloo), David Hsu (National University of Singapore)

Contact Information

E-mail: rl-under-partial-observability-nips-2018@googlegroups.com