Deep Reinforcement Learning Workshop

NeurIPS 2019

About

In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to solve increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains like robotics, strategy games, and multi-agent interaction. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help interested researchers outside of the field gain a high-level view of the current state of the art and potential directions for future contributions.

For previous editions, please visit NeurIPS 2018, 2017, 2016, 2015.

Invited Speakers

Organizers

Schedule

Morning (08:45 - 12:30)

  • 08:45 - 09:00 Welcome Comments
  • 09:00 - 09:30 Oriol Vinyals - Grandmaster Level in StarCraft II using Multi-Agent Reinforcement Learning
  • 09:30 - 10:00 contributed talks
      • 09:30 - 09:40 Playing Dota 2 with Large Scale Deep Reinforcement Learning - OpenAI, Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique Pondé de Oliveira Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, Susan Zhang
      • 09:40 - 09:50 Self-Imitation Learning via Trajectory-Conditioned Policy for Hard-Exploration Tasks - Yijie Guo, Jongwook Choi, Marcin Moczulski, Samy Bengio, Mohammad Norouzi, Honglak Lee
      • 09:50 - 10:00 Efficient Visual Control by Latent Imagination - Danijar Hafner, Timothy Lillicrap, Jimmy Ba, Mohammad Norouzi
  • 10:00 - 10:30 Shimon Whiteson - Bayes-Adaptive Deep Reinforcement Learning via Meta-Learning
  • 10:30 - 11:00 coffee break
  • 11:00 - 11:30 Emo Todorov - Optico: A Framework for Model-Based Optimization with MuJoCo Physics
  • 11:30 - 12:00 contributed talks
      • 11:30 - 11:40 Adaptive Online Planning for Lifelong Reinforcement Learning - Kevin Lu, Igor Mordatch, Pieter Abbeel
      • 11:40 - 11:50 Interactive Fiction Games: A Colossal Adventure - Matthew Hausknecht, Prithviraj V Ammanabrolu, Marc-Alexandre Côté, Xingdi Yuan
      • 11:50 - 12:00 Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning? - Ofir Nachum, Haoran Tang, Xingyu Lu, Shixiang Gu, Honglak Lee, Sergey Levine
  • 12:00 - 12:30 Late-Breaking Papers (Talks)
      • 12:00 - 12:10 Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model - Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver
      • 12:10 - 12:20 Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning? - Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang
      • 12:20 - 12:30 Solving Rubik's Cube with a Robot Hand - OpenAI, Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, Jonas Schneider, Nikolas Tezak, Jerry Tworek, Peter Welinder, Lilian Weng, Qiming Yuan, Wojciech Zaremba, Lei Zhang

One-hour lunch break from 12:30 - 13:30.

Afternoon (13:30 - 18:00)

  • 13:30 - 14:00 Emma Brunskill - RL Challenges Inspired from People-Focused Applications
  • 14:00 - 14:30 contributed talks
      • 14:00 - 14:10 Striving for Simplicity in Off-Policy Deep Reinforcement Learning - Rishabh Agarwal, Dale Schuurmans, Mohammad Norouzi
      • 14:10 - 14:20 Adversarial Policies: Attacking Deep Reinforcement Learning - Adam R Gleave, Michael Dennis, Neel Kant, Cody Wild, Sergey Levine, Stuart Russell
      • 14:20 - 14:30 Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning - Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee
  • 14:30 - 16:00 Poster Session + coffee
  • 16:00 - 17:00 NeurIPS RL Competitions Results Presentations
  • 17:00 - 17:30 Michael Littman - Assessing the Robustness of Deep RL Algorithms
  • 17:30 - 18:00 Panel Discussion
      • Panelists: Raia Hadsell, Anna Harutyunyan, Michael Littman, Emo Todorov, Oriol Vinyals
      • Moderator: Pieter Abbeel

Date: Sat Dec 14, 2019

Time: 8:45am - 6:00pm

Room: West Exhibition Hall C

Submit questions for the panel here

Accepted Papers

all-papers-deep-rl-workshop-2019.zip


Late-Breaking Papers (Poster)

  • Grandmaster Level in StarCraft II using Multi-Agent Reinforcement Learning; Oriol Vinyals (DeepMind), Igor Babuschkin (DeepMind), Wojciech M. Czarnecki (DeepMind), Michaël Mathieu (DeepMind), Andrew Dudzik (DeepMind), Junyoung Chung (DeepMind), David H. Choi (DeepMind), Richard Powell (DeepMind), Timo Ewalds (DeepMind), Petko Georgiev (DeepMind), Junhyuk Oh (DeepMind), Dan Horgan (DeepMind), Manuel Kroiss (DeepMind), Ivo Danihelka (DeepMind), Aja Huang (DeepMind), Laurent Sifre (DeepMind), Trevor Cai (DeepMind), John P. Agapiou (DeepMind), Max Jaderberg (DeepMind), Alexander S. Vezhnevets (DeepMind), Rémi Leblond (DeepMind), Tobias Pohlen (DeepMind), Valentin Dalibard (DeepMind), David Budden (DeepMind), Yury Sulsky (DeepMind), James Molloy (DeepMind), Tom L. Paine (DeepMind), Caglar Gulcehre (DeepMind), Ziyu Wang (DeepMind), Tobias Pfaff (DeepMind), Yuhuai Wu (DeepMind), Roman Ring (DeepMind), Dani Yogatama (DeepMind), Dario Wünsch (DeepMind), Katrina McKinney (DeepMind), Oliver Smith (DeepMind), Tom Schaul (DeepMind), Timothy Lillicrap (DeepMind), Koray Kavukcuoglu (DeepMind), Demis Hassabis (DeepMind), Chris Apps (DeepMind), David Silver (DeepMind)
  • Positive-Unlabeled Reward Learning; Danfei Xu (Stanford), Misha Denil (Stanford)
  • Learning to Scaffold the Development of Robotic Manipulation Skills; Lin Shao (Stanford), Toki Migimatsu (Stanford), Jeannette Bohg (Stanford)
  • Improving Sample Efficiency in Model-Free Reinforcement Learning from Images; Denis Yarats (New York University, FAIR), Amy Zhang (McGill, MILA, FAIR), Ilya Kostrikov (New York University), Brandon Amos (FAIR), Joelle Pineau (McGill, MILA, FAIR), Rob Fergus (New York University, FAIR)
  • Off-Policy Actor-Critic with Shared Experience Replay; Simon Schmitt (DeepMind), Matteo Hessel (DeepMind), Karen Simonyan (DeepMind)

Competition "Learn to Move: Walk Around" Awards Papers (Poster)

Information about Posters

  • Posters are taped to the wall with the special tabs provided at the venue.
  • Please make your posters 36W x 48H inches or 90 x 122 cm.
  • Posters should be on lightweight paper, not laminated.

Program Committee

We would like to thank the following people for their effort in making this year's edition of the Deep RL Workshop a success.

  • Pulkit Agrawal
  • Maruan Al-Shedivat
  • Marcin Andrychowicz
  • Glen Berseth
  • Diana Borsa
  • Noam Brown
  • Roberto Calandra
  • Devendra Singh Chaplot
  • Richard Chen
  • Ignasi Clavera
  • Coline Devin
  • Rocky Duan
  • Harri Edwards
  • Jakob Foerster
  • Justin Fu
  • Yasuhiro Fujita
  • Shixiang Gu
  • Arthur Guez
  • Xiaoxiao Guo
  • Abhishek Gupta
  • David Ha
  • Tuomas Haarnoja
  • Danijar Hafner
  • Jean Harb
  • Anna Harutyunyan
  • Matt Hausknecht
  • Karol Hausman
  • Rein Houthooft
  • Sandy Huang
  • Max Jaderberg
  • Eric Jang
  • Gregory Kahn
  • Tejas Kulkarni
  • Alex Lee
  • Lisa Lee
  • Ryan Lowe
  • Kendall Lowrey
  • Rowan McAllister
  • Vlad Mnih
  • Nikhil Mishra
  • Igor Mordatch
  • Ofir Nachum
  • Ashvin Nair
  • Karthik Narasimhan
  • Junhyuk Oh
  • Emilio Parisotto
  • Deepak Pathak
  • Xue Bin Peng
  • Vitchyr Pong
  • Lerrel Pinto
  • Janarthanan Rajendran
  • Aravind Rajeswaran
  • Sid Reddy
  • Tim Salimans
  • Pierre Sermanet
  • Rohin Shah
  • Max Smith
  • Bradly Stadie
  • Aviv Tamar
  • Yuandong Tian
  • Josh Tobin
  • George Tucker
  • Sasha Vezhnevets
  • Jane Wang
  • Tony Wu
  • Marvin Zhang
  • Zeyu Zheng
  • Shangtong Zhang
  • Zhongwen Xu
  • Risto Vuorio
  • Qi Zhang
  • Jongwook Choi
  • Huazhe Xu
  • Yi Wu
  • Markus Wulfmeier

FAQ

Q: Is it OK to submit a paper that will also be submitted to ICLR 2020?

A: Yes.

Q: Is it OK to submit a paper that was accepted into CoRL 2019?

A: Yes.

Q: Is it OK to submit a paper that was rejected from the NeurIPS main conference?

A: Yes.

Q: Will there be official archival proceedings?

A: No.

Q: Should submitted papers be anonymized?

A: Yes! If accepted, we will ask for a de-anonymized version to link on the website, like in previous years.

Q: Wait, what time *precisely* is the deadline?

A: Sept 9, 11:59 PM PST.

Q: What are dimensions for the poster?

A: 36W x 48H inches or 90 x 122 cm, on lightweight paper (not laminated).