Deep Reinforcement Learning Workshop
NeurIPS 2019
About
In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to solve increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains like robotics, strategy games, and multi-agent interaction. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help interested researchers outside of the field gain a high-level view of the current state of the art and potential directions for future contributions.
For previous editions, please visit NeurIPS 2018, 2017, 2016, 2015.
Invited Speakers
- Emma Brunskill (Stanford)
- Michael Littman (Brown University)
- Emo Todorov (University of Washington)
- Oriol Vinyals (DeepMind)
- Shimon Whiteson (University of Oxford)
Organizers
Schedule
Morning (08:45 - 12:30)
- 08:45 - 09:00 Welcome Comments
- 09:00 - 09:30 Oriol Vinyals - Grandmaster Level in StarCraft II using Multi-Agent Reinforcement Learning
- 09:30 - 10:00 contributed talks
- 09:30 - 09:40 Playing Dota 2 with Large Scale Deep Reinforcement Learning - OpenAI, Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique Pondé de Oliveira Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, Susan Zhang
- 09:40 - 09:50 Self-Imitation Learning via Trajectory-Conditioned Policy for Hard-Exploration Tasks - Yijie Guo, Jongwook Choi, Marcin Moczulski, Samy Bengio, Mohammad Norouzi, Honglak Lee
- 09:50 - 10:00 Efficient Visual Control by Latent Imagination - Danijar Hafner, Timothy Lillicrap, Jimmy Ba, Mohammad Norouzi
- 10:00 - 10:30 Shimon Whiteson - Bayes-Adaptive Deep Reinforcement Learning via Meta-Learning
- 10:30 - 11:00 coffee break
- 11:00 - 11:30 Emo Todorov - Optico: A Framework for Model-Based Optimization with MuJoCo Physics
- 11:30 - 12:00 contributed talks
- 11:30 - 11:40 Adaptive Online Planning for Lifelong Reinforcement Learning - Kevin Lu, Igor Mordatch, Pieter Abbeel
- 11:40 - 11:50 Interactive Fiction Games: A Colossal Adventure - Matthew Hausknecht, Prithviraj Ammanabrolu, Marc-Alexandre Côté, Xingdi Yuan
- 11:50 - 12:00 Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning? - Ofir Nachum, Haoran Tang, Xingyu Lu, Shixiang Gu, Honglak Lee, Sergey Levine
- 12:00 - 12:30 Late-Breaking Papers (Talks)
- 12:00 - 12:10 Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model - Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver
- 12:10 - 12:20 Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning? - Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang
- 12:20 - 12:30 Solving Rubik's Cube with a Robot Hand - OpenAI, Ilge Akkaya, Marcin Andrychowicz, Maciek Chociej, Mateusz Litwin, Bob McGrew, Arthur Petron, Alex Paino, Matthias Plappert, Glenn Powell, Raphael Ribas, Jonas Schneider, Nikolas Tezak, Jerry Tworek, Peter Welinder, Lilian Weng, Qiming Yuan, Wojciech Zaremba, Lei Zhang
One-hour lunch break from 12:30 to 13:30.
Afternoon (13:30 - 18:00)
- 13:30 - 14:00 Emma Brunskill - RL Challenges Inspired by People-Focused Applications
- 14:00 - 14:30 contributed talks
- 14:00 - 14:10 Striving for Simplicity in Off-Policy Deep Reinforcement Learning - Rishabh Agarwal, Dale Schuurmans, Mohammad Norouzi
- 14:10 - 14:20 Adversarial Policies: Attacking Deep Reinforcement Learning - Adam R Gleave, Michael Dennis, Neel Kant, Cody Wild, Sergey Levine, Stuart Russell
- 14:20 - 14:30 Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning - Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee
- 14:30 - 16:00 Poster Session + coffee
- 16:00 - 17:00 NeurIPS RL Competitions Results Presentations
- 16:00 - 16:15 Learn to Move: Walk Around - Seungmoon Song, Fan Wang
- 16:15 - 16:30 Animal-AI Olympics - Matthew Crosby
- 16:30 - 16:45 Robot Open-Ended Autonomous Learning (REAL) - Emilio Cartoni
- 16:45 - 17:00 MineRL - William Guss
- 17:00 - 17:30 Michael Littman - Assessing the Robustness of Deep RL Algorithms
- 17:30 - 18:00 Panel Discussion
- Panelists: Raia Hadsell, Anna Harutyunyan, Michael Littman, Emo Todorov, Oriol Vinyals
- Moderator: Pieter Abbeel
Date: Sat Dec 14, 2019
Time: 8:45am - 6:00pm
Room: West Exhibition Hall C
Accepted Papers
all-papers-deep-rl-workshop-2019.zip
- Ecological Reinforcement Learning; John Co-Reyes (UC Berkeley)*; Suvansh Sanjeev (UC Berkeley); Glen Berseth (University of California Berkeley); Abhishek Gupta (UC Berkeley); Sergey Levine (UC Berkeley).
- Learning Efficient Representation for Intrinsic Motivation; Ruihan Zhao (University of California, Berkeley)*; Stas Tiomkin (BAIR, UC Berkeley); Pieter Abbeel (UC Berkeley) [external pdf link].
- Towards Characterizing Divergence in Deep Q-Learning; Joshua Achiam (OpenAI)*; Ethan Knight (OpenAI); Pieter Abbeel (UC Berkeley).
- Making Efficient Use of Demonstrations to Solve Hard Exploration Problems; Thomas Paine (DeepMind)*; Caglar Gulcehre (DeepMind); Bobak Shahriari (DeepMind); Misha Denil (DeepMind); Matt Hoffman (DeepMind); Hubert Soyer (DeepMind); Richard Tanburn (DeepMind); Steven Kapturowski (DeepMind); Neil Rabinowitz (DeepMind); Duncan Williams (DeepMind); Gabriel Barth-Maron (DeepMind); Ziyu Wang (DeepMind); Nando de Freitas (DeepMind).
- Offline Reinforcement Learning via Trajectory Synthesis; Wei-Yang Qu (Nanjing University); Yang Yu (Nanjing University); Qingyang Li (Didi Research America)*; Zhiwei Qin (Didi Research America); Mengyue Yang (AI Labs, Didi Chuxing); Yiping Meng (Didi Chuxing); Jieping Ye (Didi Chuxing) [external pdf link].
- Intelligent Coordination among Multiple Traffic Intersections Using Multi-Agent Reinforcement Learning; Ujwal Tewari (Siemens)*; Vishal Bidawatka (International Institute of Information Technology Hyderabad); Varsha Raveendran (Siemens Technology and Services); Vinay Sudhakaran (Siemens) [external pdf link].
- Prioritized Sequence Experience Replay; Marc Brittain (Iowa State University)*; Joshua Bertram (Iowa State University); Xuxi Yang (Iowa State University); Peng Wei (Iowa State University) [external pdf link].
- On Learning Symmetric Locomotion; Farzad Abdolhosseini (University of British Columbia); Hung Yu Ling (University of British Columbia)*; Zhaoming Xie (University of British Columbia); Xue Bin Peng (UC Berkeley); Michiel van de Panne (University of British Columbia).
- rlpyt: A Research Code Base for Deep Reinforcement Learning in PyTorch; Adam Stooke (UC Berkeley)*; Pieter Abbeel (UC Berkeley).
- IRIS: Implicit Reinforcement without Interaction at Scale for Learning Control from Offline Robot Manipulation Data; Ajay Mandlekar (Stanford University)*; Animesh Garg (University of Toronto, Nvidia); Fabio Ramos (NVIDIA, The University of Sydney) [external pdf link].
- Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies; Sungryull Sohn (University of Michigan)*; Hyunjae Woo (University of Michigan); Jongwook Choi (University of Michigan); Honglak Lee (University of Michigan, Ann Arbor).
- Neural Policy Gradient Methods: Global Optimality and Rates of Convergence; Lingxiao Wang (Northwestern University)*; Qi Cai (Northwestern University); Zhuoran Yang (Princeton University); Zhaoran Wang (Northwestern University).
- Meta-learning curiosity algorithms; Ferran Alet (MIT)*; Martin Schneider (MIT); Tomas Lozano-Perez (MIT); Leslie Kaelbling (MIT).
- ZPD Teaching Strategies for Deep Reinforcement Learning from Demonstrations; Daniel Seita (University of California, Berkeley)*; David Chan (University of California, Berkeley); Roshan Rao (UC Berkeley); Chen Tang (UC Berkeley); Mandi Zhao (UC Berkeley); John Canny (UC Berkeley) [external pdf link].
- Temporal-difference learning for nonlinear value function approximation in the lazy training regime; Andrea Agazzi (Duke University)*; Jianfeng Lu (Duke University).
- Improving Policies via Search in Cooperative Partially Observable Games; Adam Lerer (Facebook AI Research); Hengyuan Hu (Facebook); Jakob Foerster (Facebook AI Research); Noam Brown (Facebook AI Research)* [external pdf link].
- Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning; Kimin Lee (KAIST)*; Kibok Lee (University of Michigan); Jinwoo Shin (KAIST); Honglak Lee (University of Michigan, Ann Arbor).
- Recurrent neural-linear posterior sampling for non-stationary bandits; Paulo Rauber (IDSIA)*; Aditya Ramesh (USI); Jürgen Schmidhuber (IDSIA - Lugano).
- Multiplayer AlphaZero; Nicholas Petosa (Georgia Institute of Technology)*; Tucker Balch (Georgia Institute of Technology) [external pdf link].
- Sparse Skill Coding: Learning Behavioral Hierarchies with Efficient Coding; Sophia Sanborn (UC Berkeley)*; Michael Chang (University of California, Berkeley); Sergey Levine (UC Berkeley); Thomas Griffiths (Princeton University).
- DoorGym: A Scalable Door Opening Environment and Baseline Agent; Yusuke Urakami (Panasonic Beta)*; Alec Hodgkinson (Panasonic Beta); Casey Carlin (Panasonic Beta); Randall Leu (Panasonic Beta); Luca Rigazio (Panasonic); Pieter Abbeel (UC Berkeley).
- CSG Tree LSTM: Parsing CSG Image with Entropy Regulated REINFORCE; Chenghui Zhou (Carnegie Mellon University)*; Chun-Liang Li (Carnegie Mellon University).
- Biologically inspired architectures for sample-efficient deep reinforcement learning; Pierre Richemond (Imperial College)*; Arinbjorn Kolbeinsson (Imperial College); Yike Guo (Imperial College London).
- Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems; Jeff Druce (Charles River Analytics)*; James Tittle (Charles River Analytics); Michael Harradon (Charles River Analytics).
- Spatiotemporally Constrained Action Space Attacks on Deep Reinforcement Learning Agents; Xian Yeow Lee (Iowa State University)*; Sambit Ghadai (Iowa State University); Kai Liang Tan (Iowa State University); Chinmay Hegde (New York University); Soumik Sarkar (Iowa State University) [external pdf link].
- Decentralized Multi-Agent Actor-Critic with Generative Inference; Kevin Corder (University of Delaware)*; Manuel Vindiola; Keith Decker (University of Delaware).
- ChainerRL: A Deep Reinforcement Learning Library; Yasuhiro Fujita (Preferred Networks, Inc.)*; Toshiki Kataoka (Preferred Networks, Inc.); Prabhat Nagarajan (Preferred Networks); Takahiro Ishikawa (The University of Tokyo) [external pdf link].
- Is Deep Reinforcement Learning Really Superhuman on Atari?; Marin Toromanoff (Mines ParisTech)*; Emilie Wirbel (Valeo); Fabien Moutarde (Mines ParisTech) [external pdf link].
- Single Deep Counterfactual Regret Minimization; Eric Steinberger (University of Cambridge)*.
- Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization; Hao Liu (UC Berkeley)*; Richard Socher (Salesforce); Caiming Xiong (Salesforce Research).
- AVID: Translating Human Demonstrations for Automated Training; Laura Smith (UC Berkeley)*; Nikita Dhawan (UC Berkeley); Marvin Zhang (UC Berkeley); Pieter Abbeel (UC Berkeley); Sergey Levine (UC Berkeley).
- A unified view of likelihood ratio and reparameterization gradients and an optimal importance sampling scheme; Paavo Parmas (Okinawa Institute of Science and Technology)*; Masashi Sugiyama (RIKEN/The University of Tokyo).
- Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?; Ofir Nachum (Google)*; Haoran Tang (University of California Berkeley); Xingyu Lu (Berkeley); Shixiang Gu (Google Brain); Honglak Lee (Google); Sergey Levine (UC Berkeley) [external pdf link].
- Regularization Matters in Policy Optimization; Zhuang Liu (UC Berkeley)*; Xuanlin Li (UC Berkeley); Bingyi Kang (National University of Singapore); Trevor Darrell (UC Berkeley) [external pdf link].
- Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery; Kristian Hartikainen (UC Berkeley)*; Xinyang Geng (UC Berkeley); Tuomas Haarnoja (DeepMind); Sergey Levine (UC Berkeley) [external pdf link].
- Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model; Alex Lee (UC Berkeley)*; Anusha Nagabandi (UC Berkeley); Pieter Abbeel (UC Berkeley); Sergey Levine (UC Berkeley) [external pdf link].
- Evolutionary Reinforcement Learning for Sample-Efficient Multiagent Coordination; Shauharda Khadka (Intel AI Lab); Somdeb Majumdar (Intel AI Lab)*; Santiago Miret (Intel AI Lab); Stephen McAleer (Intel AI Lab); Kagan Tumer (Oregon State University) [external pdf link].
- Multi-Agent Manipulation via Locomotion using Hierarchical Sim2Real; Ofir Nachum (Google)*; Michael Ahn (Google); Hugo Ponte (Self); Shixiang Gu (Google Brain); Vikash Kumar (Google) [external pdf link].
- Learning Independently-Obtainable Reward Functions; Christopher Grimm (University of Michigan)*; Satinder Singh (University of Michigan).
- Off-Policy Actor-Critic with Shared Experience Replay; Simon Schmitt (DeepMind)*; Karen Simonyan (DeepMind); Matteo Hessel (DeepMind) [external pdf link].
- Learning with Identity and Uniqueness through Social Constraint; Hao Sun (CUHK)*; Jiankai Sun (SenseTime Group Limited); Zhenghao Peng (The Chinese University of Hong Kong); Dahua Lin (The Chinese University of Hong Kong); Bolei Zhou (CUHK).
- Semantic RL with Action Grammars: Data-Efficient Learning of Hierarchical Task Abstractions; Robert Lange (Imperial College London)*; Aldo Faisal (Imperial College London).
- The PlayStation Reinforcement Learning Environment (PSXLE); Carlos Purves (University of Cambridge)*; Cătălina Cangea (University of Cambridge); Petar Veličković (DeepMind).
- Interactive Fiction Games: A Colossal Adventure; Matthew Hausknecht (Microsoft Research)*; Prithviraj Ammanabrolu (Georgia Institute of Technology); Marc-Alexandre Côté (Microsoft Research); Xingdi Yuan (Microsoft Research) [external pdf link].
- Predictive Coding for Boosting Deep Reinforcement Learning with Sparse Rewards; Xingyu Lu (Berkeley)*; Stas Tiomkin (BAIR, UC Berkeley); Pieter Abbeel (UC Berkeley).
- Asynchronous Methods for Model-Based Reinforcement Learning; Ignasi Clavera (UC Berkeley)*; Yunzhi Zhang (UC Berkeley); Boren Tsai (UC Berkeley); Pieter Abbeel (UC Berkeley).
- Bottom-Up Meta-Policy Search; Luckeciano Melo (Aeronautics Institute of Technology)*; Marcos Máximo (Aeronautics Institute of Technology); Adilson Cunha (Aeronautics Institute of Technology) [external pdf link].
- Multi-Agent Hierarchical Reinforcement Learning for Humanoid Navigation; Glen Berseth (University of California Berkeley)*; Brandon Haworth (York University); Mubbasir Kapadia (Rutgers University); Petros Faloutsos (York University).
- Thinking While Moving: Deep Reinforcement Learning with Concurrent Control; Ted Xiao (Google)*; Eric Jang (Google Brain); Dmitry Kalashnikov (Google Inc.); Sergey Levine (Google); Julian Ibarz (Google); Karol Hausman (Google Brain); Alexander Herzog (X).
- Swarm-inspired Reinforcement Learning via Collaborative Inter-agent Knowledge Distillation; Zhang-Wei Hong (Preferred Networks)*; Prabhat Nagarajan (Preferred Networks); Guilherme Maeda (Preferred Networks).
- DisCoRL: Continual Reinforcement Learning via Policy Distillation; René Traoré (ENSTA ParisTech); Hugo Caselles-Dupré (Flowers Team (ENSTA ParisTech & INRIA) & Softbank Robotics Europe); Timothee Lesort (ENSTA ParisTech)*; Te Sun (ENSTA ParisTech); Guanghang Cai (ENSTA ParisTech); David Filliat (ENSTA); Natalia Diaz Rodriguez (ENSTA Paris & INRIA Flowers) [external pdf link].
- V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control; Francis Song (DeepMind)*; Abbas Abdolmaleki (DeepMind); Jost Tobias Springenberg (DeepMind); Aidan Clark (DeepMind); Hubert Soyer (DeepMind); Jack Rae (DeepMind); Seb Noury (DeepMind); Arun Ahuja (DeepMind); Siqi Liu (DeepMind); Dhruva Tirumala (DeepMind); Nicolas Heess (DeepMind); Dan Belov (DeepMind); Martin Riedmiller (DeepMind); Matthew Botvinick (DeepMind) [external pdf link].
- Objective Mismatch in Model-based Reinforcement Learning; Nathan Lambert (UC Berkeley)*; Brandon Amos (Facebook); Omry Yadan (Facebook); Roberto Calandra (Facebook).
- Dream to Control: Learning Behaviors by Latent Imagination; Danijar Hafner (Google)*; Timothy Lillicrap (DeepMind); Jimmy Ba (University of Toronto); Mohammad Norouzi (Google Brain) [external pdf link].
- Behavior-Regularized Offline Reinforcement Learning; Yifan Wu (Carnegie Mellon University)*; George Tucker (Google Brain); Ofir Nachum (Google) [external pdf link].
- Visual Reinforcement Learning with Discrete Latent Variables; Michael Laskin (UC Berkeley)*; Thanard Kurutach (UC Berkeley); Pieter Abbeel (UC Berkeley).
- Options of Interest: Temporal Abstraction with Interest Functions; Khimya Khetarpal (McGill University)*; Martin Klissarov (McGill); Maxime Chevalier-Boisvert (Mila, Université de Montréal); Pierre-Luc Bacon (Stanford University); Doina Precup (McGill University).
- Receiving Uncertainty-Aware Advice in Deep Reinforcement Learning; Felipe Leno da Silva (University of Sao Paulo)*; Pablo Hernandez-Leal (Borealis AI); Bilal Kartal (Borealis AI); Matthew Taylor (Borealis AI).
- Deep Imitative Models for Flexible Inference, Planning, and Control; Nicholas Rhinehart (UC Berkeley)*; Rowan McAllister (UC Berkeley); Sergey Levine (UC Berkeley).
- Deep Reinforcement Learning for Industrial Insertion Tasks with Visual Inputs and Natural Rewards; Gerrit Schoettler (Siemens); Ashvin Nair (UC Berkeley)*; Jianlan Luo (UC Berkeley); Shikhar Bahl (UC Berkeley); Juan Aparicio Ojea (Siemens); Eugen Solowjow (Siemens); Sergey Levine (UC Berkeley) [external pdf link].
- Option Discovery using Deep Skill Chaining; Akhil Bagaria (Brown University)*; George Konidaris (Brown University).
- Training Agents using Upside-Down Reinforcement Learning; Rupesh Srivastava (NNAISENSE)*; Pranav Shyam (NNAISENSE); Filipe Mutz (IFES/UFES); Wojciech Jaśkowski (NNAISENSE SA); Jürgen Schmidhuber (IDSIA - Lugano) [external pdf link].
- Skew-Fit: State-Covering Self-Supervised Reinforcement Learning; Murtaza Dalal (UC Berkeley)*; Vitchyr Pong (UC Berkeley); Steven Lin (UC Berkeley); Ashvin Nair (UC Berkeley); Shikhar Bahl (UC Berkeley); Sergey Levine (UC Berkeley).
- The StarCraft Multi-Agent Challenge; Mikayel Samvelyan (Russian-Armenian University)*; Tabish Rashid (University of Oxford); Christian Schroeder de Witt (University of Oxford); Gregory Farquhar (University of Oxford); Nantas Nardelli (University of Oxford); Tim G. J. Rudner (University of Oxford); Chia-Man Hung (University of Oxford); Philip Torr (University of Oxford); Jakob Foerster (University of Oxford); Shimon Whiteson (University of Oxford).
- Imitation Learning via Off-Policy Distribution Matching; Ilya Kostrikov (Google/New York University)*; Ofir Nachum (Google); Jonathan Tompson (Google).
- Behaviour Suite for Reinforcement Learning; Ian Osband (DeepMind)* [external pdf link].
- 3D macromolecule localization in cryo-electron tomography with deep reinforcement learning; Yaohui Cai (Peking University); Xiangrui Zeng (Carnegie Mellon University); Yuchen Zeng (Pennsylvania State University); Weilin Liu (Tsinghua University); Jie Jin (Chinese Academy of Sciences); Zachary Freyberg (University of Pittsburgh); Ge Yang (Chinese Academy of Sciences); Min Xu (Carnegie Mellon University)*.
- Tetris Battle – A New Environment for Single Mode and Double Mode Game; Yi-Lin Sung (National Taiwan University)*.
- Improving Evolutionary Strategies With Past Descent Directions; Asier Mujika (ETH Zurich)*; Florian Meier (ETH Zurich); Marcelo Matheus Gauy (ETH Zurich); Angelika Steger (ETH Zurich) [external pdf link].
- Dynamic Vehicle Dispatching Based on Minimum Fleet: A Deep Reinforcement Learning Method; Wenqi Zhang (Beijing University of Posts and Telecommunications); Qiang Wang (Beijing University of Posts and Telecommunications)*; Jingjing Li (Beijing University of Posts and Telecommunications); Donghai Shi (Didi Chuxing).
- AC-Teach: A Bayesian Actor-Critic Method for Policy Learning with an Ensemble of Suboptimal Teachers; Andrey Kurenkov (Stanford University)*; Ajay Mandlekar (Stanford University); Roberto Martín-Martín (Stanford University); Animesh Garg (University of Toronto, Nvidia).
- On the Convergence of Episodic Reinforcement Learning Algorithms at the Example of RUDDER; Markus Holzleitner (LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria); José Arjona-Medina (LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria); Marius-Constantin Dinu (LIT AI Lab, Johannes Kepler University Linz); Sepp Hochreiter (LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria)*.
- Efficient Exploration via State Marginal Matching; Lisa Lee (Carnegie Mellon University); Ben Eysenbach (Carnegie Mellon University)*; Emilio Parisotto (Carnegie Mellon University); Eric Xing (Petuum Inc. and CMU); Sergey Levine (UC Berkeley); Ruslan Salakhutdinov (Carnegie Mellon University) [external pdf link].
- Approximating two value functions instead of one: towards characterizing a new family of Deep Reinforcement Learning algorithms; Matthia Sabatelli (University of Liege)*; Gilles Louppe (University of Liège); Pierre Geurts (University of Liège); Marco Wiering (University of Groningen) [external pdf link].
- Learning Latent State Spaces for Planning through Reward Predictions; Aaron Havens (University of Illinois Urbana Champaign)*; Yi Ouyang (Preferred Networks); Prabhat Nagarajan (Preferred Networks); Yasuhiro Fujita (Preferred Networks, Inc.) [external pdf link].
- On the Design of Variational RL Algorithms; Joe Marino (California Institute of Technology)*; Alexandre Piché (Université de Montréal); Yisong Yue (Caltech).
- Conservative Policy Gradient: Reducing Variance and Instability in Off-Policy Reinforcement Learning; Chen Tessler (Technion)*; Nadav Merlis (Technion); Shie Mannor (Technion).
- SMiRL: Surprise Minimizing RL in Dynamic Environments; Glen Berseth (University of California Berkeley)*; Coline Devin (UC Berkeley); Daniel Geng (UC Berkeley); Dinesh Jayaraman (UC Berkeley); Chelsea Finn (UC Berkeley); Sergey Levine (UC Berkeley).
- Deep Dynamics Models for Learning Dexterous Manipulation; Anusha Nagabandi (UC Berkeley)*; Kurt Konolige (Google); Sergey Levine (University of California, Berkeley); Vikash Kumar (Google).
- Accelerating Training in Pommerman with Imitation and Reinforcement Learning; Hardik Meisheri (TCS Research)*; Omkar Shelke (TCS Research); Richa Verma (TCS Research); Harshad Khadilkar (TCS Research).
- QXplore: Q-learning Exploration by Maximizing Temporal Difference Error; Riley Simmons-Edler (Princeton University)*; Benjamin Eisner (Samsung Research America); Eric Mitchell (Samsung AI Center NYC); H. Sebastian Seung (Princeton University); Daniel Lee (Cornell University).
- Striving for Simplicity in Off-Policy Deep Reinforcement Learning; Rishabh Agarwal (Google Research, Brain Team)*; Dale Schuurmans (Google / University of Alberta); Mohammad Norouzi (Google Brain) [external pdf link].
- Measuring the Reliability of Reinforcement Learning Algorithms; Stephanie Chan (Google)*; Sam Fishman (Google); John Canny (UC Berkeley); Anoop Korattikara (Google); Sergio Guadarrama (Google).
- Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling; Yuping Luo (Princeton University)*; Huazhe Xu (UC Berkeley); Tengyu Ma (Stanford University).
- Characterizing Policy Divergence for Personalized Meta-Reinforcement Learning; Michael Zhang (Harvard University)*.
- Google Research Football: A Novel Reinforcement Learning Environment; Karol Kurach (Google Brain)*; Anton Raichuk (Google); Piotr Stańczyk (Google Brain); Michał Zając (Google Brain); Olivier Bachem (Google Brain); Lasse Espeholt (DeepMind); Carlos Riquelme (Google Brain); Damien Vincent (Google Brain); Marcin Michalski (Google); Olivier Bousquet (Google); Sylvain Gelly (Google Brain) [external pdf link].
- Learning an off-policy predictive state representation for deep reinforcement learning for vision-based steering in autonomous driving; Daniel Graves (Huawei)*.
- Benchmarking Safe Exploration in Deep Reinforcement Learning; Alex Ray (OpenAI)*; Joshua Achiam (OpenAI).
- Adaptive Online Planning for Continual Lifelong Learning; Kevin Lu (UC Berkeley)*; Igor Mordatch; Pieter Abbeel (UC Berkeley) [external pdf link].
- Deep Model Predictive Control with Safety Augmented Value Estimation from Demonstrations; Brijen Thananjeyan (UC Berkeley); Ashwin Balakrishna (UC Berkeley)*; Ugo Rosolia (UC Berkeley); Felix Li (UC Berkeley); Joseph Gonzalez (UC Berkeley); Sergey Levine (UC Berkeley); Francesco Borrelli (UC Berkeley); Ken Goldberg (UC Berkeley).
- Learning to Combat Compounding-Error in Model-Based Reinforcement Learning; Chenjun Xiao (University of Alberta)*; Yifan Wu (Carnegie Mellon University); Chen Ma (University of Alberta); Dale Schuurmans (Google / University of Alberta); Martin Müller (University of Alberta).
- Multi-Task Reinforcement Learning without Interference; Tianhe Yu (Stanford University)*; Saurabh Kumar (Stanford); Abhishek Gupta (UC Berkeley); Karol Hausman (Google Brain); Sergey Levine (UC Berkeley); Chelsea Finn (UC Berkeley).
- Optimal Liquidation with Deep Reinforcement Learning; Siyu Lin (University of Virginia)*; Peter Beling (University of Virginia).
- Harnessing Structures for Value-Based Planning and Reinforcement Learning; Guo Zhang (MIT)*; Yuzhe Yang (MIT); Zhi Xu (MIT); Dina Katabi (Massachusetts Institute of Technology) [external pdf link].
- Automated curriculum generation for Policy Gradients from Demonstrations; Anirudh Srinivasan (Microsoft Research)*; Dzmitry Bahdanau (University of Montreal); Maxime Chevalier-Boisvert (Mila); Yoshua Bengio (Mila) [external pdf link].
- Marginalized State Distribution Entropy Regularization in Policy Optimization; Riashat Islam (MILA, Mcgill University)*; Zafarali Ahmed (MILA, McGill University); Doina Precup (McGill University).
- Emergent Tool Use from Multi-Agent Autocurricula; Bowen Baker (OpenAI)*; Ingmar Kanitscheider (OpenAI); Todor Markov (OpenAI); Yi Wu (UC Berkeley); Glenn Powell (OpenAI); Bob McGrew (OpenAI); Igor Mordatch.
- Playing Dota 2 with Large Scale Deep Reinforcement Learning; Jie Tang (OpenAI)*; Filip Wolski (OpenAI); David Farhi (OpenAI); Greg Brockman (OpenAI); Brooke Chan (OpenAI); Przemysław Dębiak (OpenAI); Christy Dennison (OpenAI); Chris Hesse (OpenAI); Rafal Józefowicz (OpenAI); Shariq Hashme (OpenAI); Quirin Fischer (OpenAI); Scott Gray (OpenAI); Catherine Olsson (OpenAI); Jakub Pachocki (OpenAI); Michael Petrov (OpenAI); Henrique Pondé (OpenAI); Jonathan Raiman (OpenAI); Tim Salimans (OpenAI); Jeremy Schlatter (OpenAI); Szymon Sidor (OpenAI); Susan Zhang (OpenAI).
- Contextual Imagined Goals for Self-Supervised Robotic Learning; Ashvin Nair (UC Berkeley)*; Shikhar Bahl (UC Berkeley); Khazatsky Alexander (UC Berkeley); Vitchyr Pong (UC Berkeley); Glen Berseth (University of California Berkeley); Sergey Levine (UC Berkeley) [external pdf link].
- MERL: Multi-Head Reinforcement Learning; Yannis Flet-Berliac (University of Lille / Inria)*; Philippe Preux (INRIA) [external pdf link].
- Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning; Abhishek Gupta (UC Berkeley)*; Vikash Kumar (Google); Corey Lynch (Google); Sergey Levine (UC Berkeley); Karol Hausman (Google Brain).
- Risk-Averse Domain Adaptation Under Uncertain Dynamics; Jesse Zhang (UC Berkeley)*; Brian Cheung (UC Berkeley); Chelsea Finn (UC Berkeley); Sergey Levine (UC Berkeley); Dinesh Jayaraman (UC Berkeley).
- SEERL: Sample Efficient Ensemble Reinforcement Learning; Rohan Saphal (Indian Institute of Technology Madras)*; Balaraman Ravindran (Indian Institute of Technology, Madras); Dheevatsa Mudigere (Facebook); Sasikanth Avancha (Intel Labs); Bharat Kaul (Intel Labs).
- Entity Abstraction in Visual Model-Based Reinforcement Learning; Rishi Veerapaneni (UC Berkeley)*; John Co-Reyes (UC Berkeley); Michael Chang (University of California, Berkeley); Michael Janner (UC Berkeley); Chelsea Finn (UC Berkeley); Jiajun Wu (Google); Joshua Tenenbaum (MIT); Sergey Levine (UC Berkeley).
- Self-Imitation Learning via Trajectory-Conditioned Policy for Hard-Exploration Tasks; Yijie Guo (University of Michigan)*; Jongwook Choi (University of Michigan); Marcin Moczulski (University of Oxford, Google Brain); Samy Bengio (Google Research, Brain Team); Mohammad Norouzi (Google Brain); Honglak Lee (Google).
- Combining Model-based and Model-free Reinforcement Learning through Evolution; Yunhao Tang (Columbia University)*.
- Reducing Overestimation Bias in Multi-Agent Domains Using Double Centralized Critics; Johannes Ackermann (Technical University of Munich)*; Volker Gabler (Technical University of Munich); Takayuki Osa (Kyushu Institute of Technology); Masashi Sugiyama (RIKEN/The University of Tokyo) [external pdf link].
- LeDeepChef: Deep Reinforcement Learning Agent for Families of Text-Based Games; Leonard Adolphs (ETHZ)*; Thomas Hofmann (ETH Zurich).
- Off-Policy Policy Gradient Algorithms by Constraining the State Distribution Shift; Riashat Islam (MILA, Mcgill University)*; Deepak Sharma (MILA, McGill University); Komal Teru (McGill University).
- If MaxEnt RL is the Answer, What is the Question?; Ben Eysenbach (Carnegie Mellon University)*; Sergey Levine (UC Berkeley) [external pdf link].
- Hallucinative Topological Memory for Visual Robotic Manipulation; Kara Liu (UC Berkeley)*.
- Curiosity-Driven Multi-Criteria Hindsight Experience Replay; John Lanier (UC Irvine)*; Stephen McAleer (Intel AI); Pierre Baldi (UC Irvine).
- Data Efficient Training for Reinforcement Learning with Adaptive Behavior Policy Sharing; Ge Liu (MIT CSAIL)*; Heng-Tze Cheng (Google Research); Rui Wu (Google Research); Jing Wang (Google Research); Jayden Ooi (Google); Lihong Li (Google Brain); Ang Li (DeepMind, Mountain View); Sibon Li (DeepMind, Mountain View); Craig Boutilier (Google Research).
- Hierarchical Foresight: Self-Supervised Learning of Long-Horizon Tasks via Visual Subgoal Generation; Suraj Nair (Stanford University)*; Chelsea Finn (Google Brain) [external pdf link].
- SPECTRA: Sparse Entity-Centric Transitions; Rim Assouel (Université de Montréal)*; Yoshua Bengio (Mila) [external pdf link].
- DRIFT: Deep Reinforcement Learning for Functional Software Testing; Luke Harries (Microsoft); Rebekah Clarke (Microsoft); Timothy Chapman (Microsoft); Swamy Nallamalli (Microsoft); Levent Ozgur (Microsoft); Shuktika Jain (Microsoft); Alex Leung (Microsoft); Steve Lim (Microsoft); Aaron Dietrich (Microsoft); Jose Miguel Hernandez-Lobato (University of Cambridge); Tom Ellis (Microsoft); Cheng Zhang (Microsoft)*; Kamil Ciosek (Microsoft).
- Self-Imitation Learning of Locomotion Movements through Termination Curriculum; Amin Babadi (Aalto University)*; Kourosh Naderi (Aalto University); Perttu Hämäläinen (Aalto University) [external pdf link].
- Confidential Policies: Preventing Imitation Learning through Context-Adversarial Policy Ensembles; Albert Zhan (Berkeley)*; Stas Tiomkin (BAIR, UC Berkeley); Pieter Abbeel (UC Berkeley).
- Fully Bayesian Recurrent Neural Networks for Safe Reinforcement Learning; Matthew Benatan (IBM Research)*; Edward Pyzer-Knapp (IBM Research) [external pdf link].
- Learning Theory of Mind for Deep Reinforcement Learning; Michael Walton (NIWC)*; Andrew Fuchs (NIWC); Theresa Chadwick (NIWC); Doug Lange (Naval Information Warfare Center Pacific).
- ROBEL: Robotics Benchmarks for Learning with Low-Cost Robots; Vikash Kumar (Google)*; Michael Ahn (Google); Abhishek Gupta (UC Berkeley); Sergey Levine (UC Berkeley); Henry Zhu (UC Berkeley); Kristian Hartikainen (UC Berkeley); Hugo Ponte (Self).
- Data-efficient Co-Adaptation of Morphology and Behaviour with Deep Reinforcement Learning; Kevin Sebastian Luck (Arizona State University)*; Heni Ben Amor (Arizona State University); Roberto Calandra (Facebook) [external pdf link].
- Learning Sparse Representations Incrementally in Deep Reinforcement Learning; J. Hernandez-Garcia (University of Alberta)*; Richard Sutton (University of Alberta) [external pdf link].
- Nested-Wasserstein Self-Imitation Learning for Sequence Generation; Ruiyi Zhang (Duke University)*; Changyou Chen (University at Buffalo); Zhe Gan (Microsoft); Zheng Wen (DeepMind); Wenlin Wang (Duke University); Lawrence Carin (Duke University).
- Reinforcement Learning for High-dimensional Continuous Control in Biomechanical Systems: An Intro to ArtiSynth-RL; Amir Abdi (University of British Columbia)* [external pdf link].
- Evolutionary Population Curriculum for Scaling Multi-Agent Reinforcement Learning; Qian Long (CMU); Zihan Zhou (SJTU); Abhinav Gupta (CMU/FAIR); Fei Fang (Carnegie Mellon University); Yi Wu (UC Berkeley); Xiaolong Wang (UC Berkeley)*.
- Benchmarking Batch Deep Reinforcement Learning Algorithms; Scott Fujimoto (McGill University)*; Edoardo Conti (Facebook); Mohammad Ghavamzadeh (Facebook); Joelle Pineau (McGill / Facebook) [external pdf link].
- Blue River Controls: A toolkit for Reinforcement Learning Control Systems on Hardware; Kirill Polzounov (University of Calgary)*; Ramitha Sundar (Blue River Technology); Lee Reden (Blue River Technology).
- Corpus Compression for Deep Reinforcement Learning in Natural Language Environments; Zhiwen Tang (Georgetown University)*; Grace Hui Yang (Georgetown University).
- Adaptive Temperature Tuning for Mellowmax in Deep Reinforcement Learning; Seungchan Kim (Brown University)*; George Konidaris (Brown).
- Task-Relevant Adversarial Imitation Learning; Konrad Żołna (Jagiellonian University)*; Scott Reed (DeepMind); Ziyu Wang (DeepMind); Alexander Novikov (DeepMind); David Budden (DeepMind); Serkan Cabi (DeepMind); Sergio Gómez Colmenarejo (DeepMind); Misha Denil (DeepMind); Nando de Freitas (DeepMind) [external pdf link].
- RoboNet: Large-Scale Multi-Robot Learning; Sudeep Dasari (Carnegie Mellon University)*; Frederik Ebert (UC Berkeley); Stephen Tian (UC Berkeley); Suraj Nair (Stanford University); Bernadette Bucher (University of Pennsylvania); Karl Schmeckpeper (University of Pennsylvania); Siddharth Singh (University of Pennsylvania); Chelsea Finn (Stanford); Sergey Levine (UC Berkeley) [external pdf link].
- Controlling Quantum Dot Devices using Deep Reinforcement Learning; Vu Nguyen (University of Oxford)*; Dominic T Lennon (University of Oxford); Hyungil Moon (University of Oxford); Nina M. van Esbroeck (University of Oxford); Dino Sejdinovic (University of Oxford); Michael A. Osborne (University of Oxford); G. Andrew D. Briggs (University of Oxford); Natalia Ares (University of Oxford).
- Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning; Tianhe Yu (Stanford University)*; Deirdre Quillen (Self); Zhanpeng He (University of Southern California); Ryan Julian (University of Southern California); Karol Hausman (Google Brain); Chelsea Finn (UC Berkeley); Sergey Levine (UC Berkeley).
- Plan2Vec: Unsupervised Representation Learning by Latent Plans; Ge Yang (University of Chicago)*; Amy Zhang (FAIR, McGill); Ari Morcos (Facebook AI Research (FAIR)); Joelle Pineau (McGill / Facebook); Pieter Abbeel (UC Berkeley); Roberto Calandra (Facebook).
- Adversarial Policies: Attacking Deep Reinforcement Learning; Adam Gleave (UC Berkeley)*; Michael Dennis (University of California Berkeley); Cody Wild (University of California Berkeley); Neel Kant (UC Berkeley); Sergey Levine (UC Berkeley); Stuart Russell (UC Berkeley).
- Be a Copycat: Uncharted Rewards by Mimicking Expert Action Sequences; Tharun Medini (Rice University)*; Anshumali Shrivastava (Rice University) [external pdf link].
- Learning To Explore Using Active Neural Mapping; Devendra Singh Chaplot (Carnegie Mellon University)*; Saurabh Gupta (UIUC); Dhiraj Gandhi (Facebook AI Research); Abhinav Gupta (CMU/FAIR); Ruslan Salakhutdinov (Carnegie Mellon University) [external pdf link].
Late-Breaking Papers (Poster)
- Grandmaster Level in StarCraft II using Multi-Agent Reinforcement Learning; Oriol Vinyals (DeepMind), Igor Babuschkin (DeepMind), Wojciech M. Czarnecki (DeepMind), Michaël Mathieu (DeepMind), Andrew Dudzik (DeepMind), Junyoung Chung (DeepMind), David H. Choi (DeepMind), Richard Powell (DeepMind), Timo Ewalds (DeepMind), Petko Georgiev (DeepMind), Junhyuk Oh (DeepMind), Dan Horgan (DeepMind), Manuel Kroiss (DeepMind), Ivo Danihelka (DeepMind), Aja Huang (DeepMind), Laurent Sifre (DeepMind), Trevor Cai (DeepMind), John P. Agapiou (DeepMind), Max Jaderberg (DeepMind), Alexander S. Vezhnevets (DeepMind), Rémi Leblond (DeepMind), Tobias Pohlen (DeepMind), Valentin Dalibard (DeepMind), David Budden (DeepMind), Yury Sulsky (DeepMind), James Molloy (DeepMind), Tom L. Paine (DeepMind), Caglar Gulcehre (DeepMind), Ziyu Wang (DeepMind), Tobias Pfaff (DeepMind), Yuhuai Wu (DeepMind), Roman Ring (DeepMind), Dani Yogatama (DeepMind), Dario Wünsch (DeepMind), Katrina McKinney (DeepMind), Oliver Smith (DeepMind), Tom Schaul (DeepMind), Timothy Lillicrap (DeepMind), Koray Kavukcuoglu (DeepMind), Demis Hassabis (DeepMind), Chris Apps (DeepMind), David Silver (DeepMind)
- Positive-Unlabeled Reward Learning; Danfei Xu (Stanford), Misha Denil (DeepMind)
- Learning to Scaffold the Development of Robotic Manipulation Skills; Lin Shao (Stanford), Toki Migimatsu (Stanford), Jeannette Bohg (Stanford)
- Improving Sample Efficiency in Model-Free Reinforcement Learning from Images; Denis Yarats (New York University, FAIR), Amy Zhang (McGill, MILA, FAIR), Ilya Kostrikov (New York University), Brandon Amos (FAIR), Joelle Pineau (McGill, MILA, FAIR), Rob Fergus (New York University, FAIR)
- Off-Policy Actor-Critic with Shared Experience Replay; Simon Schmitt (DeepMind), Matteo Hessel (DeepMind), Karen Simonyan (DeepMind)
Competition ""Learn to Move: Walk Around" Awards Papers (Poster)
- Efficient and robust reinforcement learning with uncertainty-based value expansion; Bo Zhou (Baidu); Hongsheng Zeng (Baidu); Fan Wang (Baidu); Yunxiang Li (Baidu); Hao Tian (Baidu)
- Distributed Soft Actor-Critic with Multivariate Reward Representation and Knowledge Distillation; Dmitry Akimov (HSE University)
- Sample efficient ensemble learning with Catalyst.RL; Sergey Kolesnikov (Moscow Institute of Physics and Technology); Valentin Khrulkov (Skolkovo Institute of Science and Technology)
Information about Posters
- Posters are taped to the wall with the special tabs provided at the venue.
- Please make your posters 36W x 48H inches or 90 x 122 cm.
- Posters should be on lightweight paper, not laminated.
Program Committee
We would like to thank the following people for their effort in making this year's edition of the Deep RL Workshop a success.
- Pulkit Agrawal
- Maruan Al Shedivat
- Marcin Andrychowicz
- Glen Berseth
- Diana Borsa
- Noam Brown
- Roberto Calandra
- Devendra Singh Chaplot
- Richard Chen
- Ignasi Clavera
- Coline Devin
- Rocky Duan
- Harri Edwards
- Jakob Foerster
- Justin Fu
- Yasuhiro Fujita
- Shixiang Gu
- Arthur Guez
- Xiaoxiao Guo
- Abhishek Gupta
- David Ha
- Tuomas Haarnoja
- Danijar Hafner
- Jean Harb
- Anna Harutyunyan
- Matt Hausknecht
- Karol Hausman
- Rein Houthooft
- Sandy Huang
- Max Jaderberg
- Eric Jang
- Gregory Kahn
- Tejas Kulkarni
- Alex Lee
- Lisa Lee
- Ryan Lowe
- Kendall Lowrey
- Rowan McAllister
- Vlad Mnih
- Nikhil Mishra
- Igor Mordatch
- Ofir Nachum
- Ashvin Nair
- Karthik Narasimhan
- Junhyuk Oh
- Emilio Parisotto
- Deepak Pathak
- Xue Bin Peng
- Vitchyr Pong
- Lerrel Pinto
- Janarthanan Rajendran
- Aravind Rajeswaran
- Sid Reddy
- Tim Salimans
- Pierre Sermanet
- Rohin Shah
- Max Smith
- Bradly Stadie
- Aviv Tamar
- Yuandong Tian
- Josh Tobin
- George Tucker
- Sasha Vezhnevets
- Jane Wang
- Tony Wu
- Marvin Zhang
- Zeyu Zheng
- Shangtong Zhang
- Zhongwen Xu
- Risto Vuorio
- Qi Zhang
- Jongwook Choi
- Huazhe Xu
- Yi Wu
- Markus Wulfmeier
FAQ
Q: Is it OK to submit a paper that will also be submitted to ICLR 2020?
A: Yes.
Q: Is it OK to submit a paper that was accepted into CoRL 2019?
A: Yes.
Q: Is it OK to submit a paper that was rejected from the NeurIPS main conference?
A: Yes.
Q: Will there be official archival proceedings?
A: No.
Q: Should submitted papers be anonymized?
A: Yes! If accepted, we will ask for a de-anonymized version to link on the website, like in previous years.
Q: Wait, what time *precisely* is the deadline?
A: Sept 9, 11:59 PM PST.
Q: What are the dimensions for the poster?
A: 36W x 48H inches or 90 x 122 cm, printed on lightweight paper.