Deep Reinforcement Learning Workshop
NeurIPS 2022
About
In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to solve increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains like robotics, strategy games, and multi-agent interactions. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help interested researchers outside of the field gain a high-level view of the current state of the art and potential directions for future contributions.
For previous editions, please visit NeurIPS 2021, 2020, 2019, 2018, 2017, 2016, 2015.
Attending the Workshop
We will post the live-stream link here. Once it is available, the chat for asking questions, the GatherTown links for poster sessions, and all pre-recorded videos can be accessed through that link. NeurIPS workshop registration is required.
Important Dates and Deadlines
Paper submission deadline: October 3rd 2022
Submission site: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/DeepRL
Call for Papers & formatting instructions: [link]
Call for Opinion Talk Abstracts: [link]
Financial Support Form: [link]
Workshop date: December 9th 2022
Workshop Time: 8:25am - 5:30pm PST
Invited Speakers
Jakob Foerster
University of Oxford
Tobias Gerstenberg
Stanford University
Igor Mordatch
Google Brain
Amy Zhang
Facebook / UC Berkeley
Schedule (December 9th 2022, 8:25am-5:30pm PST)
08:25 - 08:30 Welcome and Introduction
08:30 - 09:00 Invited Talk - Tobias Gerstenberg - A counterfactual simulation model of causal judgments about social agents
09:00 - 09:15 Contributed talk - ESCHER: Eschewing Importance Sampling in Games by Computing a History Value Function to Estimate Regret - Stephen Marcus McAleer, Gabriele Farina, Marc Lanctot, Tuomas Sandholm
09:15 - 09:30 Contributed talk - Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training - Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, Amy Zhang
09:30 - 09:45 Contributed talk - Is Model Ensemble Necessary? Model-based RL via a Single Model with Lipschitz Regularized Value Function - Ruijie Zheng, Xiyao Wang, Huazhe Xu, Furong Huang
09:45 - 10:00 Contributed talk - Offline Q-learning on Diverse Multi-Task Data Both Scales And Generalizes - Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, Sergey Levine [Best Paper Runner Up Award]
10:00 - 10:30 Invited Talk - Jakob Foerster - Opponent-Shaping and Interference in General-Sum Games
10:30 - 11:00 BREAK
11:00 - 11:30 Opinion Contributed Talk - Scott Jordan - Scientific Experiments in Reinforcement Learning
11:30 - 11:45 Contributed talk - Transformers are Sample-Efficient World Models - Vincent Micheli, Eloi Alonso, François Fleuret [Best Paper Award]
11:45 - 12:00 Contributed talk - Scaling Laws for a Multi-Agent Reinforcement Learning Model - Oren Neumann, Claudius Gros
12:00 - 12:30 Opinion Contributed Talk - Natasha Jaques - The need for social learning in RL
12:30 - 13:30 Poster session [Lunch]
13:30 - 14:00 Opinion Contributed Talk - Stephanie Chan - The World is not Uniformly Distributed; Important Implications for Deep RL
14:00 - 14:30 Invited Talk - Amy Zhang - Learning Generalist Agents
14:30 - 15:00 BREAK
15:00 - 15:30 Invited Talk - Igor Mordatch - Connections Between Sequence Modeling and Reinforcement Learning
15:30 - 15:45 Deep RL Implementation Talk - John Schulman - PPO
15:45 - 16:00 Deep RL Implementation Talk - Danijar Hafner - Dreamer
16:00 - 16:15 Deep RL Implementation Talk - Kristian Hartikainen - SAC
16:15 - 16:30 Deep RL Implementation Talk - Aviral Kumar, Ilya Kostrikov - CQL + Real Quadruped Walking from Scratch
16:30 - 17:30 Panel discussion - Stephanie Chan, Jakob Foerster, Tobias Gerstenberg, Scott Jordan, Igor Mordatch, Natasha Jaques
Organizers
Karol Hausman
Google Brain / Stanford
Suraj Nair
Stanford University
University of Alberta
University of Alberta
Risto Vuorio
University of Oxford
University of Alberta
Ted Xiao
Google Brain
Zeyu Zheng
University of Michigan
Qi Zhang
University of South Carolina
Advisory Board
UC Berkeley / Covariant
Stanford / Google
McGill / FAIR
DeepMind
University of Michigan / DeepMind
Accepted Papers
Hao Sun, Taiyi Wang
Hao Sun, Zhenghao Peng, Bo Dai, Dahua Lin, Bolei Zhou
Hao Sun, Ziping Xu, Zhenghao Peng, Meng Fang, Bo Dai, Bolei Zhou
Hao Sun, Ziping Xu, Taiyi Wang, Meng Fang, Bolei Zhou
Sean L Metzger
Melissa Mozifian, Dieter Fox, David Meger, Fabio Ramos, Animesh Garg
Junmo Cho, Donghwan Lee, Young-Gyu Yoon
Biological Neurons vs Deep Reinforcement Learning: Sample efficiency in a simulated game-world
Forough Habibollahi, Moein Khajehnejad, Amitesh Gaurav, Brett Joseph Kagan
Kishor Jothimurugan, Steve Hsu, Osbert Bastani, Rajeev Alur
Jose Antonio Martin H., Oscar Fernández Vicente, Sergio Perez, Anas Belfadil, Cristina Ibanez-Llano, Freddy José Perozo Rondón, Jose Javier Valle, Javier Arechalde Pelaz
Value-based CTDE Methods in Symmetric Two-team Markov Game: from Cooperation to Team Competition
Pascal Leroy, Jonathan Pisane, Damien Ernst
Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective
Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, Russ Salakhutdinov
A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning
Zixiang Chen, Chris Junchi Li, Angela Yuan, Quanquan Gu, Michael Jordan
Xing Zhou, Hao Gao, Xin Xu, Xinglong Zhang, Hongda Jia, Dongzi Wang
Shiva Kanth Sujit, Somjit Nath, Pedro Braga, Samira Ebrahimi Kahou
Dahuin Jung, Hyungyu Lee, Sungroh Yoon
Somjit Nath, Samira Ebrahimi Kahou
Shuo Cheng, Danfei Xu
Minghuan Liu, Zhengbang Zhu, Menghui Zhu, Yuzheng Zhuang, Weinan Zhang, Jianye Hao
Variance Reduction in Off-Policy Deep Reinforcement Learning using Spectral Normalization
Payal Bawa, Rafael Oliveira, Fabio Ramos
Manuel Goulão, Arlindo L. Oliveira
Matthias Gerstgrasser, Tom Danino, Sarah Keren
Yash Jakhotiya, Iman Haque
Toygun Basaklar, Suat Gumussoy, Umit Ogras
Medric Sonwa, Johanna Hansen, Eugene Belilovsky
Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots
Bilal Piot, Zhaohan Daniel Guo, Shantanu Thakoor, Mohammad Gheshlaghi Azar
Mingqi Yuan, Bo Li, Xin Jin, Wenjun Zeng
Chang Rajani, Karol Arndt, David Blanco-Mulero, Kevin Sebastian Luck, Ville Kyrki
Vincent Micheli, Eloi Alonso, François Fleuret
Matthew Macfarlane, Diederik M Roijers, Herke van Hoof
Kenny John Young, Aditya Ramesh, Louis Kirsch, Jürgen Schmidhuber
A Connection between One-Step Regularization and Critic Regularization in Reinforcement Learning
Benjamin Eysenbach, Matthieu Geist, Ruslan Salakhutdinov, Sergey Levine
Michal Nauman, Marek Cygan
Chaitanya Kharyal, Tanmay Kumar Sinha, SaiKrishna Gottipati, Srijita Das, Matthew E. Taylor
Maxwell Goldstein, Noam Brown
Siyang Wu, Tonghan Wang, Xiaoran Wu, Jingfeng Zhang, Yujing Hu, Changjie Fan, Chongjie Zhang
Baturay Saglam, Furkan Burak Mutlu, Doğan Can Çiçek, Suleyman Serdar Kozat
DRL-EPANET: Deep reinforcement learning for optimal control at scale in Water Distribution Systems
Anas Belfadil, David Modesto, Jose Antonio Martin H.
Philemon Schöpf, Sayantan Auddy, Jakob Hollenstein, Antonio Rodriguez-sanchez
Dynamic Collaborative Multi-Agent Reinforcement Learning Communication for Autonomous Drone Reforestation
Philipp Dominic Siedler
Gal Dalal, Assaf Hallak, Shie Mannor, Gal Chechik
Michael Chang, Alyssa Li Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang
Wenyan Yang, Nataliya Strokina, Joni Pajarinen, Joni-kristian Kamarainen
Niko Grupen, Natasha Jaques, Been Kim, Shayegan Omidshafiei
Tae Hyun Cho, Seungyub Han, Heesoo Lee, Kyungjae Lee, Jungwoo Lee
Lunjun Zhang, Bradly C. Stadie
Samuel Kessler, Piotr Miłoś, Jack Parker-Holder, Stephen J. Roberts
Mastane Achab, Reda Alami, Yasser Abdelaziz Dahou Djilali, Kirill Fedyanin, Eric Moulines, Maxim Panov
Alexandre Piché, Rafael Pardinas, David Vazquez, Igor Mordatch, Christopher Pal
Harshit Sikchi, Akanksha Saran, Wonjoon Goo, Scott Niekum
Dolton Milagres Fernandes, Pramod Kaushik, Harsh Shukla, Bapi Raju Surampudi
Samyeul Noh, Hyun Myung
Temporal Disentanglement of Representations for Improved Generalisation in Reinforcement Learning
Mhairi Dunion, Trevor McInroe, Kevin Sebastian Luck, Josiah P. Hanna, Stefano V Albrecht
Learning a Domain-Agnostic Policy through Adversarial Representation Matching for Cross-Domain Policy Transfer
Hayato Watahiki, Ryo Iwase, Ryosuke Unno, Yoshimasa Tsuruoka
Memory-Efficient Reinforcement Learning with Priority based on Surprise and On-policyness
Ryosuke Unno, Yoshimasa Tsuruoka
Minghuan Liu, Tairan He, Weinan Zhang, Shuicheng Yan, Zhongwen Xu
Jurgis Pašukonis, Timothy P Lillicrap, Danijar Hafner
Onno Eberhard, Jakob Hollenstein, Cristina Pinneri, Georg Martius
Mikayel Samvelyan, Akbir Khan, Michael D Dennis, Minqi Jiang, Jack Parker-Holder, Jakob Nicolaus Foerster, Roberta Raileanu, Tim Rocktäschel
CASA: Bridging the Gap between Policy Improvement and Policy Evaluation with Conflict Averse Policy Iteration
Changnan Xiao, Haosen Shi, Jiajun Fan, Shihong Deng, Haiyan Yin
Bogdan Mazoure, Benjamin Eysenbach, Ofir Nachum, Jonathan Tompson
Jan Robine, Marc Höftmann, Tobias Uelwer, Stefan Harmeling
Victoria Magdalena Dax, Jiachen Li, Kevin Leahy, Mykel Kochenderfer
Oren Neumann, Claudius Gros
Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, Amy Zhang
Patrick Haluptzok, Matthew Bowers, Adam Tauman Kalai
Yiding Jiang, J Zico Kolter, Roberta Raileanu
Jianxiong Li, Xianyuan Zhan, Haoran Xu, Xiangyu Zhu, Jingjing Liu, Ya-Qin Zhang
Is Model Ensemble Necessary? Model-based RL via a Single Model with Lipschitz Regularized Value Function
Ruijie Zheng, Xiyao Wang, Huazhe Xu, Furong Huang
Hanmo Chen, Stone Tao, Jiaxin Chen, Weihan Shen, Xihui Li, Sikai Cheng, Xiaolong Zhu, Xiu Li
Zichen Liu, Siyi Li, Wee Sun Lee, Shuicheng Yan, Zhongwen Xu
Moo Jin Kim, Jiajun Wu, Chelsea Finn
Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt, Alexandre Lacoste, Sai Rajeswar
Bryan Lim, Manon Flageat, Antoine Cully
Zhendong Wang, Jonathan J Hunt, Mingyuan Zhou
Adrien Ali Taiga, Rishabh Agarwal, Jesse Farebrother, Aaron Courville, Marc G Bellemare
Edoardo Cetin, Benjamin Paul Chamberlain, Michael M. Bronstein, Jonathan J Hunt
Remo Sasso, Matthia Sabatelli, Marco A. Wiering
Dhruv Shah, Arjun Bhorkar, Hrishit Leen, Ilya Kostrikov, Nicholas Rhinehart, Sergey Levine
Dhruv Shah, Ajay Sridhar, Arjun Bhorkar, Noriaki Hirose, Sergey Levine
Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning
Anton Bakhtin, David J Wu, Adam Lerer, Jonathan Gray, Athul Paul Jacob, Gabriele Farina, Alexander H Miller, Noam Brown
Seth Karten, Mycal Tucker, Siva Kailas, Katia P. Sycara
Jesse Zhang, Karl Pertsch, Jiahui Zhang, Taewook Nam, Sung Ju Hwang, Xiang Ren, Joseph J Lim
Yifan Xu, Nicklas Hansen, Zirui Wang, Yung-Chieh Chan, Hao Su, Zhuowen Tu
Chris Lu, Timon Willi, Alistair Letcher, Jakob Nicolaus Foerster
Pu Hua, Yubei Chen, Huazhe Xu
Sateesh Kumar, Jonathan Zamora, Nicklas Hansen, Rishabh Jangir, Xiaolong Wang
Linfeng Zhao, Huazhe Xu, Lawson L.S. Wong
ESCHER: Eschewing Importance Sampling in Games by Computing a History Value Function to Estimate Regret
Stephen Marcus McAleer, Gabriele Farina, Marc Lanctot, Tuomas Sandholm
Nicklas Hansen, Yixin Lin, Hao Su, Xiaolong Wang, Vikash Kumar, Aravind Rajeswaran
Marc Höftmann, Jan Robine, Stefan Harmeling
Michael Laskin, Luyu Wang, Junhyuk Oh, Emilio Parisotto, Stephen Spencer, Richie Steigerwald, DJ Strouse, Steven Stenberg Hansen, Angelos Filos, Ethan Brooks, Maxime Gazeau, Himanshu Sahni, Satinder Singh, Volodymyr Mnih
Trevor McInroe, Lukas Schäfer, Stefano V Albrecht
Geraud Nangue Tasse, Devon Jarvis, Steven James, Benjamin Rosman
Jean-Baptiste Gaya, Thang Doan, Lucas Caccia, Laure Soulier, Ludovic Denoyer, Roberta Raileanu
Matthew Chang, Saurabh Gupta
Yanjie Ze, Nicklas Hansen, Yinbo Chen, Mohit Jain, Xiaolong Wang
Jiayuan Gu, Devendra Singh Chaplot, Hao Su, Jitendra Malik
A study of natural robustness of deep reinforcement learning algorithms towards adversarial perturbations
Qisai Liu, Xian Yeow Lee, Soumik Sarkar
Kyle Beltran Hatch, Sarthak J Shetty, Benjamin Eysenbach, Tianhe Yu, Rafael Rafailov, Ruslan Salakhutdinov, Sergey Levine, Chelsea Finn
Grace Zhang, Ayush Jain, Injune Hwang, Shao-Hua Sun, Joseph J Lim
Zhiao Huang, Litian Liang, Zhan Ling, Xuanlin Li, Chuang Gan, Hao Su
Tony Tong Wang, Adam Gleave, Nora Belrose, Tom Tseng, Michael D Dennis, Yawen Duan, Viktor Pogrebniak, Joseph Miller, Sergey Levine, Stuart Russell
Pierluca D'Oro, Max Schwarzer, Evgenii Nikishin, Pierre-Luc Bacon, Marc G Bellemare, Aaron Courville
Stone Tao, Xiaochen Li, Tongzhou Mu, Zhiao Huang, Yuzhe Qin, Hao Su
Off-policy Reinforcement Learning with Optimistic Exploration and Distribution Correction
Jiachen Li, Shuo Cheng, Zhenyu Liao, Huayan Wang, William Yang Wang, Qinxun Bai
Ramnath Kumar, Dheeraj Mysore Nagaraj
Ramnath Kumar, Tristan Deleu, Yoshua Bengio
Allan Zhou, Vikash Kumar, Chelsea Finn, Aravind Rajeswaran
SEM2: Enhance Sample Efficiency and Robustness of End-to-end Urban Autonomous Driving via Semantic Masked World Model
Zeyu Gao, Yao Mu, Ruoyan Shen, Chen Chen, Yangang Ren, Jianyu Chen, Shengbo Eben Li, Ping Luo, Yanfeng Lu
Max Sobol Mark, Ali Ghadirzadeh, Xi Chen, Chelsea Finn
Qinsheng Zhang, Arjun Krishna, Sehoon Ha, Yongxin Chen
Stefan Sylvius Wagner, Peter Arndt, Jan Robine, Stefan Harmeling
Michał Zawalski, Michał Tyrolski, Konrad Czechowski, Damian Stachura, Piotr Piękos, Tomasz Odrzygóźdź, Yuhuai Wu, Łukasz Kuciński, Piotr Miłoś
Thomas Avé, Kevin Mets, Tom De Schepper, Steven Latre
Pengyi Li, Hongyao Tang, Jianye Hao, Yan Zheng, Xian Fu, Zhaopeng Meng
EUCLID: Towards Efficient Unsupervised Reinforcement Learning with Multi-choice Dynamics Model
Yifu Yuan, Jianye Hao, Fei Ni, Yao Mu, Yan Zheng, Yujing Hu, Jinyi Liu, Yingfeng Chen, Changjie Fan
Tim Pearce, Tabish Rashid, Anssi Kanervisto, David Bignell, Mingfei Sun, Raluca Georgescu, Sergio Valcarcel Macua, Shan Zheng Tan, Ida Momennejad, Katja Hofmann, Sam Devlin
Chang Yang, Ruiyu Wang, Xinrun Wang, Zhen Wang
Fabian Paischer, Thomas Adler, Andreas Radler, Markus Hofmarcher, Sepp Hochreiter
Rong-Jun Qin, Feng Chen, Tonghan Wang, Lei Yuan, Xiaoran Wu, Yipeng Kang, Zongzhang Zhang, Chongjie Zhang, Yang Yu
Domain Invariant Q-Learning for model-free robust continuous control under visual distractions
Tom Dupuis, Jaonary Rabarisoa, Quoc Cuong PHAM, David Filliat
Vaibhav Saxena, Jimmy Ba, Danijar Hafner
Louis Bagot, Kevin Mets, Tom De Schepper, Steven Latre
Nathan Grinsztajn, Toby Johnstone, Johan Ferret, Philippe Preux
Rose E Wang, Jesse Mu, Dilip Arumugam, Natasha Jaques, Noah Goodman
Yihao Feng, Shentao Yang, Shujian Zhang, Jianguo Zhang, Caiming Xiong, Mingyuan Zhou, Huan Wang
Variance Double-Down: The Small Batch Size Anomaly in Multistep Deep Reinforcement Learning
Johan Samir Obando Ceron, Marc G Bellemare, Pablo Samuel Castro
Anikait Singh, Aviral Kumar, Quan Vuong, Yevgen Chebotar, Sergey Levine
Anikait Singh, Aviral Kumar, Frederik Ebert, Yanlai Yang, Chelsea Finn, Sergey Levine
Qiyang Li, Aviral Kumar, Ilya Kostrikov, Sergey Levine
Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, Sergey Levine
Fengdi Che, Xiru Zhu, Doina Precup, David Meger, Gregory Dudek
Reza Kakooee, Benjamin Dillenburger
Keiran Paster, Silviu Pitis, Sheila A. McIlraith, Jimmy Ba
Jesse Farebrother, Joshua Greaves, Rishabh Agarwal, Charline Le Lan, Ross Goroshin, Pablo Samuel Castro, Marc G Bellemare
Hengyuan Hu, David J Wu, Adam Lerer, Jakob Nicolaus Foerster, Noam Brown
Eddy Hudson, Ishan Durugkar, Garrett Warnell, Peter Stone
Risto Vuorio, Pim De Haan, Johann Brehmer, Hanno Ackermann, Daniel Dijkman, Taco Cohen
Mikael Henaff, Minqi Jiang, Roberta Raileanu
Aggressive Q-Learning with Ensembles: Achieving Both High Sample Efficiency and High Asymptotic Performance
Yanqiu Wu, Xinyue Chen, Che Wang, Yiming Zhang, Keith W. Ross
Joey Hong, Aviral Kumar, Sergey Levine
Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Harm van Seijen, Sarath Chandar
Samuel Sokota, Ryan D'Orazio, J Zico Kolter, Nicolas Loizou, Marc Lanctot, Ioannis Mitliagkas, Noam Brown, Christian Kroer
Lauro Langosco, David Krueger, Adam Gleave
John Banister Lanier, Stephen Marcus McAleer, Pierre Baldi, Roy Fox
Gaoyue Zhou, Liyiming Ke, Siddhartha Srinivasa, Abhinav Gupta, Aravind Rajeswaran, Vikash Kumar
Dirk Eilers, Felippe Schmoeller Roza, Karsten Roscher
Josiah D Coad, James Ault, Jeff Hykin, Guni Sharon
Gaoyue Zhou, Victoria Dean, Mohan Kumar Srirama, Aravind Rajeswaran, Jyothish Pari, Kyle Beltran Hatch, Aryan Jain, Tianhe Yu, Pieter Abbeel, Lerrel Pinto, Chelsea Finn, Abhinav Gupta
Daniel Jarrett, Corentin Tallec, Florent Altché, Thomas Mesnard, Remi Munos, Michal Valko
Kyle Hsu, Tyler Ga Wei Lum, Ruohan Gao, Shixiang Shane Gu, Jiajun Wu, Chelsea Finn
Zhixuan Lin, Pierluca D'Oro, Evgenii Nikishin, Aaron Courville
Joshua P Zitovsky, Daniel de Marchi, Rishabh Agarwal, Michael Rene Kosorok
Haoyang Xu, Jimmy Ba, Silviu Pitis, Harris Chan
Lev E McKinney, Yawen Duan, David Krueger, Adam Gleave
Yecheng Jason Ma, Kausik Sivakumar, Osbert Bastani, Dinesh Jayaraman
Bryon Tjanaka, Matthew Christopher Fontaine, Aniruddha Kalkar, Stefanos Nikolaidis
Yijie Guo, Yao Fu, Run Peng, Honglak Lee
Jiamin He, Yi Wan, A. Rupam Mahmood
Abdus Salam Azad, Izzeddin Gur, Aleksandra Faust, Pieter Abbeel, Ion Stoica
Yuzhe Qin, Binghao Huang, Zhao-Heng Yin, Hao Su, Xiaolong Wang
Rahul Siripurapu, Vihang Prakash Patil, Kajetan Schweighofer, Marius-Constantin Dinu, Thomas Schmied, Luis Eduardo Ferro Diez, Markus Holzleitner, Hamid Eghbal-zadeh, Michael K Kopp, Sepp Hochreiter
Towards A Unified Policy Abstraction Theory and Representation Learning Approach in Markov Decision Processes
Min Zhang, Hongyao Tang, Jianye Hao, Yan Zheng
Andrew C Li, Zizhao Chen, Pashootan Vaezipoor, Toryn Q. Klassen, Rodrigo Toro Icarte, Sheila A. McIlraith
Adithya Ramesh, Balaraman Ravindran
Analyzing the Sensitivity to Policy-Value Decoupling in Deep Reinforcement Learning Generalization
Nasik Muhammad Nafi, Raja Farrukh Ali, William Hsu
Raja Farrukh Ali, Nasik Muhammad Nafi, Kevin Duong, William Hsu
Samantha Johnson, Michael A Buice, Koosha Khalvati
Cristina Pinneri, Georg Martius, Andreas Krause
Sudeep Dasari, Abhinav Gupta, Vikash Kumar
Wilka Torrico Carvalho, Angelos Filos, Richard Lewis, Honglak Lee, Satinder Singh
Program Committee
We would like to thank the following people for their efforts in making this year's edition of the Deep RL Workshop a success!
Ademi Adeniji
Aditya Modi
Alex Lewandowski
Alexander Khazatsky
Annie S Chen
Annie Xie
Anuj Mahajan
Arash Tavakoli
Artem Molchanov
Avinash Ummadisingu
Aviv Tamar
Brandon Amos
Charline Le Lan
Chen Tessler
Chris Lu
Clément Bonnet
David Janz
Derek Hsu
Dhruv Shah
Dilip Arumugam
Dingyang Chen
Eric Heiden
Ethan Brooks
Evan Zheran Liu
Fei Deng
Frits de Nijs
Gautham Vasan
Geraud Nangue Tasse
Glen Berseth
Hager Radi
Hao Liu
Haozhu Wang
Harris Chan
Haseeb Shah
Ikechukwu Uchendu
Ilya Kostrikov
Jacky Liang
Jacob Beck
Jesse Farebrother
Jianhai Su
Jonathan Wilder Lavington
Jongwook Choi
Jun Luo
Junhyuk Oh
Kaylee Burns
Keerthana Gopalakrishnan
Khimya Khetarpal
Kimin Lee
Krishnan Srinivasan
Kristian Hartikainen
Kuan Fang
Kyle Hsu
Lisa Lee
Louis Kirsch
Marcin Andrychowicz
Matthew Chang
Matthew Thomas Jackson
Max Smith
Michael Janner
Michael Przystupa
Nasik Muhammad Nafi
Nelson Vadori
Ozsel Kilinc
Richard Chen
Ruihan Yang
Shangtong Zhang
Siddharth Karamcheti
Siddharth Reddy
Sungryull Sohn
Taylor W. Killian
Thomas Degris
Timon Willi
Tom Zahavy
Wilka Torrico Carvalho
Yanchao Sun
Yasuhiro Fujita
Yevgen Chebotar
Yijie Guo
Zhang-Wei Hong
Zheng Xiong
Zhongwen Xu
Ziyang Tang
Quan Vuong