All accepted papers can be viewed together on OpenReview
Learning Rate-Free Reinforcement Learning: A Case for Model Selection with Non-Stationary Objectives
Aida Afshar, Aldo Pacchiano
Sequential Decision-Making for Inline Text Autocomplete
Rohan Chitnis, Shentao Yang, Alborz Geramifard
A Study of the Weighted Multi-step Loss Impact on the Predictive Error and the Return in MBRL
Abdelhakim Benechehab, Albert Thomas, Giuseppe Paolo, Maurizio Filippone, Balázs Kégl
Context Aware Policy Adaptation: Towards Robust Safe Reinforcement Learning
Phillip Odom, Eric Squires, Zsolt Kira
The Interpretability of Codebooks in Model-Based Reinforcement Learning is Limited
Kenneth Eaton, Jonathan C Balloch, Julia Kim, Mark Riedl
Recurrent Policies Are Not Enough for Continual Reinforcement Learning
Nathan Samuel de Lara, Veronica Chelu, Doina Precup
Fast TRAC: A Parameter-Free Optimizer for Lifelong Reinforcement Learning
Aneesh Muppidi, Zhiyu Zhang, Heng Yang
Avoiding Value Estimation Error in Off-Policy Deep Reinforcement Learning
Jared Markowitz, Jesse Silverberg, Gary Lynn Collins
Can we hop in general? A discussion of benchmark selection and design using the Hopper environment
Claas A Voelcker, Marcel Hussing, Eric Eaton