Accepted Papers

Below are the camera-ready submissions presented at the workshop at ICML 2019.

  • [Best Paper] Benchmarking Bonus-Based Exploration Methods on the Arcade Learning Environment. Adrien Ali Taiga (MILA, Université de Montréal)*; Marc G. Bellemare (Google Brain); Aaron Courville (MILA, Université de Montréal); Liam Fedus (Google); Marlos C. Machado (Google Brain) [pdf]
  • [Best Paper] Simple Regret Minimization for Contextual Bandits. Aniket Anand Deshmukh (Microsoft)*; Srinagesh Sharma (University of Michigan); James Cutler (University of Michigan); Mark Moldwin (University of Michigan); Clayton Scott (University of Michigan) [pdf]
  • [Spotlight] Overcoming Exploration With Play. Corey Lynch (Google)* [pdf]
  • [Spotlight] Optimistic Exploration with Pessimistic Initialisation. Tabish Rashid (University of Oxford)*; Bei Peng (University of Oxford); Wendelin Boehmer (University of Oxford); Shimon Whiteson (University of Oxford) [pdf]
  • [Spotlight] Scheduled Intrinsic Drive: A Hierarchical Take on Intrinsically Motivated Exploration. Jingwei Zhang (Autonomous Intelligent Systems, University of Freiburg)*; Niklas Wetzel (University of Freiburg); Nicolai Dorka (University of Freiburg); Joschka Boedecker (University of Freiburg); Wolfram Burgard (University of Freiburg) [pdf]
  • [Spotlight] Generative Exploration and Exploitation. Jiechuan Jiang (Peking University); Zongqing Lu (Peking University)* [pdf]
  • [Spotlight] The Journey is the Reward: Unsupervised Learning of Influential Trajectories. Jonathan Binas (Mila, Montreal)*; Sherjil Ozair (Mila); Yoshua Bengio (Mila) [pdf]
  • Curious iLQR: Resolving Uncertainty in Model-based RL. Sarah M.E. Bechtle (Max Planck Institute for Intelligent Systems)* [pdf]
  • An Empirical and Conceptual Categorization of Value-based Exploration Methods. Niko Yasui (University of Alberta)*; Cameron Linke (University of Alberta); Sungsu Lim (University of Alberta); Adam White (DeepMind); Martha White (University of Alberta) [pdf]
  • Skew-Fit: State-Covering Self-Supervised Reinforcement Learning. Vitchyr H Pong (UC Berkeley)*; Murtaza Dalal (UC Berkeley); Steven Lin (UC Berkeley); Ashvin V Nair (UC Berkeley); Shikhar Bahl (UC Berkeley); Sergey Levine (UC Berkeley) [pdf]
  • Optimistic Proximal Policy Optimization. Takahisa Imagawa (AIST)* [pdf]
  • Exploration with Unreliable Intrinsic Reward in Multi-Agent Reinforcement Learning. Wendelin Boehmer (University of Oxford)*; Tabish Rashid (University of Oxford); Shimon Whiteson (University of Oxford) [pdf]
  • Parameterized Exploration. Jesse Clifton (North Carolina State University)*; Lili Wu (North Carolina State University); Eric Laber (North Carolina State University) [pdf]
  • Improved Tree Search for Code Synthesis. Aran Carmon (Tel Aviv University); Lior Wolf (Tel Aviv University, Israel)* [pdf]
  • Efficient Exploration in Side-Scrolling Video Games with Trajectory Replay. Shi-Chun Tsai (National Chiao Tung University)*; I-Huan Chiang (National Chiao Tung University) [pdf]
  • Hypothesis Driven Exploration for Deep Reinforcement Learning. Caleb C Chuck (University of Texas at Austin)*; Scott Niekum (University of Texas at Austin); Supawit Chockchowwat (University of Texas at Austin) [pdf]
  • Learning latent state representation for speeding up exploration. Giulia Vezzani (Istituto Italiano di Tecnologia)*; Abhishek Gupta (UC Berkeley); Lorenzo Natale (Italian Institute of Technology); Pieter Abbeel (UC Berkeley) [pdf]
  • MuleX: Disentangling Exploration and Exploitation in Deep Reinforcement Learning. Lucas Beyer (Google Brain)*; Damien Vincent (Google Brain); Olivier Teboul (Google Brain); Matthieu Geist (Google Brain); Olivier Pietquin (Google Research - Brain Team) [pdf]
  • Epistemic Risk-Sensitive Reinforcement Learning. Hannes Eriksson (Chalmers University of Technology)*; Christos Dimitrakakis (Chalmers University of Technology) [pdf]
  • Near-optimal Optimistic Reinforcement Learning using Empirical Bernstein Inequalities. Aristide Charles Yedia Tossou (Chalmers University of Technology)*; Debabrota Basu (National University of Singapore); Christos Dimitrakakis (Chalmers University of Technology) [pdf]