Accepted Papers

Contributed Talks

CT 1: When to Trust Your Model: Model-Based Policy Optimization

Michael Janner (UC Berkeley)*; Justin Fu (UC Berkeley); Marvin Zhang (UC Berkeley); Sergey Levine (UC Berkeley)

CT 2: Model Based Planning with Energy Based Models

Yilun Du (MIT)*; Toru Lin (MIT); Igor Mordatch (OpenAI)

CT 3: A Perspective on Objects and Systematic Generalization in Model-Based RL

Sjoerd van Steenkiste (IDSIA)*; Klaus Greff (IDSIA); Jürgen Schmidhuber (IDSIA - Lugano)

CT 4: An inference perspective on model-based reinforcement learning

Joe Marino (Caltech)*; Yisong Yue (Caltech)

CT 5: Reducing Noise in GAN Training with Variance Reduced Extragradient

Tatjana Chavdarova (Mila & Idiap & EPFL)*; Gauthier Gidel (Mila, Université de Montréal, Element AI); Francois Fleuret (Idiap Research Institute); Simon Lacoste-Julien (Mila, Université de Montréal)

Spotlights

Session 1: 10:00-10:30

Bayesian Inference to Identify the Cause of Human Errors

Ramya Ramakrishnan (Massachusetts Institute of Technology)*; Vaibhav V Unhelkar (MIT); Ece Kamar (Microsoft Research); Julie A. Shah (MIT)

COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration

Nick Watters (DeepMind)*; Christopher Burgess (DeepMind); Loic Matthey (DeepMind); Alexander Lerchner (DeepMind); Matko Bosnjak (DeepMind)

A Top-Down Bottom-Up Approach to Learning Hierarchical Physics Models for Manipulation

Nima Fazeli (MIT)*; Alberto Rodriguez (MIT)

Discovering, Predicting, and Planning with Objects

John D Co-Reyes (UC Berkeley); Rishi Veerapaneni (UC Berkeley); Michael Chang (University of California, Berkeley)*; Michael Janner (UC Berkeley); Chelsea Finn (UC Berkeley); Jiajun Wu (MIT); Joshua Tenenbaum (MIT); Sergey Levine (UC Berkeley)

FineGAN: Unsupervised Hierarchical Disentanglement for Fine-Grained Object Generation and Discovery

Krishna Kumar Singh (University of California Davis)*; Utkarsh Ojha (University of California, Davis); Yong Jae Lee (University of California, Davis)

Generalized Hidden Parameter MDPs for Model-based Meta-Reinforcement Learning

Christian Perez (Uber AI Labs); Felipe Petroski Such (Uber AI Labs); Theofanis Karaletsos (Uber AI Labs)*

HEDGE: Hierarchical Event-Driven Generation

Karl Pertsch (University of Southern California)*; Oleh Rybkin (University of Pennsylvania); Frederik D Ebert (UC Berkeley); Dinesh Jayaraman (UC Berkeley); Chelsea Finn (UC Berkeley); Sergey Levine (UC Berkeley)

Improved Conditional VRNNs for Video Prediction

Lluis Castrejon (Mila - Université de Montréal)*; Nicolas Ballas (Facebook FAIR); Aaron Courville (MILA, Université de Montréal)

Improvisation through Physical Understanding: Using Novel Objects as Tools with Visual Foresight

Annie Xie (UC Berkeley)*; Frederik D Ebert (UC Berkeley); Sergey Levine (UC Berkeley); Chelsea Finn (UC Berkeley)

Learning Feedback Linearization by Model-Free Reinforcement Learning

Tyler Westenbroek (UC Berkeley EECS)*; David Fridovich-Keil (UC Berkeley EECS); Eric Mazumdar (UC Berkeley EECS); Claire Tomlin (UC Berkeley); Shankar Sastry (EECS University of California, Berkeley)

Learning models for mode-based planning

Joao Loula (MIT)*; Tom Silver (MIT); Kelsey Allen (MIT); Joshua Tenenbaum (MIT)

Deep Knowledge Based Agent

Ali Davody (Romanian Institute of Science)

Session 2: 14:30-15:00

Manipulation by Feel: Touch-Based Control with Deep Predictive Models

Frederik D Ebert (UC Berkeley)*; Stephen Tian (UC Berkeley); Chelsea Finn (UC Berkeley); Mayur Mudigonda (UC Berkeley); Dinesh Jayaraman (UC Berkeley); Sergey Levine (UC Berkeley)

Model-based Policy Gradients with Entropy Exploration through Sampling

Samuel Stanton (Cornell University)*; Ke Alexander Wang (Cornell University); Andrew Gordon Wilson (Cornell University)

Model-Based Reinforcement Learning for Atari

Piotr J Kozakowski (University of Warsaw)*; Lukasz Kaiser (Google); Mohammad Babaeizadeh (University of Illinois at Urbana-Champaign); Piotr Miłoś (University of Warsaw); Błażej B Osiński (deepsense.ai); Roy Campbell (University of Illinois at Urbana-Champaign); Konrad Czechowski (University of Warsaw); Dumitru Erhan (Google Brain); Chelsea Finn (Google Brain); Sergey Levine (Google); Ryan Sepassi (Google Brain); George Tucker (Google Brain); Henryk Michalewski (University of Warsaw)

Nested Reasoning About Autonomous Agents Using Probabilistic Programs

Iris R Seaman (Northeastern University)*; Jan-Willem van de Meent (Northeastern); David Wingate (Brigham Young University)

Learning to Predict Without Looking Ahead: World Models Without Forward Prediction

Daniel Freeman (Google Brain)*; Luke Metz (Google Brain); David Ha (Google)

Physics-as-Inverse-Graphics: Joint Unsupervised Learning of Objects and Physics from Video

Miguel Jaques (University of Edinburgh)*; Timothy Hospedales (Edinburgh University)

Planning to Explore Visual Environments without Rewards

Danijar Hafner (Google)*; Jimmy Ba (University of Toronto); Mohammad Norouzi (Google Brain); Timothy Lillicrap (DeepMind)

PRECOG: PREdictions Conditioned On Goals in Visual Multi-Agent Scenarios

Nicholas Rhinehart (CMU); Rowan McAllister (UC Berkeley); Kris Kitani (CMU); Sergey Levine (UC Berkeley)

Regularizing Trajectory Optimization with Denoising Autoencoders

Rinu Boney (Aalto University)*; Norman Di Palo (Sapienza University of Rome); Mathias Berglund (The Curious AI Company); Alexander Ilin (Aalto University); Juho Kannala (Aalto University, Finland); Antti Rasmus (The Curious AI Company); Harri Valpola (The Curious AI Company)

Towards Jumpy Planning

Akilesh B (Mila)*; Suriya Singh (MILA, École Polytechnique de Montréal); Anirudh Goyal (University of Montreal); Alexander Neitz (Max Planck Institute for Intelligent Systems); Aaron Courville (MILA, Université de Montréal)

Variational Temporal Abstraction

Taesup Kim (Université de Montréal)*; Sungjin Ahn (Rutgers University); Yoshua Bengio (Université de Montréal)

Visual Planning with Semi-Supervised Stochastic Action Representations

Karl Schmeckpeper (University of Pennsylvania)*; David K Han (Army Research Laboratory); Kostas Daniilidis (University of Pennsylvania); Oleh Rybkin (University of Pennsylvania)

World Programs for Model-Based Learning and Planning in Compositional State and Action Spaces

Marwin Segler (Benevolent AI)

Online Learning and Planning in Partially Observable Domains without Prior Knowledge

Yunlong Liu (Xiamen University); Jianyang Zheng (Xiamen University)