Learning Disentangled Representations:
from Perception to Control
NIPS 2017 Workshop
An important facet of human experience is our ability to break down what we observe and interact with along characteristic lines. Visual scenes consist of separate objects, each with its own pose and identity within its category. In natural language, the syntax and semantics of a sentence can often be separated from one another. In planning and cognition, plans can be broken down into immediate and long-term goals. Inspired by this, much research in deep representation learning has gone into finding disentangled factors of variation. However, this research often lacks a clear definition of what disentangling is, and has little connection to work in other branches of machine learning, neuroscience, or cognitive science. In this workshop we intend to bring a wide swathe of scientists studying disentangled representations under one roof, to work towards a unified view of the problem of disentangling.
The workshop will address these issues through three focal questions:

What is disentangling? Are disentangled representations simply statistically independent representations, or is there something more? How does disentangling relate to interpretability? Can we define what it means to separate style and content, or is human judgement the final arbiter? Are disentangled representations the same as equivariant representations?

How can disentangled representations be discovered? What is the current state of the art in learning disentangled representations? What are the cognitive and neural underpinnings of disentangled representations in animals and humans? Most work in disentangling has focused on perception, but we will encourage dialogue with researchers in natural language processing and reinforcement learning, as well as with neuroscientists and cognitive scientists.

Why do we care about disentangling? What downstream tasks can benefit from using disentangled representations? Does the downstream task determine which disentanglement is relevant to learn? What does disentangling buy us in terms of improved prediction or behaviour in intelligent agents?
This workshop is co-located with NIPS 2017 and will take place on Saturday, 9 December 2017 in Room 203 of the Long Beach Convention Center.
8:30 - 9:00 Set up Posters & Welcome: Josh Tenenbaum
9:00 - 9:30 Stefano Soatto - Emergence of Invariance and Disentangling in Deep Representations
9:30 - 10:00 Irina Higgins - Unsupervised Disentangling or How to Transfer Skills and Imagine Things
10:00 - 10:30 Finale Doshi-Velez - Counterfactually-Faithful Explanation: An Application for Disentangled Representations
10:30 - 11:00 Poster Session & Break
11:00 - 11:30 Doris Tsao - The Neural Code for Visual Objects
11:30 - 12:15 Poster Spotlights (4 minutes each):
- Chris Burgess - Understanding Disentangling in beta-VAE
- Abhishek Kumar - Variational Inference of Disentangled Latents from Unlabeled Observations
- Sergey Tulyakov - On Disentangling Motion and Content for Video Generation
- Valentin Thomas - Disentangling the independently controllable factors of variation by interacting with the world
- Charlie Nash - The Multi-Entity Variational Autoencoder
- Giambattista Parascandolo - Learning Independent Causal Mechanisms
- Cian Eastwood - A Framework for the Quantitative Evaluation of Disentangled Representations
- Hyunjik Kim - Disentangling by Factorising
12:15 - 14:00 Lunch Break
14:00 - 14:30 Doina Precup - Learning independently controllable features for temporal abstraction
14:30 - 15:00 Pushmeet Kohli - Exploring the different paths to achieving disentangled representations
15:00 - 15:30 Poster Session & Break
15:30 - 16:00 Yoshua Bengio - Priors to help automatically discover and disentangle explanatory factors
16:00 - 16:30 Ahmed Elgammal - Generalized Separation of Style and Content on Manifolds: The role of Homeomorphism
16:30 - 17:00 Final Poster Break
17:00 - 18:00 Panel discussion
Accepted Papers
- Learning 6-DOF Grasping Interaction via Deep 3D Geometry-aware Representations. Xinchen Yan, Jasmine Hsu, Mohi Khansari, Yunfei Bai, Arkanath Pathak, Abhinav Gupta, James Davidson and Honglak Lee.
- Semantically Decomposing the Latent Spaces of Generative Adversarial Networks. Chris Donahue, Zachary C. Lipton, Akshay Balasubramani and Julian McAuley.
- Disentanglement by Penalizing Correlation. Mikael Kågebäck and Olof Mogren.
- Disentangled Representations for Manipulation of Sentiment in Text. Maria Larsson, Amanda Nilsson and Mikael Kågebäck.
- Disentangling by Factorising. Hyunjik Kim and Andriy Mnih.
- Quantifying the Effects of Enforcing Disentanglement on Variational Autoencoders. Momchil Peychev, Petar Velickovic and Pietro Liò.
- Adversarially Regularized Autoencoders for Unaligned Text Style-Transfer. Junbo "Jake" Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush and Yann LeCun.
- A Framework for the Quantitative Evaluation of Disentangled Representations. Cian Eastwood and Christopher K. I. Williams.
- Improved Neural Text Attribute Transfer with Non-parallel Data. Igor Melnyk, Cicero Nogueira dos Santos, Kahini Wadhawan and Inkit Padhi.
- Disentangling Dynamics and Content for Control and Planning. Ershad Banijamali, Ahmad Khajenezhad, Ali Ghodsi and Mohammad Ghavamzadeh.
- Learning Independent Causal Mechanisms. Giambattista Parascandolo, Mateo Rojas-Carulla, Niki Kilbertus and Bernhard Schölkopf.
- The Multi-Entity Variational Autoencoder. Charlie Nash, Ali Eslami, Christopher P. Burgess, Irina Higgins, Daniel Zoran, Theophane Weber and Peter Battaglia.
- Discovering Disentangled Representations with the F-Statistic Loss. Karl Ridgeway and Michael C. Mozer.
- Natural Language Multitasking - Analyzing and Improving Syntactic Saliency of Hidden Representations. Gino Brunner, Yuyi Wang, Roger Wattenhofer and Michael Weigelt.
- Disentangling Video with Independent Prediction. William F. Whitney and Rob Fergus.
- JADE: Joint Autoencoders for Dis-Entanglement. Amir-Hossein Karimi, Ershad Banijamali, Alexander Wong and Ali Ghodsi.
- Disentangling the independently controllable factors of variation by interacting with the world. Valentin Thomas, Emmanuel Bengio, William Fedus, Jules Pondard, Philippe Beaudoin, Hugo Larochelle, Joelle Pineau, Doina Precup and Yoshua Bengio.
- On Disentangling Motion and Content for Video Generation. Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang and Jan Kautz.
- Variational Inference of Disentangled Latents from Unlabeled Observations. Abhishek Kumar, Prasanna Sattigeri and Avinash Balakrishnan.
- Understanding Disentangling in beta-VAE. Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins and Alexander Lerchner.
Important Dates
5 November 2017: Paper submission deadline
13 November 2017: Acceptance notification
24 November 2017: Camera-ready due
4–9 December 2017: NIPS Conference
9 December 2017: Workshop
Call for Papers
We welcome submissions of papers on the topic of disentangled representations. Topics include, but are not limited to:
- Algorithms for learning disentangled features
- Theoretical understanding of disentangled representations
- Definitions of disentangling
- Connections between disentangling and interpretability
- Applications that use disentangled representations for downstream tasks
- Separating style and content
- Separating syntax and semantics
- Neural and cognitive underpinnings of disentangling in natural intelligence
- Disentangling for control and reinforcement learning
Papers should not have been previously presented at other conferences; however, follow-up work, reviews or summaries of prior work, and papers submitted but not yet accepted elsewhere are welcome. Papers should be formatted according to the current NIPS style guide and must be under 5 pages in length, including references. All accepted papers will be presented at the poster sessions, and the top submissions will be given spotlight presentations. Submissions should be sent to firstname.lastname@example.org before midnight UTC on 5 November 2017.