An important facet of human experience is our ability to break down what we observe and interact with along characteristic lines. Visual scenes consist of separate objects, which may have different poses and identities within their categories. In natural language, the syntax and semantics of a sentence can often be separated from one another. In planning and cognition, plans can be broken down into immediate and long-term goals. Inspired by this, much research in deep representation learning has gone into finding disentangled factors of variation. However, this research often lacks a clear definition of what disentangling is, or a clear connection to work in other branches of machine learning, neuroscience, or cognitive science. In this workshop we intend to bring a wide swathe of scientists studying disentangled representations under one roof to work towards a unified view of the problem of disentangling.
The workshop will address these issues through three focus areas:

What is disentangling? Are disentangled representations simply statistically independent representations, or is there something more? How does disentangling relate to interpretability? Can we define what it means to separate style and content, or is human judgement the final arbiter? Are disentangled representations the same as equivariant representations?

How can disentangled representations be discovered? What is the current state of the art in learning disentangled representations? What are the cognitive and neural underpinnings of disentangled representations in animals and humans? Most work in disentangling has focused on perception, but we will encourage dialogue with researchers in natural language processing and reinforcement learning, as well as with neuroscientists and cognitive scientists.

Why do we care about disentangling? What are the downstream tasks that can benefit from using disentangled representations? Does the downstream task determine which disentanglement is relevant to learn? What does disentangling get us in terms of improved prediction or behavior in intelligent agents?
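As a concrete point of reference for these questions, the following is a minimal, hypothetical sketch (in PyTorch; not the method of any particular speaker) of one widely discussed approach to unsupervised disentangling: a variational autoencoder whose KL term is up-weighted by a factor beta > 1, which pressures the latent code towards statistically independent factors. All architectural choices and hyperparameters here are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BetaVAE(nn.Module):
        """Illustrative beta-VAE-style model: beta > 1 encourages factorized latents."""
        def __init__(self, x_dim=784, z_dim=10, h_dim=256, beta=4.0):
            super().__init__()
            self.beta = beta
            self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
            self.mu = nn.Linear(h_dim, z_dim)
            self.logvar = nn.Linear(h_dim, z_dim)
            self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization: sample z from the approximate posterior
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            return self.dec(z), mu, logvar

        def loss(self, x):
            x_logits, mu, logvar = self(x)
            recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction='sum')
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            # beta > 1 strengthens the pull towards an isotropic (independent) prior
            return (recon + self.beta * kl) / x.size(0)

    # Toy usage: a batch of flattened images with pixel values in [0, 1]
    model = BetaVAE()
    x = torch.rand(64, 784)
    model.loss(x).backward()

Whether the statistical independence encouraged by such an objective is the same thing as disentangling, and whether it actually helps downstream tasks, are exactly the questions the focus areas above are meant to probe.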
Yoshua Bengio - UMontreal
Finale Doshi-Velez - Harvard
Ahmed Elgammal - Rutgers
Irina Higgins - DeepMind
Pushmeet Kohli - DeepMind
Doina Precup - McGill/DeepMind
Stefano Soatto - UCLA
Doris Tsao - Caltech
This workshop is co-located with NIPS 2017 and will take place on Saturday, 9 December 2017, in Room 203 of the Long Beach Convention Center.
8:30 - 9:00 Set up Posters & Welcome: Josh Tenenbaum
9:00 - 9:30 Stefano Soatto - Emergence of Invariance and Disentangling in Deep Representations
9:30 - 10:00 Irina Higgins - Unsupervised Disentangling or How to Transfer Skills and Imagine Things
10:00 - 10:30 Finale Doshi-Velez - Counterfactually-Faithful Explanation: An Application for Disentangled Representations
10:30 - 11:00 Poster Session & Break
11:00 - 11:30 Doris Tsao - The Neural Code for Visual Objects
11:30 - 12:15 Poster Spotlights (4 minutes each)
12:15 - 14:00 Lunch Break
14:00 - 14:30 Doina Precup - Learning independently controllable features for temporal abstraction
14:30 - 15:00 Pushmeet Kohli - Exploring the different paths to achieving disentangled representations
15:00 - 15:30 Poster Session & Break
15:30 - 16:00 Yoshua Bengio - Priors to help automatically discover and disentangle explanatory factors
16:00 - 16:30 Ahmed Elgammal - Generalized Separation of Style and Content on Manifolds: The role of Homeomorphism
16:30 - 17:00 Final Poster Break
17:00 - 18:00 Panel discussion
5 November 2017: Paper submission deadline
13 November 2017: Acceptance notification
24 November 2017: Camera ready due
4–9 December 2017: NIPS Conference
9 December 2017: Workshop
Diane Bouchacourt - Oxford / Facebook AI Research
Emily Denton - New York University
Tejas Kulkarni - DeepMind
Honglak Lee - Google / U. Michigan
Siddharth N - Oxford
David Pfau - DeepMind
Josh Tenenbaum - MIT
If you wish to contact the organizers with any questions about the workshop, please email nips-disentangling@googlegroups.com
We welcome submissions of papers on the topic of disentangled representations. This includes, but is not limited to, papers on:
Papers should not have been previously presented at other conferences, but follow-up work, reviews or summaries of prior work, and papers submitted but not yet accepted elsewhere are welcome. Papers should be formatted according to the current NIPS style guide and are required to be under 5 pages in length, including references. All accepted papers will be presented at the poster sessions, and the top submissions will be given spotlight presentations. Submissions should be sent to nips-disentangling+submission@googlegroups.com before midnight UTC on 5 November.