Overview
Big data has driven a revolution in many domains of machine learning thanks to modern high-capacity models, but the standard approaches -- supervised learning from labels, or reinforcement learning from a reward function -- have become a bottleneck. Even when data is abundant, obtaining the labels or rewards that specify exactly what the model must do is often intractable. Collecting simple category labels for classification is prohibitive for millions or billions of examples, and structured outputs (scene interpretations, interactions, demonstrations) are far worse, especially when the data distribution is non-stationary.
Self-supervised learning is a promising alternative in which proxy tasks are designed to let models and agents learn without explicit supervision, in a way that improves downstream performance on tasks of interest. One of the major benefits of self-supervised learning is improved data efficiency: achieving comparable or better performance with less labeled data or fewer environment steps (in reinforcement learning / robotics).
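To make the idea of a proxy task concrete, here is a minimal, self-contained sketch (not taken from any workshop paper; the function name and data are illustrative). It implements a rotation-prediction pretext task in the spirit of RotNet-style methods: each unlabeled image is rotated by a random multiple of 90 degrees, and the rotation index serves as a label obtained for free, turning unlabeled data into a standard supervised classification problem.

```python
import numpy as np

def make_rotation_task(images, rng):
    """Turn unlabeled images into a 4-way rotation-prediction task.

    Each image is rotated by 0/90/180/270 degrees; the rotation index
    becomes the target, so no human annotation is needed. A classifier
    trained on (x, y) pairs like these learns image features that often
    transfer to downstream tasks. (Illustrative sketch; names are
    hypothetical, not an API from any specific library or paper.)
    """
    xs, ys = [], []
    for img in images:
        k = rng.integers(4)          # rotation index: the "free" label
        xs.append(np.rot90(img, k))  # proxy input
        ys.append(k)                 # proxy target
    return np.stack(xs), np.array(ys)

# Unlabeled data: 8 random 32x32 "images".
rng = np.random.default_rng(0)
images = rng.standard_normal((8, 32, 32))
x, y = make_rotation_task(images, rng)
print(x.shape, y.shape)  # (8, 32, 32) (8,)
```

The point of the example is that the supervision signal is manufactured from the data itself; any pretext task with this property (colorization, context prediction, contrastive objectives) fits the same template.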
The field of self-supervised learning (SSL) is rapidly evolving, and the performance of these methods is creeping closer to that of fully supervised approaches. However, many of these methods are still developed in domain-specific sub-communities, such as vision, RL and NLP, even though many similarities exist between them. While SSL is an emerging topic and there is great interest in these techniques, there are currently few workshops, tutorials or other scientific events dedicated to it.
This workshop aims to bring together experts with different backgrounds and application areas to share inter-domain ideas, increase cross-pollination, tackle current shortcomings and explore new directions. The focus will be on the machine learning point of view rather than the domain side.
Speakers
Dates
- Submission deadline: May 6, 2019 (Anywhere on Earth)
- Notifications: May 18, 2019
- Camera ready: June 4, 2019 (Anywhere on Earth)
- Workshop: June 15, 2019
Call For Papers
Extended abstracts should be at most 4 pages long (excluding references and appendix). Submissions should be in PDF format and should follow the style guidelines for ICML 2019 (found here), but do not have to be anonymized. Work should be submitted via email to selfsupervised.icml2019@gmail.com on or before May 6, 2019 (Anywhere on Earth). Submissions will be reviewed by the organizers, with decisions sent during the week of May 13, 2019. Submissions should not have been previously published or have appeared in the ICML main conference, but work currently under submission to another conference is welcome. There will be no formal publication of workshop proceedings; however, the accepted papers will be made available on the workshop website.
We welcome submissions on any form of self-supervised learning, including but not limited to:
- Self-supervised learning methods for Vision, Audio, Video, NLP, Robotics, RL, …
- Multi-modal and cross-modal learning
- Evaluation of SSL tasks in semi-supervised learning settings
- Visualization and analysis of representations learned with SSL
- Theory on SSL loss functions
- Meta-learning of SSL tasks
- Self-supervised domain adaptation
Note that works on unsupervised learning with generative models will be considered, but this is not the main focus of the workshop.
Schedule
Subject to change
08:50 - 09:00 - Opening remarks
09:00 - 09:30 - Jacob Devlin
09:30 - 10:00 - Alison Gopnik
10:00 - 10:30 - Contributed Talks
- 10:00 - 10:15 - Learning Latent Plans from Play - Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet
- 10:15 - 10:30 - Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty - Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, Dawn Song
10:30 - 11:30 - First poster session + Coffee break
11:30 - 12:00 - Chelsea Finn
12:00 - 14:00 - Lunch Break
14:00 - 14:30 - Yann LeCun (TBC)
14:30 - 15:00 - Contributed Talks
- 14:30 - 14:45 - Revisiting Self-Supervised Visual Representation Learning - Alexander Kolesnikov, Xiaohua Zhai, Lucas Beyer
- 14:45 - 15:00 - Data-Efficient Image Recognition with Contrastive Predictive Coding - Olivier J. Henaff, Ali Razavi, Carl Doersch, S. M. Ali Eslami, Aaron van den Oord
15:00 - 16:00 - Second poster session + Coffee Break
16:00 - 16:30 - Andrew Zisserman
16:30 - 17:00 - Abhinav Gupta
17:00 - 17:30 - Alexei Efros
Organizers
Accepted Papers
First Poster Session: 10:30 - 11:30
- Self-supervised learning of inverse problem solvers in medical imaging - Ortal Senouf, Sanketh Vedula, Tomer Weiss, Alex Bronstein, Oleg Michailovich, Michael Zibulevsky
- Multi-modal Self-Supervised Learning for Human Activity Recognition - Aaqib Saeed, Tanir Ozcelebi, Johan Lukkien
- Unsupervised and interpretable scene discovery with Discrete-Attend-Infer-Repeat - Duo Wang, Mateja Jamnik, Pietro Lio
- Paired Cell Inpainting: Self-Supervised Multiple-Instance Learning for Bioimage Analysis - Alex X. Lu, Amy X. Lu, Oren Z. Kraus, Sam Cooper, Wiebke Schormann, David W. Andrews, Alan M. Moses
- Greedy InfoMax for Self-Supervised Representation Learning - Sindy Löwe, Peter O'Connor, Bastiaan S. Veeling
- Self-supervised audio representation learning based on temporal context - M. Tagliasacchi, B. Gfeller, D. Roblek
- Contrastive Predictive Coding for Video Representation Learning - Guillaume Lorre, Jaonary Rabarisoa, Astrid Orcesi, Samia Ainouz, Stéphane Canu
- PackNet-SfM: 3D Packing for Self-Supervised Monocular Depth Estimation - Vitor Guizilini, Rares Ambrus, Sudeep Pillai, Adrien Gaidon
- End-to-End Robotic Reinforcement Learning without External Rewards - Avi Singh, Larry Yang, Kristian Hartikainen, Chelsea Finn, Sergey Levine
- Learning Representations by Maximizing Mutual Information Across Views - Philip Bachman, R Devon Hjelm, William Buchwalter
- Unsupervised Visuomotor Control through Distributional Planning Networks - Tianhe Yu, Gleb Shevchuk, Dorsa Sadigh, Chelsea Finn
- Supervise Thyself: Examining Self-Supervised Representations in Interactive Environments - Evan Racah, Chris Pal
- Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty - Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, Dawn Song
- Skew-Fit: State-Covering Self-Supervised Reinforcement Learning - Vitchyr H. Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl, Sergey Levine
- Contrastive Multiview Coding - Yonglong Tian, Dilip Krishnan, Phillip Isola
- Goal-conditioned Imitation Learning - Yiming Ding, Carlos Florensa, Mariano Phielipp, Pieter Abbeel
Second Poster Session: 15:00 - 16:00
- Data-Efficient Image Recognition with Contrastive Predictive Coding - Olivier J. Henaff, Ali Razavi, Carl Doersch, S. M. Ali Eslami, Aaron van den Oord
- Planning to Explore Visual Environments without Rewards - Danijar Hafner, Jimmy Ba, Mohammad Norouzi, Timothy Lillicrap
- TC-Net: Self-Supervised Monocular Video Scene Understanding Using Temporally Consistent Geometric Prior - Iaroslav Melekhov, Esa Rahtu, Juho Kannala, Alex Kendall
- Learning to Explore via Disagreement - Deepak Pathak, Dhiraj Gandhi, Abhinav Gupta
- Revisiting Self-Supervised Visual Representation Learning - Alexander Kolesnikov, Xiaohua Zhai, Lucas Beyer
- S4L: Self-Supervised Semi-Supervised Learning - Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, Lucas Beyer
- Information-Bottleneck Approach to Self-Attention - Andrey Zhmoginov, Ian Fischer, Mark Sandler
- Temporal Cycle-Consistency Learning - Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, Andrew Zisserman
- Batch weight for domain adaptation with mass shift - Mikołaj Binkowski, R Devon Hjelm, Aaron Courville
- Option Discovery by Aiming to Predict - Veronica Chelu, Doina Precup
- Learning Latent Plans from Play - Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet
- Semi-supervised Learning of Representation for Video Summarization - Ke Zhang, Bowen Zhang, Hexiang Hu, Fei Sha
- Learning Exploration Policies for Model-Agnostic Meta-Reinforcement Learning - Swaminathan Gurumurthy, Sumit Kumar, Katia Sycara
- Perceptual Values from Observation - Ashley D. Edwards, Charles L. Isbell
- Semi-supervised Skill Discovery via Dynamical Distance Learning - Kristian Hartikainen, Tuomas Haarnoja, Xinyang Geng, Sergey Levine
- Unsupervised State Representation Learning in Atari - Ankesh Anand, Evan Racah, Sherjil Ozair, Yoshua Bengio, Marc-Alexandre Côté, R Devon Hjelm