Self-Supervised Learning

Workshop - ICML 2019

selfsupervised.icml2019@gmail.com

Overview

Big data has driven a revolution in many domains of machine learning thanks to modern high-capacity models, but the standard approaches -- supervised learning from labels, or reinforcement learning from a reward function -- have become a bottleneck. Even when data is abundant, obtaining the labels or rewards that specify exactly what the model must do is often intractable. Collecting simple category labels for classification is prohibitively expensive for millions or billions of examples, and structured outputs (scene interpretations, interactions, demonstrations) are far more costly, especially when the data distribution is non-stationary.

Self-supervised learning is a promising alternative in which proxy tasks are devised that let models and agents learn without explicit supervision, in a way that improves downstream performance on tasks of interest. One of the major benefits of self-supervised learning is improved data efficiency: achieving comparable or better performance with less labeled data or fewer environment steps (in reinforcement learning / robotics).
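
For concreteness, below is a minimal sketch of one well-known proxy task, rotation prediction (Gidaris et al., 2018): each unlabeled image is rotated by 0, 90, 180, or 270 degrees, and a model is trained to predict which rotation was applied, so the "labels" come for free from the data itself. This is an illustrative PyTorch example, not code associated with the workshop; the encoder architecture and hyperparameters are placeholders.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Tiny convolutional backbone; stands in for a real architecture."""
        def __init__(self, feat_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        def forward(self, x):
            return self.net(x)

    def make_rotation_batch(images):
        """Build the proxy task: rotate each image by k*90 degrees and use
        the rotation index k as a free classification label."""
        rotated = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
        labels = [torch.full((images.size(0),), k, dtype=torch.long)
                  for k in range(4)]
        return torch.cat(rotated), torch.cat(labels)

    encoder = Encoder()
    head = nn.Linear(64, 4)  # predicts which of the 4 rotations was applied
    optimizer = torch.optim.Adam(
        list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    images = torch.randn(8, 3, 32, 32)  # stand-in for an unlabeled image batch
    inputs, targets = make_rotation_batch(images)
    loss = loss_fn(head(encoder(inputs)), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

After pretraining on such a proxy task, the encoder can be reused or fine-tuned on a downstream task with far less labeled data, which is the data-efficiency benefit described above.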

The field of self-supervised learning (SSL) is evolving rapidly, and the performance of these methods is approaching that of fully supervised approaches. However, many of these methods are still developed in domain-specific sub-communities, such as Vision, RL, and NLP, even though many similarities exist between them. While SSL is an emerging topic that attracts great interest, there are currently few workshops, tutorials, or other scientific events dedicated to it.

This workshop aims to bring together experts with different backgrounds and application areas to share ideas across domains, increase cross-pollination, tackle current shortcomings, and explore new directions. The focus will be on the machine learning point of view rather than on any single application domain.

Speakers

Yann LeCun (Facebook AI Research)
Chelsea Finn (Stanford, Google, UC Berkeley)
Andrew Zisserman (University of Oxford, DeepMind)
Alexei Efros (UC Berkeley)
Abhinav Gupta (Carnegie Mellon University, Facebook AI Research)

Dates

  • Submission deadline: May 6, 2019 (Anywhere on Earth)
  • Notifications: May 18, 2019
  • Camera ready: May 31, 2019 (Anywhere on Earth)
  • Workshop: June 14 or 15, 2019

Call For Papers

Extended abstracts should be at most 4 pages long (excluding references and appendices). Submissions should be in PDF format and follow the ICML 2019 style guidelines, but do not have to be anonymized. Work should be submitted via email to selfsupervised.icml2019@gmail.com on or before May 6, 2019 (Anywhere on Earth). Submissions will be reviewed by the organizers, with decisions sent during the week of May 13, 2019. Submissions should not have been previously published or have appeared in the ICML main conference, but work currently under review at another conference is welcome. There will be no formal publication of workshop proceedings; however, the accepted papers will be made available on the workshop website.

We welcome submissions on any form of self-supervised learning, including but not limited to:

  • Self-supervised learning methods for Vision, Audio, Video, NLP, Robotics, RL, …
  • Multi-modal and cross-modal learning
  • Evaluation of SSL tasks in semi-supervised learning settings
  • Visualization and analysis of representations learned with SSL
  • Theory on SSL loss functions
  • Meta-learning of SSL tasks
  • Self-supervised domain adaptation

Note that work on unsupervised learning with generative models will be considered, but it is not the main focus of the workshop.

Schedule

To be announced

Organizers

Carl Vondrick (Columbia University, Google)
Amir Zamir (Stanford, UC Berkeley)
Pieter Abbeel (UC Berkeley)