June 19th, 2021

Extreme Vision Modeling Workshop at CVPR 2021

The Speakers

Kristen Grauman

Professor at UT Austin, Research Scientist at Facebook AI Research


Quoc Le

Principal Research Scientist at Google Brain


Chen Sun

Staff Research Scientist at Google, Assistant Professor at Brown


Matthijs Douze

Research Scientist at Facebook AI Research


David Novotny

Research Scientist at Facebook AI Research


Mathilde Caron

PhD candidate at Inria and Facebook AI Research

Motivation

Provide a common forum for computer vision practitioners in both industry and academia to initiate discussions and propose the best ways to build models and datasets that can benefit the computer vision community at large.



Over the last few years, extremely large-scale datasets have become increasingly popular in the community. For instance, image models are now trained on billions of images and video models on millions of videos, and they show significant improvements over pre-training on ImageNet, the de facto pre-training dataset in computer vision. Moreover, as our computational abilities grow, so does our ability to train larger models. In the past 10 years, the computational cost of our best-performing models for image classification has increased to 7 billion FLOPs. This powerful combination of extremely large models trained on extremely large datasets has been a step change and is emerging as a clear winner in most computer vision challenges.
There are numerous research problems in this space that are relevant to the larger vision community and can pave the way for better extreme-scale learning. For instance, large datasets often have a skewed label distribution with a long tail that prevents us from taking full advantage of the huge and diverse label space; making advances on the other end of extreme vision, i.e., low-shot learning, is vital to improve training on such datasets. Similarly, noisy labels are unavoidable in extreme-scale datasets, so investing in better weakly supervised learning algorithms is critical for training at such a scale. Further, a common practice is to train and evaluate models in either a completely "weakly labeled" or "strongly labeled" setting. We want to spark a discussion towards a more practical middle ground, where a mixture of weak and strong labels is available, as is the case in large web datasets.

Organizers

  • Cheng-Yang Fu, Research Scientist at Facebook AI

  • Zhenheng Yang, Research Scientist at Facebook AI

  • Vignesh Ramanathan, Research Scientist at Facebook AI

  • Dhruv Mahajan, Research Scientist at Facebook AI

  • Laurens van der Maaten, Research Scientist at Facebook AI, Former Assistant Professor at Delft University of Technology

  • Alex Berg, Research Scientist at Facebook AI, Associate Professor at UNC

  • Ishan Misra, Research Scientist at Facebook AI