CVPR 2020, SEATTLE
Most of the major advances in Deep Learning have come from supervised learning. Despite these successes, supervised learning algorithms share a major limitation: they require massive amounts of carefully, and typically expensively, annotated data. This workshop will emphasize future directions beyond supervised learning, such as reinforcement learning and weakly supervised learning. These approaches require far less supervision and allow computers to learn beyond mimicking what is explicitly encoded in a large-scale set of annotations. We want to focus on how Deep Learning algorithms can “scale” from 95% precision to 100% without requiring prohibitive (or unattainable) amounts of additional data. How do we generalize algorithms to handle unseen examples and outliers?
We encourage researchers to formulate innovative learning theories, feature representations, and end-to-end vision systems based on deep learning. We also encourage new theories and processes for dealing with large-scale image datasets through deep learning architectures.
We are soliciting original contributions that address a wide range of theoretical and practical issues including, but not limited to:
- Learning with limited data: trends and training strategies
- Unsupervised learning
- Transfer learning and domain transfer
- Large scale image and video understanding
- Reinforcement learning
- Unsupervised feature learning and feature selection
- Mid-level representations with deep learning
- Advancements in deep learning
- Domain transfer using deep architectures
- Real-time learning applications
- Lifelong learning