Transferring and Adapting Source Knowledge in Computer Vision and VisDA Challenge
In conjunction with ECCV 2020
This is the 7th annual workshop bringing together computer vision researchers interested in domain adaptation and knowledge transfer techniques. Given last year's success, we will again run the Domain Adaptation Challenge.
A key ingredient of the recent successes of computer vision methods is the availability of large sets of annotated data. However, collecting such data is prohibitive in many real applications, and it is natural to search for alternative sources of knowledge that can be transferred or adapted to provide sufficient learning support. Our workshop aims to bring together researchers in various sub-areas of Transfer Learning (TL) and Domain Adaptation (DA) for computer vision. Moreover, this year we propose a new edition of the VisDA challenge, focusing on domain adaptive instance retrieval, where the source and target samples are drawn from synthetic and real image domains of pedestrians and vehicles.
Session 1: Sunday 23 August - 10:00 - 12:00 UTC+1
Session 2: Sunday 23 August - 20:00 - 22:00 UTC+1
Accordingly, TASK-CV aims to bring together research in transfer learning and domain adaptation for computer vision, and invites the submission of research contributions on the following topics:
TL/DA methods for challenging paradigms such as unsupervised, incremental, open-set, universal, online, and federated learning.
TL/DA CNN architectures with new adaptation techniques, fine-tuning strategies, regularization approaches, weight-transfer solutions, etc.
TL/DA focusing on specific computer vision tasks (e.g., image classification, object detection, semantic segmentation, retrieval, tracking, etc.) and applications (biomedical, robotics, multimedia, autonomous driving, etc.).
TL/DA methods working at feature and pixel (generative) level as well as jointly applied with other learning paradigms such as reinforcement learning.
DA in case of sensor differences (e.g., low-vs-high resolution, power spectrum sensitivity, different RGB/Depth modalities) and compression schemes.
Datasets and protocols for evaluating TL/DA methods.
Going beyond TL/DA towards domain generalization.
Multi-Task, Zero-, One-, and Few-Shot Learning.
This is not an exhaustive list; we welcome other interesting and relevant research for TASK-CV.
ACCEPTED PAPERS
Class-imbalanced Domain Adaptation: An Empirical Odyssey. Shuhan Tan, Xingchao Peng, Kate Saenko
Sequential Learning for Domain Generalization. Da Li, Yongxin Yang, Yi-Zhe Song, Timothy Hospedales
Generating Visual and Semantic Explanations with Multi-task Network. Wenjia Xu, Jiuniu Wang, Yang Wang, Yirong Wu, Zeynep Akata
SpotPatch: Parameter-Efficient Transfer Learning for Mobile Object Detection. Keren Ye, Adriana Kovashka, Mark Sandler, Menglong Zhu, Andrew Howard, Marco Fornoni
Using Sentences as Semantic Representations in Large Scale Zero-Shot Learning. Yannick Le Cacheux, Herve Le Borgne, Michel Crucianu
Adversarial Transfer of Pose Estimation Regression. Boris Chidlovskii, Assem Sadek
Disentangled Image Generation for Unsupervised Domain Adaptation. Safa Cicek, Ning Xu, Zhaowen Wang, Hailin Jin, Stefano Soatto
Domain Generalization using Shape Representation. Narges Honarvar Nazari, Adriana Kovashka
Bi-Dimensional Feature Alignment for Cross-Domain Object Detection. Zhen Zhao, Yuhong Guo, Jieping Ye
Bayesian Zero-Shot Learning. Sarkhan Badirli, Zeynep Akata, Murat Dundar
Self-Supervision for 3D Real-World Challenges. Antonio Alliegro, Davide Boscaini, Tatiana Tommasi
Diversified Mutual Learning for Deep Metric Learning. Wonpyo Park, Wonjae Kim, Kihyun You, Minsu Cho
Domain Generalization vs Data Augmentation: an unbiased perspective. Francesco Cappio Borlino, Antonio D'Innocente, Tatiana Tommasi
ACCEPTED 'ONGOING' WORKS
Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings. Viraj Prabhu, Arjun Chandrasekaran, Kate Saenko, Judy Hoffman
Multi-Task Incremental Learning for Object Detection. Xialei Liu, Hao Yang, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto
Bookworm continual learning: beyond zero-shot learning and continual learning. Kai Wang, Luis Herranz, Anjan Dutta, Joost van de Weijer
The Best Paper Award will be announced during the last live workshop session.