C-SFDA: A Curriculum Learning Aided Self-Training Framework for Efficient Source Free Domain Adaptation

Nazmul Karim, Niluthpol Chowdhury Mithun, Abhinav Rajvanshi, Han-pang Chiu, Supun Samarasekera, Nazanin Rahnavard

Department of ECE, UCF, Orlando, FL, USA — SRI International, Princeton, NJ, USA

Accepted at CVPR 2023


                                                                   Abstract

Unsupervised domain adaptation (UDA) approaches focus on adapting models trained on a labeled source domain to an unlabeled target domain. In contrast to UDA, source-free domain adaptation (SFDA) is a more practical setup, as access to source data is no longer required during adaptation. Recent state-of-the-art (SOTA) methods on SFDA mostly focus on pseudo-label refinement based self-training, which generally suffers from two issues: i) the inevitable occurrence of noisy pseudo-labels that can lead to early training time memorization, and ii) a refinement process that requires maintaining a memory bank, which creates a significant burden in resource-constrained scenarios. To address these concerns, we propose C-SFDA, a curriculum learning aided self-training framework for SFDA that adapts efficiently and reliably to changes across domains based on selective pseudo-labeling. Specifically, we employ a curriculum learning scheme to promote learning from a restricted amount of pseudo-labels selected based on their reliability. This simple yet effective step successfully prevents label noise propagation during different stages of adaptation and eliminates the need for costly memory-bank based label refinement. Our extensive experimental evaluations on both image recognition and semantic segmentation tasks confirm the effectiveness of our method. C-SFDA is readily applicable to online test-time domain adaptation and also outperforms previous SOTA methods in this task.

            Overview

Figure 1. Left: In source-free domain adaptation, we only have a source model that needs to be adapted to the target data. Among the source-generated pseudo-labels, a large portion is noisy, which is important to avoid during supervised self-training (SST) with the regular cross-entropy loss. Instead of using all pseudo-labels, we choose the most reliable ones and effectively propagate high-quality label information to unreliable samples. As training progresses, the proposed selection strategy tends to choose more samples for SST due to the improved average reliability of the pseudo-labels. Such a restricted self-training strategy creates a model with better discrimination ability and eventually corrects the noisy predictions. Here, T is the total number of iterations. Right: While existing SFDA techniques leverage cluster structure knowledge in the feature space, there may exist many misleading neighbors, i.e., neighbors whose pseudo-labels differ from the anchor's true label. Therefore, clustering-based label propagation inevitably suffers from label noise in subsequent training.
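The selection idea in Figure 1 can be sketched in a few lines. The snippet below is an illustrative toy version, not the paper's exact algorithm: it uses maximum softmax confidence as a stand-in reliability score and a simple linear schedule (the names `select_reliable`, `r0`, and `r_max` are hypothetical) to keep a growing fraction of the most reliable pseudo-labels as adaptation progresses.

```python
import numpy as np

def select_reliable(probs, step, total_steps, r0=0.2, r_max=0.8):
    """Pick target samples whose pseudo-labels are used for self-training.

    probs: (N, C) softmax outputs of the adapting model on target data.
    step, total_steps: current iteration t and total iterations T.
    r0, r_max: initial and final fraction of samples kept
               (assumed linear curriculum schedule, for illustration only).
    """
    # Curriculum: keep a larger fraction of samples as training progresses,
    # reflecting the improved average reliability of pseudo-labels.
    frac = r0 + (r_max - r0) * step / total_steps
    k = max(1, int(frac * len(probs)))

    confidence = probs.max(axis=1)        # reliability proxy per sample
    keep = np.argsort(-confidence)[:k]    # indices of the k most confident
    pseudo_labels = probs[keep].argmax(axis=1)
    return keep, pseudo_labels
```

In a full pipeline, only the `keep` subset would receive the supervised cross-entropy loss, while label information is propagated to the remaining unreliable samples by other means (e.g., consistency regularization).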

                     Summary of Contributions



                          Proposed Method

                                          Performance

                   VISDA-C Dataset 

     Any Queries?

Contact nazmul.karim18@knights.ucf.edu for more information about the project.