Adversarial Continuous Learning on Unsupervised Domain Adaptation

In this paper,

  • We propose a novel method: adversarial continuous learning in unsupervised domain adaptation (ACDA). The proposed ACDA model adversarially learns from high-confidence examples in the target domain while confusing the domain discriminator (sketched under Model below);

  • We are the first to propose a deep correlation loss, which encourages each prediction to be locally consistent with the predictions of nearby examples (a sketch of one possible form follows this list);

  • To better represent the learned features and train a robust classifier, we dynamically align both the marginal and conditional distributions of the source and target domains in a two-level domain alignment setting (see the second sketch after this list).
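
The paper does not spell out the deep correlation loss at this point, so the following is a minimal sketch of one plausible form, assuming the loss pulls each example's predicted distribution toward those of its k nearest neighbors in feature space via a KL term. The function name, the k-NN construction, and the KL formulation are illustrative assumptions, not the paper's exact definition.

```python
import torch
import torch.nn.functional as F

def deep_correlation_loss(features, logits, k=5):
    """Hypothetical local-consistency (deep correlation) loss.

    Penalizes KL divergence between each example's predicted
    distribution and those of its k nearest neighbors in feature
    space. The paper's exact formulation may differ.
    """
    probs = F.softmax(logits, dim=1)                 # (N, C)
    dists = torch.cdist(features, features)          # (N, N) pairwise distances
    dists.fill_diagonal_(float("inf"))               # exclude self-matches
    _, nn_idx = dists.topk(k, largest=False, dim=1)  # (N, k) neighbor indices
    neighbor_probs = probs[nn_idx].detach()          # (N, k, C)
    log_probs = F.log_softmax(logits, dim=1).unsqueeze(1)  # (N, 1, C)
    return F.kl_div(log_probs.expand_as(neighbor_probs),
                    neighbor_probs, reduction="batchmean")
```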

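For the two-level alignment, a common realization is a weighted sum of a marginal MMD term and a per-class (conditional) MMD term computed with target pseudo-labels. The sketch below assumes an RBF kernel and a fixed balance factor `mu`; in a dynamic scheme `mu` would be re-estimated during training, and the paper's exact distance measure may differ.

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Biased squared MMD estimate with an RBF kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def dynamic_alignment_loss(fs, ft, ys, yt_pseudo, num_classes, mu=0.5):
    """Hypothetical two-level (marginal + conditional) alignment loss.

    Target labels are pseudo-labels from the first-round classifier;
    `mu` trades off marginal vs. conditional alignment.
    """
    marginal = rbf_mmd(fs, ft)
    conditional, used = 0.0, 0
    for c in range(num_classes):
        fs_c, ft_c = fs[ys == c], ft[yt_pseudo == c]
        if len(fs_c) > 1 and len(ft_c) > 1:  # class must appear in both domains
            conditional = conditional + rbf_mmd(fs_c, ft_c)
            used += 1
    if used:
        conditional = conditional / used
    return (1 - mu) * marginal + mu * conditional
```
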
Overview of our proposed adversarial continuous learning in unsupervised domain adaptation (ACDA) model, which combines continuous learning and adversarial learning in a two-round classification framework. In the first round, the shared encoder learns a mapping from source images to target images and fools the domain discriminator (which attempts to distinguish source examples from target examples). In the second round, the shared encoder is trained on a new training set containing the original source images plus confident transfer examples from the target domain, resulting in an improved mapping. The yellow circle marks confident transfer examples.
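
To make the second round concrete, here is a minimal sketch of how confident transfer examples might be collected: target examples whose first-round softmax confidence exceeds a threshold are kept with their predicted labels and appended to the source training set. The 0.9 threshold and the function name are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_confident_examples(classifier, target_loader, threshold=0.9):
    """Hypothetical selection of high-confidence transfer examples.

    Keeps target examples whose first-round softmax confidence exceeds
    `threshold`, paired with their predicted (pseudo) labels, to be
    mixed with the labeled source set in the second round.
    """
    kept_x, kept_y = [], []
    for x_t, _ in target_loader:  # target labels are never used
        probs = F.softmax(classifier(x_t), dim=1)
        conf, pred = probs.max(dim=1)
        mask = conf > threshold
        kept_x.append(x_t[mask])
        kept_y.append(pred[mask])
    return torch.cat(kept_x), torch.cat(kept_y)
```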

Model

The architecture of our proposed ACDA model. We first extract features from both source and target domains via $G$. In the first-round classification, the shared encoder layers are trained with examples from the labeled source domain and the unlabeled target domain. In the second round, the shared encoder layers are additionally trained with high-confidence transfer examples from the target domain, labeled by the first-round classifier. The domain alignment loss reduces the discrepancy between the marginal and conditional distributions of the source and target domains. The red outline highlights the layers shared by the classifier $f$ and the domain discriminator $D$. $\mathcal{Y}_{S,pred}$ and $\mathcal{Y}_{T,pred}$ are the predicted labels of the source and target domains after domain distribution alignment.
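
The adversarial interplay between the shared layers and $D$ is commonly implemented with a gradient reversal layer, as in DANN: $D$ learns to separate domains while reversed gradients push the shared encoder toward domain-confusing features. The sketch below follows that standard recipe; the paper's exact adversarial objective may differ.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    """Binary classifier over shared features: source (0) vs. target (1)."""
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, features, lam=1.0):
        # Reversed gradients make the encoder maximize the discriminator's
        # loss, i.e. learn features that confuse the domain discriminator.
        return self.net(GradReverse.apply(features, lam))
```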

Results

Conclusion

We propose novel adversarial continuous learning in unsupervised domain adaptation (termed ACDA) to overcome limitations in generating proper transfer examples and in aligning the joint distributions of the two domains, by minimizing five loss functions. The generated transfer examples help the model further learn domain-invariant features of the two domains. As a component of our ACDA model, explicit domain-invariant features are learned through this cross-domain training scheme. Experiments on three benchmark datasets show the robustness of our proposed ACDA model.