Xuehai He 1, Diji Yang 1, Weixi Feng 2, Tsu-Jui Fu 2, Arjun Akula 3, Varun Jampani 3, Pradyumna Narayana 3, Sugato Basu 3, William Yang Wang 2, Xin Eric Wang 1
1UC Santa Cruz, 2UC Santa Barbara, 3Google
Prompt tuning is a recent few-shot transfer learning technique that tunes only a learnable prompt for pre-trained vision and language models such as CLIP. However, existing prompt tuning methods tend to learn spurious or entangled representations, which leads to poor generalization to unseen concepts. Towards non-spurious and efficient prompt learning from limited examples, this paper presents a novel Counterfactual Prompt Learning (CPL) method for vision and language models, which simultaneously employs counterfactual generation and contrastive learning in a joint optimization framework. In particular, CPL constructs counterfactuals by identifying the minimal non-spurious feature change between semantically similar positive and negative samples that causes a concept change, and learns more generalizable prompt representations from both factual and counterfactual examples via contrastive learning. Extensive experiments demonstrate that CPL obtains superior few-shot performance over previous prompt tuning methods on CLIP across different vision and language tasks. On image classification, we achieve a 3.55% average relative improvement on unseen classes across seven datasets; on image-text retrieval and visual question answering, we gain up to 4.09% and 25.08% relative improvement across three few-shot scenarios on unseen test sets, respectively.
Results
On image classification, CPL achieves clear advantages over CoCoOp on both seen and unseen classes across seven datasets.
On image-text retrieval, CPL significantly outperforms CoCoOp across three different few-shot settings.
On visual question answering, CPL consistently outperforms CoCoOp across three different few-shot settings.
For preprocessing, we construct the task-relevant prompts for all training samples.
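A minimal sketch of what this preprocessing step could look like, assuming simple per-task string templates; the template texts and function names below are illustrative assumptions, not the paper's exact prompts:

```python
# Hypothetical sketch of the preprocessing step: wrapping each training
# sample's label, caption, or question into a task-relevant prompt string.
# The templates are illustrative assumptions, not the paper's exact ones.

TEMPLATES = {
    "image_classification": "a photo of a {}.",
    "image_text_retrieval": "{}",                            # captions used directly
    "visual_question_answering": "question: {} answer: {}",
}

def build_prompt(task: str, *fields: str) -> str:
    """Fill the task-relevant template with the sample's text fields."""
    return TEMPLATES[task].format(*fields)

# Example usage on a few training samples.
print(build_prompt("image_classification", "golden retriever"))
print(build_prompt("visual_question_answering", "what color is the car?", "red"))
```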
During training, given a positive image-prompt pair, we first perform text-based negative sampling to find the most semantically similar negative sample based on text-similarity scores. Then we adopt a controllable counterfactual generation strategy to construct the counterfactual from the positive and negative samples in the visual feature space.
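Below is a minimal PyTorch sketch of these two steps, negative sampling by text similarity and counterfactual construction in the visual feature space; the feature shapes, the sigmoid-parameterized mask `u`, and all function names are illustrative assumptions rather than the released implementation:

```python
# Sketch (assumed shapes and names): (1) pick the most semantically similar
# negative by cosine similarity of text features; (2) build a counterfactual
# image feature by minimally mixing positive and negative visual features.
import torch
import torch.nn.functional as F

def sample_negative(text_feats: torch.Tensor, pos_idx: int) -> int:
    """Return the index of the most semantically similar *other* sample."""
    sims = F.cosine_similarity(text_feats[pos_idx : pos_idx + 1], text_feats, dim=-1)
    sims[pos_idx] = float("-inf")          # exclude the positive itself
    return int(sims.argmax())

def counterfactual_feature(v_pos: torch.Tensor,
                           v_neg: torch.Tensor,
                           u: torch.Tensor) -> torch.Tensor:
    """Mix positive and negative image features with a mask u kept in [0, 1]."""
    u = torch.sigmoid(u)                   # soft mask in [0, 1]
    return (1.0 - u) * v_pos + u * v_neg   # small feature change toward the negative

# Toy example with random CLIP-sized (512-d) features.
text_feats = torch.randn(8, 512)
image_feats = torch.randn(8, 512)
neg_idx = sample_negative(text_feats, pos_idx=0)
u = torch.zeros(512, requires_grad=True)   # mask optimized to stay minimal
v_cf = counterfactual_feature(image_feats[0], image_feats[neg_idx], u)
```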
Finally, we perform contrastive learning using both generated counterfactual image features and factual image features in a joint optimization framework to fine-tune the task-agnostic prompt.
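A hedged sketch of how such a joint objective could look, combining a CLIP-style classification term on factual image features with a contrastive term against counterfactual features; the exact loss form, the temperature, and the weighting `lam` are assumptions for illustration, not the paper's precise formulation:

```python
# Assumed joint objective: cross-entropy over class prompts for factual
# features, plus a contrastive term that prefers the factual feature over
# its counterfactual counterpart for the ground-truth prompt.
import torch
import torch.nn.functional as F

def cpl_style_loss(prompt_text_feats, factual_img_feats, counterfactual_img_feats,
                   labels, temperature=0.07, lam=1.0):
    t = F.normalize(prompt_text_feats, dim=-1)      # (num_classes, d)
    v = F.normalize(factual_img_feats, dim=-1)      # (batch, d)
    v_cf = F.normalize(counterfactual_img_feats, dim=-1)

    # Factual term: classify each image against all class prompts.
    logits = v @ t.t() / temperature
    ce = F.cross_entropy(logits, labels)

    # Contrastive term: the ground-truth prompt should score higher for the
    # factual feature than for the counterfactual one.
    pos = (v * t[labels]).sum(-1) / temperature
    neg = (v_cf * t[labels]).sum(-1) / temperature
    contrast = -F.log_softmax(torch.stack([pos, neg], dim=-1), dim=-1)[:, 0].mean()

    return ce + lam * contrast

# Toy usage with random features; in practice only the prompt parameters
# would receive gradients.
loss = cpl_style_loss(torch.randn(10, 512), torch.randn(4, 512),
                      torch.randn(4, 512), torch.randint(0, 10, (4,)))
```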
Contact Xuehai He at xhe89@ucsc.edu for more information about the project.
@article{he2022cpl,
title={CPL: Counterfactual Prompt Learning for Vision and Language Models},
author={He, Xuehai and Yang, Diji and Feng, Weixi and Fu, Tsu-Jui and Akula, Arjun and Jampani, Varun and Narayana, Pradyumna and Basu, Sugato and Wang, William Yang and Wang, Xin Eric},
journal={arXiv preprint arXiv:2210.10362},
year={2022}
}