DueT: Image-Text Contrastive Transfer Learning with Dual-adapter Tuning
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023) · December 2023
Vision-language models like CLIP are typically trained end-to-end with image-text contrastive learning, which requires many trainable parameters and risks discarding knowledge already present in strong uni-modal encoders. This paper proposes DueT (Dual-adapter Tuning), which initializes the image and text encoders from models pre-trained on uni-modal corpora, freezes them, and inserts trainable adapters into both encoders, so that only the adapters are updated during contrastive learning. Unlike conventional adapters, DueT's adapters include a gating mechanism that transfers knowledge from the frozen pre-trained encoders while preventing catastrophic forgetting. DueT outperforms full fine-tuning, the conventional approach of freezing the image encoder and training only the text encoder, and a LoRA-based adapter method in both accuracy and parameter efficiency on zero-shot image and text retrieval in English and Japanese domains.
BibTeX
@inproceedings{hasegawa2023duet,
title = {DueT: Image-Text Contrastive Transfer Learning with Dual-adapter Tuning},
author = {Hasegawa, Taku and Nishida, Kyosuke and Maeda, Koki and Saito, Kuniko},
booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)},
pages = {13607--13624},
year = {2023},
address = {Singapore},
publisher = {Association for Computational Linguistics}
}
Abstract
This paper presents DueT, a novel transfer learning method for vision and language models trained by contrastive learning. In DueT, adapters are inserted into the image and text encoders, which have been initialized using models pre-trained on uni-modal corpora and then frozen. By training only these adapters, DueT enables efficient learning with a reduced number of trainable parameters. Moreover, unlike traditional adapters, those in DueT are equipped with a gating mechanism, enabling effective transfer and connection of knowledge acquired from pre-trained uni-modal encoders while preventing catastrophic forgetting. We report that DueT outperformed simple fine-tuning, the conventional method that freezes the image encoder and trains only the text encoder, and the LoRA-based adapter method in accuracy and parameter efficiency for zero-shot image and text retrieval in both English and Japanese domains.
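The gated-adapter idea described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's exact formulation: the bottleneck shape, the sigmoid scalar gate, and the convex blend of frozen features with the adapter output are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_adapter(x, W_down, W_up, gate):
    """Hypothetical gated adapter applied to a frozen encoder's features x."""
    h = np.maximum(x @ W_down, 0.0)       # down-projection + ReLU bottleneck
    delta = h @ W_up                      # up-projection back to hidden size
    g = 1.0 / (1.0 + np.exp(-gate))       # sigmoid keeps the gate in (0, 1)
    # Blend frozen features with the adapter output; only W_down, W_up,
    # and gate would be trained, while the encoder producing x stays frozen.
    return (1.0 - g) * x + g * delta

d, r = 8, 2                               # hidden size, bottleneck size (illustrative)
x = rng.standard_normal((1, d))           # stand-in for frozen encoder features
W_down = rng.standard_normal((d, r)) * 0.1
W_up = rng.standard_normal((r, d)) * 0.1

# With the gate parameter strongly negative, g is near 0, so the adapter
# approximately passes the frozen features through unchanged -- one way a
# gate can preserve pre-trained knowledge at initialization.
y = gated_adapter(x, W_down, W_up, gate=-10.0)
print(np.allclose(y, x, atol=1e-3))
```

The gate gives the model a learnable dial between relying on the frozen uni-modal features and the newly learned cross-modal adjustment, which is one plausible reading of how such a mechanism can mitigate catastrophic forgetting.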