
On Unsupervised Domain Adaptation: Pseudo Label Guided Mixup for Adversarial Prompt Tuning

  • Fanshuang Kong
  • Richong Zhang*
  • Ziqiao Wang
  • Yongyi Mao
  • *Corresponding author for this work
  • Beihang University
  • Zhongguancun Laboratory
  • University of Ottawa

Research output: Contribution to journal › Conference article › peer-review

Abstract

To date, the backbone of methods for unsupervised domain adaptation (UDA) involves learning label-discriminative features via a label classifier and domain-invariant features via a domain discriminator in an adversarial scheme. However, these methods lack explicit control for aligning the source data and target data within the same label class, which degrades the classifier's performance in the target domain. In this paper, we propose PL-Mix, a pseudo label guided Mixup method based on adversarial prompt tuning. Specifically, PL-Mix facilitates class-dependent alignment and alleviates the impact of noisy pseudo labels. We further theoretically justify that PL-Mix improves generalization for UDA. Extensive experiments comparing PL-Mix against existing models also demonstrate its effectiveness.
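The class-dependent alignment idea in the abstract can be sketched in a minimal form: mix each labeled source example with a target example that shares its pseudo label, and skip target examples whose pseudo-label confidence is low to limit noise. The function name, Beta-distribution parameter, and confidence threshold below are illustrative assumptions, not the paper's actual implementation, which applies Mixup within adversarial prompt tuning.

```python
import numpy as np

def pl_guided_mixup(x_src, y_src, x_tgt, pseudo_labels, confidences,
                    alpha=0.2, conf_threshold=0.8, rng=None):
    """Sketch of pseudo-label guided Mixup (hypothetical helper).

    For each source example, pick a target example with a matching,
    high-confidence pseudo label and mix the two inputs; if none
    qualifies, keep the source example unchanged.
    Returns the mixed batch and the mixing coefficients used.
    """
    rng = np.random.default_rng(rng)
    mixed, lams = [], []
    for xs, ys in zip(x_src, y_src):
        # candidates: target examples whose pseudo label matches the
        # source label and whose confidence clears the threshold
        mask = (pseudo_labels == ys) & (confidences >= conf_threshold)
        idx = np.flatnonzero(mask)
        if idx.size == 0:
            # no reliable same-class target example: keep the pure source sample
            mixed.append(xs)
            lams.append(1.0)
            continue
        j = rng.choice(idx)
        lam = rng.beta(alpha, alpha)  # standard Mixup coefficient
        mixed.append(lam * xs + (1.0 - lam) * x_tgt[j])
        lams.append(lam)
    return np.stack(mixed), np.array(lams)
```

Gating the mix on pseudo-label confidence is one simple way to realize the "alleviate noisy pseudo-labels" property the abstract claims; the actual method's mechanism may differ.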

Original language: English
Pages (from-to): 18399-18407
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 38
Issue number: 16
DOIs
State: Published - 25 Mar 2024
Event: 38th AAAI Conference on Artificial Intelligence, AAAI 2024 - Vancouver, Canada
Duration: 20 Feb 2024 - 27 Feb 2024

