TY - GEN
T1 - SSL-DC
T2 - 26th International Conference on Pattern Recognition, ICPR 2022
AU - Yang, Huayi
AU - Wang, Deqing
AU - Zhao, Zhengyang
AU - Wang, Xuying
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Few-shot learning, which aims to distinguish unseen classes from only a few labeled samples, remains challenged by overfitting. The transductive few-shot learning paradigm reduces overfitting by training a highly discriminative feature representation via self-supervised learning, since all unlabeled samples may be accessed. In this paper, we propose a simple but efficient approach based on self-supervised pre-training and nearest-class-prototype search, which significantly improves performance on transductive few-shot learning tasks without external samples. However, since the class prototype is estimated from limited support samples, it is easily skewed by biased samples. We therefore train a conditional generative adversarial network to estimate the feature distribution instead of assuming it is Gaussian, as in prior work. This lets us generate features close to real features from the estimated distribution and calibrate the distribution of the class prototype. Extensive experiments show that our method significantly outperforms many recent transductive few-shot learning methods, achieving 9.83% and 4.38% improvements over the previous best method under the transductive 5-way 1-shot and 5-shot settings with ResNet-12 on miniImageNet.
AB - Few-shot learning, which aims to distinguish unseen classes from only a few labeled samples, remains challenged by overfitting. The transductive few-shot learning paradigm reduces overfitting by training a highly discriminative feature representation via self-supervised learning, since all unlabeled samples may be accessed. In this paper, we propose a simple but efficient approach based on self-supervised pre-training and nearest-class-prototype search, which significantly improves performance on transductive few-shot learning tasks without external samples. However, since the class prototype is estimated from limited support samples, it is easily skewed by biased samples. We therefore train a conditional generative adversarial network to estimate the feature distribution instead of assuming it is Gaussian, as in prior work. This lets us generate features close to real features from the estimated distribution and calibrate the distribution of the class prototype. Extensive experiments show that our method significantly outperforms many recent transductive few-shot learning methods, achieving 9.83% and 4.38% improvements over the previous best method under the transductive 5-way 1-shot and 5-shot settings with ResNet-12 on miniImageNet.
UR - https://www.scopus.com/pages/publications/85143618222
U2 - 10.1109/ICPR56361.2022.9956433
DO - 10.1109/ICPR56361.2022.9956433
M3 - Conference contribution
AN - SCOPUS:85143618222
T3 - Proceedings - International Conference on Pattern Recognition
SP - 4892
EP - 4898
BT - 2022 26th International Conference on Pattern Recognition, ICPR 2022
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 21 August 2022 through 25 August 2022
ER -