Understanding the Dimensional Need of Noncontrastive Learning

Abstract
Noncontrastive self-supervised learning methods offer an effective alternative to contrastive approaches, avoiding representation collapse without the need for negative samples. To supply negative samples, contrastive methods typically require large batch sizes and are therefore regarded as sample inefficient; noncontrastive methods, which explicitly or implicitly optimize the representation space, instead require large representation dimensions and are therefore regarded as dimension inefficient. Despite some understanding of noncontrastive learning, a theoretical analysis of this phenomenon remains largely unexplored. We present a theoretical analysis of the dimensional need of noncontrastive learning. Studying the transfer from upstream representation learning to downstream tasks, we show how noncontrastive methods implicitly increase interclass distances in the representation space and how these distances affect downstream evaluation performance. We prove that the performance of noncontrastive methods depends on both the output dimension and the number of latent classes, and explain why performance degrades significantly when the output dimension is substantially smaller than the number of latent classes. We verify our findings through image classification experiments and extend the verification to audio, graph, and text modalities. Beyond classification, we also evaluate image models on extensive detection and segmentation tasks, observing close correspondence with our theorem.
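As a back-of-the-envelope illustration of why the output dimension matters (this is not the paper's theorem, merely a related geometric fact), the Welch bound shows that C unit-norm class means in R^d cannot all remain far apart once C exceeds d: the largest pairwise cosine similarity is bounded below, which caps the smallest achievable interclass distance. The function names below are our own:

```python
import math

def welch_coherence_bound(num_classes, dim):
    """Welch lower bound on the largest pairwise cosine similarity
    among num_classes unit vectors in R^dim (valid for num_classes >= dim)."""
    C, d = num_classes, dim
    return math.sqrt((C - d) / (d * (C - 1)))

def max_achievable_min_distance(num_classes, dim):
    """Upper bound on the smallest pairwise Euclidean distance between
    unit-norm class means, via ||x - y||^2 = 2 - 2<x, y>."""
    mu = welch_coherence_bound(num_classes, dim)
    return math.sqrt(2.0 - 2.0 * mu)

# With output dimension d = 8, the best possible interclass separation
# shrinks as the number of latent classes C grows past d:
for C in (8, 64, 512):
    print(C, round(max_achievable_min_distance(C, 8), 3))
```

This mirrors the qualitative claim above: when the number of latent classes substantially exceeds the output dimension, class means are forced closer together, which is consistent with the observed performance degradation.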
| Original language | English |
|---|---|
| Pages (from-to) | 4089-4102 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Cybernetics |
| Volume | 55 |
| Issue number | 9 |
| DOIs | |
| State | Published - 2025 |
Keywords
- Generalization analysis
- non-contrastive learning
- self-supervised learning