
On the Benefits of Two Dimensional Metric Learning

  • Di Wu
  • Fan Zhou
  • Boyu Wang*
  • Qicheng Lao
  • Chi Man Wong
  • Changjian Shui
  • Yuan Zhou
  • Feng Wan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we study two-dimensional metric learning (2DML) for matrix data from both theoretical and algorithmic perspectives. We first investigate generalization bounds for 2DML based on the notion of Rademacher complexity, which theoretically justifies the benefits of learning from matrices directly. We then present a novel boosting-based algorithm that scales well with the feature dimension. Finally, we introduce an efficient rank-one correction algorithm, tailored to our boosting procedure, that produces a low-rank solution to 2DML. Because our algorithm works directly on data in matrix representation, it preserves the structure and dependencies in the data and has a more compact form with far fewer parameters to optimize. Extensive evaluations on several benchmark data sets empirically verify the effectiveness and efficiency of our algorithm.
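To illustrate the flavor of the approach, the sketch below shows one common way a low-rank 2D metric can be assembled from rank-one terms, as in boosting-style metric learning. This is an illustrative parameterization only (the distance form `tr((X-Y)^T M (X-Y))` with `M = Σ_t α_t u_t u_tᵀ`, and the names `matrix_distance`, `U`, `alphas`, are my own assumptions, not necessarily the paper's exact formulation):

```python
import numpy as np

def matrix_distance(X, Y, U, alphas):
    """Squared distance between matrix samples X, Y under a low-rank
    2D metric M = sum_t alphas[t] * u_t u_t^T, i.e.
    d^2(X, Y) = tr((X - Y)^T M (X - Y)).
    (Hypothetical form for illustration; the paper's parameterization
    may differ.)"""
    D = X - Y                      # difference kept in matrix form
    total = 0.0
    for u, a in zip(U, alphas):
        v = D.T @ u                # tr(D^T u u^T D) = ||u^T D||^2
        total += a * float(v @ v)  # weighted rank-one contribution
    return total

# Tiny usage example: one rank-one direction picking out the first row.
X = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 0.0]])
Y = np.zeros((3, 2))
u = np.array([1.0, 0.0, 0.0])
print(matrix_distance(X, Y, [u], [1.0]))  # → 1.0
```

Note how the metric on m×n matrices needs only T vectors of length m (plus T weights) rather than a full (mn)×(mn) Mahalanobis matrix over vectorized samples, which is the parameter-count saving the abstract alludes to.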

Original language: English
Pages (from-to): 1909-1921
Number of pages: 13
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 35
Issue number: 2
State: Published - 1 Feb 2023
Externally published: Yes

Keywords

  • Rademacher complexity
  • Two dimensional learning
  • boosting
  • low-rank matrices
  • metric learning
