Inertial accelerated stochastic mirror descent for large-scale generalized tensor CP decomposition

Research output: Contribution to journal › Article › peer-review

Abstract

The majority of classic tensor CP decomposition models are designed for the squared loss, utilizing the Euclidean distance as a local proximal term. However, the Euclidean distance is unsuitable for the generalized loss functions applicable to diverse types of real-world data, such as integer and binary data. Consequently, algorithms developed under the squared loss are not easily adaptable to these generalized losses, partly due to the absence of gradient Lipschitz continuity. This paper explores generalized tensor CP decomposition, employing the Bregman distance as the proximal term and introducing an inertial accelerated block randomized stochastic mirror descent algorithm (iTableSMD). Within a broader multi-block variance reduction and inertial acceleration framework, we demonstrate a sublinear convergence rate for the subsequences produced by the iTableSMD algorithm. We further show that iTableSMD requires at most O(ε⁻²) iterations in expectation to attain an ε-stationary point and establish the global convergence of the sequence. Numerical experiments on real datasets demonstrate that the proposed algorithm is efficient and achieves better performance than existing state-of-the-art methods.
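To illustrate the core building block the abstract refers to, here is a minimal sketch of a mirror descent step with a Bregman proximal term, using the negative-entropy mirror map (which yields an exponentiated-gradient update and keeps iterates in the positive orthant). This is only the generic technique, not the paper's iTableSMD algorithm, which additionally incorporates block randomization, variance reduction, and inertial acceleration; the toy objective and step size below are illustrative assumptions.

```python
import numpy as np

def mirror_descent_step(x, grad, step):
    """One mirror descent step with the negative-entropy mirror map:
    argmin_u <grad, u> + (1/step) * KL(u, x), whose closed form is the
    exponentiated-gradient update x * exp(-step * grad)."""
    return x * np.exp(-step * grad)

# Toy usage (illustrative): minimize f(x) = 0.5 * ||x - t||^2 over the
# positive orthant; the Bregman (KL) proximal term keeps x > 0 without
# any explicit projection.
t = np.array([1.0, 2.0, 0.5])   # hypothetical target
x = np.ones(3)                  # positive starting point
for _ in range(200):
    x = mirror_descent_step(x, x - t, step=0.1)  # gradient of f is x - t
```

In contrast to a Euclidean proximal step, the KL-based update adapts naturally to losses (e.g. for count or binary data) whose gradients are not Lipschitz continuous, which is the setting the paper targets.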

Original language: English
Article number: 114948
Pages (from-to): 201-233
Number of pages: 33
Journal: Computational Optimization and Applications
Volume: 91
Issue number: 1
DOIs
State: Published - May 2025

Keywords

  • Bregman divergence
  • Generalized tensor CP decomposition
  • Inertial acceleration
  • Non-Lipschitz gradient continuity
  • Stochastic mirror descent
  • Variance reduction
