The Double-Accelerated Stochastic Method for Regularized Empirical Risk Minimization

  • Liu Liu*
  • Dacheng Tao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we propose a general scheme, the double-accelerated stochastic method (DASM), for solving the ill-conditioned regularized empirical risk minimization (ERM) problem, whose objective is the sum of two convex functions: a finite sum of smooth component functions and a block-separable regularizer. We accelerate the minimization of such objectives with an inner-outer iteration procedure that uses the accelerated randomized proximal coordinate gradient method as the inner procedure and the accelerated proximal point algorithm as the outer procedure. To connect the inner and outer procedures, we sharpen the dependence on the condition number and redefine the regularization parameter, improving both the runtime and the convergence rate. We also incorporate the non-smooth regularization term into the inner accelerated procedure. Furthermore, we give a theoretical analysis of the iteration complexity and runtime, which we validate in practical applications. We show that the proposed DASM performs better than or comparably to current state-of-the-art accelerated methods on the ill-conditioned ERM problem.
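The inner-outer structure described in the abstract can be illustrated with a minimal sketch: an accelerated proximal point outer loop whose kappa-augmented subproblems are solved approximately by an inner procedure. Everything below is an assumption for illustration, not the paper's DASM: the inner solver is plain gradient descent rather than the accelerated randomized proximal coordinate gradient method, the momentum schedule is a generic Catalyst-style choice, and the test problem is a toy ridge regression.

```python
import numpy as np

# Hedged sketch of an inner-outer ("double") acceleration scheme. All names,
# parameter choices, and the momentum schedule are illustrative assumptions;
# this does not reproduce the paper's DASM algorithm.

def inner_solver(grad_aug, x0, step, iters=100):
    """Inner procedure (stand-in): gradient descent on the kappa-augmented
    subproblem, which is far better conditioned than the original objective."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * grad_aug(x)
    return x

def accelerated_prox_point(grad_f, x0, L, mu, kappa, outer_iters=100):
    """Outer procedure: each step approximately minimizes
    f(x) + (kappa/2)*||x - y||^2, then extrapolates with a Nesterov-style
    momentum weight for strongly convex problems."""
    q = mu / (mu + kappa)                          # Catalyst-style momentum parameter
    beta = (1 - np.sqrt(q)) / (1 + np.sqrt(q))     # extrapolation weight
    x_prev = x0.copy()
    y = x0.copy()
    for _ in range(outer_iters):
        # Gradient of the augmented subproblem: grad_f(x) + kappa*(x - y).
        grad_aug = lambda x, y=y: grad_f(x) + kappa * (x - y)
        x = inner_solver(grad_aug, y, step=1.0 / (L + kappa))
        y = x + beta * (x - x_prev)
        x_prev = x
    return x_prev

# Toy ill-conditioned ERM: f(x) = (1/2n)*||Ax - b||^2 + (lam/2)*||x||^2.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d)) * np.linspace(1.0, 10.0, d)  # skewed column scales
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
lam = 1e-3
grad_f = lambda x: A.T @ (A @ x - b) / n + lam * x
eigs = np.linalg.eigvalsh(A.T @ A / n)
L, mu = eigs.max() + lam, eigs.min() + lam
x_star = np.linalg.solve(A.T @ A / n + lam * np.eye(d), A.T @ b / n)
x_hat = accelerated_prox_point(grad_f, np.zeros(d), L, mu, kappa=0.1 * L)
```

The choice kappa ≈ 0.1·L makes each augmented subproblem well conditioned, so a cheap inner solver suffices per outer step; the paper's redefined regularization parameter plays an analogous role in linking the two procedures.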

Original language: English
Article number: 8640089
Pages (from-to): 440-451
Number of pages: 12
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
Volume: 3
Issue number: 6
DOIs
State: Published - Dec 2019
Externally published: Yes

Keywords

  • Accelerated stochastic optimization
  • accelerated proximal coordinate gradient
  • ill-condition
  • stochastic dual coordinate ascent
