Learning rates for l1-regularized kernel classifiers

  • Hongzhi Tong*
  • Di Rong Chen
  • Fenghong Yang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

We consider a family of classification algorithms generated from a kernel regularization scheme with an l1-regularizer and a convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition consists of the approximation error, the hypothesis error, and the sample error. We apply some novel techniques to estimate the hypothesis error and the sample error. Learning rates are eventually derived under some assumptions on the kernel, the input space, the marginal distribution, and the approximation error.
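To make the abstract's setting concrete, the following is a minimal sketch of an l1-regularized kernel classification scheme of the kind described: a kernel expansion f(x) = Σ_j α_j K(x, x_j) is fit by minimizing an empirical convex loss plus an l1 penalty on the coefficients, and the classifier is sign(f). The Gaussian kernel, hinge loss, proximal subgradient solver, and all parameter values here are illustrative assumptions, not the paper's specific scheme or analysis.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Pairwise Gaussian kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_l1_kernel_classifier(X, y, lam=0.01, step=0.1, iters=500, sigma=1.0):
    """Minimize (1/n) * sum_i hinge(y_i * f(x_i)) + lam * ||alpha||_1
    over f(x) = sum_j alpha_j K(x, x_j), via proximal (sub)gradient descent."""
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(n)
    for _ in range(iters):
        margins = y * (K @ alpha)
        active = margins < 1  # samples where the hinge loss is non-flat
        # Subgradient of the average hinge loss w.r.t. alpha
        grad = -(K[:, active] * y[active]).sum(axis=1) / n
        alpha -= step * grad
        # Soft-thresholding: proximal step for the l1 penalty (induces sparsity)
        alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - step * lam, 0.0)
    return alpha

def predict(alpha, X_train, X_new, sigma=1.0):
    # Classify by the sign of the learned kernel expansion
    return np.sign(gaussian_kernel(X_new, X_train, sigma) @ alpha)
```

The soft-thresholding step is what the l1-regularizer contributes: it drives many coefficients α_j exactly to zero, so the resulting classifier is supported on a sparse subset of the training points.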

Original language: English
Article number: 496282
Journal: Journal of Applied Mathematics
Volume: 2013
DOIs
State: Published - 2013
