
Distributed Online Optimization via Kernel Reproduced Gradient Descent

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper focuses on a stochastic optimization problem in distributed online learning. To accelerate optimization in a distributed framework, a reproduced gradient method based on kernel learning is proposed. To suit practical distributed settings, the method uses random Fourier features for gradient learning, which enables privacy protection because model parameters and gradients need not be transmitted over the network. Further, a distributed online reproduced gradient descent algorithm is proposed, and its convergence rate is established. Numerical results show that the method outperforms existing distributed stochastic gradient descent based on gradient tracking.
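The abstract's key building block, random Fourier features, approximates a shift-invariant kernel by an explicit finite-dimensional feature map, so agents can exchange low-dimensional feature representations instead of raw gradients or model parameters. A minimal sketch of the standard Rahimi–Recht construction for the RBF kernel follows; the function name, the dimension `D`, and the bandwidth `sigma` are illustrative choices, not notation from the paper.

```python
import numpy as np

def random_fourier_features(X, D=100, sigma=1.0, seed=None):
    """Map rows of X (shape (n, d)) to D random features z(X) so that
    z(x) @ z(y) approximates the RBF kernel exp(-||x-y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Sample frequencies from the kernel's spectral density (a Gaussian)
    W = rng.normal(scale=1.0 / sigma, size=(d, D))
    # Random phase offsets uniform on [0, 2*pi)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Usage: the feature inner product approximates the exact kernel value,
# with error shrinking roughly as O(1/sqrt(D)).
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 5))
y = rng.normal(size=(1, 5))
Z = random_fourier_features(np.vstack([x, y]), D=5000, sigma=1.0, seed=1)
approx = float(Z[0] @ Z[1])
exact = float(np.exp(-np.sum((x - y) ** 2) / 2.0))
```

Because the map is explicit, each node can compute and share only the `D`-dimensional features, which is the mechanism the abstract credits for avoiding transmission of parameters and gradients.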

Original language: English
Title of host publication: 2023 International Annual Conference on Complex Systems and Intelligent Science, CSIS-IAC 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 358-362
Number of pages: 5
ISBN (Electronic): 9798350309003
DOIs
State: Published - 2023
Event: 2023 International Annual Conference on Complex Systems and Intelligent Science, CSIS-IAC 2023 - Shenzhen, China
Duration: 20 Oct 2023 - 22 Oct 2023

Publication series

Name: 2023 International Annual Conference on Complex Systems and Intelligent Science, CSIS-IAC 2023

Conference

Conference: 2023 International Annual Conference on Complex Systems and Intelligent Science, CSIS-IAC 2023
Country/Territory: China
City: Shenzhen
Period: 20/10/23 - 22/10/23

Keywords

  • distributed online learning
  • kernel learning
  • privacy protection
  • random Fourier features

