A Federated Multi-Task Learning Model Based on Adaptive Distributed Data Latent Correlation Analysis

  • Shengbin Wu
  • Yibai Wang* (*corresponding author for this work)

Research output: Contribution to journal › Article › peer-review

Abstract

Federated learning provides an efficient integrated model for distributed data, allowing different data to be trained locally. Meanwhile, multi-task learning aims to build models for multiple related tasks simultaneously and to uncover their underlying shared structure. However, traditional federated multi-task learning models not only impose strict requirements on the data distribution, but also demand large amounts of computation and converge slowly, which hinders their adoption in many fields. In our work, we apply a rank constraint to the weight vectors of the multi-task learning model to adaptively adjust task-similarity learning according to the distribution of the federated node data. The proposed model has a general framework for solving the optimal solution, which can be used to handle various data types. Experiments show that our model achieves the best results on different datasets. Notably, our model still obtains stable results on datasets with large distribution differences. In addition, compared with traditional federated multi-task learning models, our algorithm converges to a local optimal solution within a limited number of training iterations.
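The abstract's core idea, a rank constraint on the stacked task weight vectors so that federated nodes share a low-dimensional latent structure, can be illustrated with a minimal sketch. The paper's exact optimization procedure is not given in the abstract, so the code below is an assumption: it approximates the rank constraint with the standard nuclear-norm proximal step (singular-value soft-thresholding) applied at the server after each round of local least-squares gradient updates. All function names and hyperparameters here are illustrative, not the authors' implementation.

```python
import numpy as np

def svd_threshold(W, tau):
    """Proximal operator of the nuclear norm: soft-threshold the singular
    values of W, pushing the stacked task-weight matrix toward low rank
    (a convex surrogate for an explicit rank constraint)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def federated_low_rank_mtl(tasks, dim, rounds=200, lr=0.05, tau=0.01, seed=0):
    """Hypothetical federated multi-task learner.

    tasks: list of (X, y) least-squares problems, one per federated node.
    Column t of W holds the weight vector of task t; the server-side
    SVD shrinkage couples the tasks through a shared latent structure.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(dim, len(tasks)))
    for _ in range(rounds):
        # Local step: each node updates only its own column.
        for t, (X, y) in enumerate(tasks):
            grad = X.T @ (X @ W[:, t] - y) / len(y)
            W[:, t] -= lr * grad
        # Server step: enforce the shared low-rank structure across tasks.
        W = svd_threshold(W, lr * tau)
    return W
```

As a usage sketch, two related tasks whose true weights are scalings of one shared direction are fit jointly; the shrinkage step keeps the learned weight matrix close to rank one without requiring the nodes to exchange raw data.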

Original language: English
Pages (from-to): 441-452
Number of pages: 12
Journal: Journal of Information Processing Systems
Volume: 17
Issue number: 3
DOIs
State: Published - Jun 2021
Externally published: Yes

Keywords

  • Data Distribution
  • Federated Multi-Task Learning
  • Rank Constraint
  • Underlying Structure
