Abstract
Federated learning provides an efficient way to build an integrated model over distributed data, allowing each node to train on its local data. Multi-task learning, meanwhile, aims to build models for multiple related tasks simultaneously and to uncover their shared underlying structure. However, traditional federated multi-task learning models not only impose strict requirements on the data distribution but also demand heavy computation and converge slowly, which hinders their adoption in many fields. In our work, we apply a rank constraint to the weight vectors of the multi-task learning model to adaptively adjust task-similarity learning according to the distribution of federated node data. The proposed model offers a general framework for computing optimal solutions and can handle various data types. Experiments show that our model achieves the best results on different datasets. Notably, it still obtains stable results on datasets with large distribution differences. In addition, compared with traditional federated multi-task learning models, our algorithm converges to a locally optimal solution within a limited number of training iterations.
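The abstract does not spell out how the rank constraint couples the tasks. As a minimal illustrative sketch (not the paper's actual algorithm), the snippet below uses a nuclear-norm penalty, a standard convex surrogate for a rank constraint, to encourage the stacked per-task weight vectors to form a low-rank matrix, so related tasks share a low-dimensional latent structure. All function names, hyperparameters, and the toy data are our own assumptions for illustration.

```python
import numpy as np

def svd_soft_threshold(W, tau):
    """Proximal operator of the nuclear norm: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_mtl(Xs, ys, lam=0.05, lr=0.2, iters=500):
    """Fit T linear-regression tasks with a shared weight matrix W (d x T)
    by proximal gradient descent on
        sum_t ||X_t w_t - y_t||^2 / (2 n_t) + lam * ||W||_*,
    where ||W||_* (the nuclear norm) is a convex surrogate for rank(W).
    This is an illustrative stand-in for a rank-constrained MTL objective,
    not the federated algorithm proposed in the paper."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        G = np.zeros_like(W)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            # Per-task least-squares gradient (computable locally at each node).
            G[:, t] = X.T @ (X @ W[:, t] - y) / len(y)
        # Gradient step, then singular-value shrinkage to promote low rank.
        W = svd_soft_threshold(W - lr * G, lr * lam)
    return W

# Toy example: three tasks whose true weights share a rank-1 structure.
rng = np.random.default_rng(0)
w_shared = rng.normal(size=5)
Xs = [rng.normal(size=(50, 5)) for _ in range(3)]
ys = [X @ (c * w_shared) + 0.01 * rng.normal(size=50)
      for X, c in zip(Xs, [1.0, 0.5, -1.0])]
W = low_rank_mtl(Xs, ys)
print(np.linalg.svd(W, compute_uv=False))  # singular values decay sharply
```

In a federated setting, the per-task gradients would be computed locally at each node and only the aggregated weight matrix updated centrally; the sketch above collapses that into a single loop for clarity.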
| Original language | English |
|---|---|
| Pages (from-to) | 441-452 |
| Number of pages | 12 |
| Journal | Journal of Information Processing Systems |
| Volume | 17 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jun 2021 |
| Externally published | Yes |
Keywords
- Data Distribution
- Federated Multi-Task Learning
- Rank Constraint
- Underlying Structure
Title: A Federated Multi-Task Learning Model Based on Adaptive Distributed Data Latent Correlation Analysis