A general aggregation federated learning intervention algorithm based on do-calculus

Research output: Contribution to journal › Article › peer-review

Abstract

This article explores federated long-tail learning (Fed-LT), where clients hold private, heterogeneous data that collectively form a global long-tail distribution. We propose two methods: (a) the Client Re-weighted Prior Analyzer (CRePA), which balances the global model's performance across tail and non-tail categories, enhancing tail performance while maintaining non-tail performance; and (b) the Federated Long-Tail Causal Intervention Model (FedLT-CI), which computes clients' causal effects on the global model's tail performance and enhances the interpretability of Fed-LT. Extensive experiments on the CIFAR-10-LT and CIFAR-100-LT datasets demonstrate the following: (1) CRePA outperforms other baselines, achieving state-of-the-art (SOTA) performance. In scenarios with high heterogeneity and severe long-tail distributions, CRePA improves tail performance by 6.3% and 5% over CReFF and FedGrab, respectively. (2) FedLT-CI, by intervening in the aggregation process of federated learning (FL), effectively enhances the tail performance of baselines while keeping non-tail performance stable. For instance, on CIFAR-10-LT under a severe imbalance setting (α=0.1, IFG=100), applying the intervention strategy to the FedAvg, FedGrab, and CRePA models improves tail performance by 4.5%, 2.1%, and 1.9%, respectively.
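The abstract describes both methods as acting on the server-side aggregation step of FL. As a minimal sketch only: the snippet below shows generic re-weighted FedAvg-style aggregation, where each client's parameters enter the global average with a tunable weight. The actual CRePA weighting scheme and the do-calculus intervention of FedLT-CI are not specified in the abstract, so the weight values here are purely illustrative.

```python
import numpy as np

def reweighted_aggregate(client_params, weights):
    """Aggregate client parameter vectors as a weighted average.

    Generic sketch of re-weighted FL aggregation (FedAvg-style).
    How the weights are chosen (e.g., to favor tail categories, as
    CRePA reportedly does) is not detailed in the abstract.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # normalize to a convex combination
    stacked = np.stack(client_params)        # shape: (n_clients, n_params)
    return np.einsum("c,cp->p", weights, stacked)

# Toy usage: three clients, two parameters each; the third client
# (hypothetically rich in tail samples) is up-weighted.
clients = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
global_params = reweighted_aggregate(clients, [1.0, 1.0, 2.0])
```

With equal weights this reduces to plain FedAvg; the intervention methods in the paper instead adjust these weights (or the aggregation itself) to improve tail-category performance.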

Original language: English
Article number: 112210
Journal: Pattern Recognition
Volume: 171
DOIs
State: Published - Mar 2026

Keywords

  • Causal intervention
  • Heterogeneous data
  • Long-tail learning

