TY - GEN
T1 - Federated Bias-Aware Latent Factor Model for Privacy-Preserving Recommendation
AU - Gao, Jun Xiang
AU - Ran, Yixin
AU - Chen, Jia
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - A recommender system (RS) aims to provide users with personalized item recommendations, enhancing their overall experience. Traditional RSs collect and process all user data on a central server. However, this centralized approach raises significant privacy concerns, as it increases the risk of data breaches and privacy leakages, which are becoming increasingly unacceptable to privacy-sensitive users. To address these privacy challenges, federated learning has been integrated into RSs, ensuring that user data remains secure. In centralized RSs, the issue of rating bias is effectively addressed by jointly analyzing all users' raw interaction data. However, this becomes a significant challenge in federated RSs, as raw data is no longer accessible due to privacy-preserving constraints. To overcome this problem, we propose a Federated Bias-Aware Latent Factor (FBALF) model. In FBALF, training bias is explicitly incorporated into every local model's loss function, allowing for the effective elimination of rating bias without compromising data privacy. Extensive experiments conducted on three real-world datasets demonstrate that FBALF achieves significantly higher recommendation accuracy compared to other state-of-the-art federated RSs.
AB - A recommender system (RS) aims to provide users with personalized item recommendations, enhancing their overall experience. Traditional RSs collect and process all user data on a central server. However, this centralized approach raises significant privacy concerns, as it increases the risk of data breaches and privacy leakages, which are becoming increasingly unacceptable to privacy-sensitive users. To address these privacy challenges, federated learning has been integrated into RSs, ensuring that user data remains secure. In centralized RSs, the issue of rating bias is effectively addressed by jointly analyzing all users' raw interaction data. However, this becomes a significant challenge in federated RSs, as raw data is no longer accessible due to privacy-preserving constraints. To overcome this problem, we propose a Federated Bias-Aware Latent Factor (FBALF) model. In FBALF, training bias is explicitly incorporated into every local model's loss function, allowing for the effective elimination of rating bias without compromising data privacy. Extensive experiments conducted on three real-world datasets demonstrate that FBALF achieves significantly higher recommendation accuracy compared to other state-of-the-art federated RSs.
KW - Federated Learning
KW - Privacy Leakage
KW - Rating Bias
KW - Recommender System
UR - https://www.scopus.com/pages/publications/105012096188
U2 - 10.1109/ICAISISAS64483.2025.11051720
DO - 10.1109/ICAISISAS64483.2025.11051720
M3 - Conference contribution
AN - SCOPUS:105012096188
T3 - 2025 Joint International Conference on Automation-Intelligence-Safety, ICAIS 2025 and International Symposium on Autonomous Systems, ISAS 2025
BT - 2025 Joint International Conference on Automation-Intelligence-Safety, ICAIS 2025 and International Symposium on Autonomous Systems, ISAS 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 Joint International Conference on Automation-Intelligence-Safety, ICAIS 2025 and International Symposium on Autonomous Systems, ISAS 2025
Y2 - 23 May 2025 through 25 May 2025
ER -