TY - GEN
T1 - Reinforcement Learning-Based Explainable Recommendation over Knowledge Graphs with Negative Sampling
AU - Zhang, Siyuan
AU - Ouyang, Yuanxin
AU - Liu, Zhuang
AU - Rong, Wenge
AU - Xiong, Zhang
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Introducing knowledge graphs (KGs) into recommender systems not only improves their performance but also enhances their interpretability. However, most KG-based recommendation methods suffer from inefficiency and ex-post explanation, problems that reinforcement learning (RL) methods can properly address. Most existing RL-based methods for explainable recommendation consider only positive rewards when designing the reward part of the RL environment, which is defective and misleads the policy of the RL agent. To address this problem, we propose Reinforced Knowledge Graph Reasoning with Reinforced Negative Sampling (RKGR-RNS), which introduces a negative sampling method into RL-based recommendation and refines the reward mechanism to help optimize the agent's policy. In addition, a judge module is proposed to further improve the performance of the recommender system. Experiments on three real-world datasets demonstrate that our method outperforms state-of-the-art baselines.
AB - Introducing knowledge graphs (KGs) into recommender systems not only improves their performance but also enhances their interpretability. However, most KG-based recommendation methods suffer from inefficiency and ex-post explanation, problems that reinforcement learning (RL) methods can properly address. Most existing RL-based methods for explainable recommendation consider only positive rewards when designing the reward part of the RL environment, which is defective and misleads the policy of the RL agent. To address this problem, we propose Reinforced Knowledge Graph Reasoning with Reinforced Negative Sampling (RKGR-RNS), which introduces a negative sampling method into RL-based recommendation and refines the reward mechanism to help optimize the agent's policy. In addition, a judge module is proposed to further improve the performance of the recommender system. Experiments on three real-world datasets demonstrate that our method outperforms state-of-the-art baselines.
KW - explainable recommendation
KW - knowledge graph
KW - negative sampling
KW - reinforcement learning
UR - https://www.scopus.com/pages/publications/85168138494
U2 - 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00282
DO - 10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00282
M3 - Conference contribution
AN - SCOPUS:85168138494
T3 - Proceedings - 2022 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Autonomous and Trusted Vehicles, Scalable Computing and Communications, Digital Twin, Privacy Computing, Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022
SP - 1948
EP - 1953
BT - Proceedings - 2022 IEEE SmartWorld, Ubiquitous Intelligence and Computing, Autonomous and Trusted Vehicles, Scalable Computing and Communications, Digital Twin, Privacy Computing, Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE SmartWorld, 19th IEEE International Conference on Ubiquitous Intelligence and Computing, 2022 IEEE International Conference on Autonomous and Trusted Vehicles Conference, 22nd IEEE International Conference on Scalable Computing and Communications, 2022 IEEE International Conference on Digital Twin, 8th IEEE International Conference on Privacy Computing and 2022 IEEE International Conference on Metaverse, SmartWorld/UIC/ATC/ScalCom/DigitalTwin/PriComp/Metaverse 2022
Y2 - 15 December 2022 through 18 December 2022
ER -