TY - GEN
T1 - Towards Robust False Information Detection on Social Networks with Contrastive Learning
AU - Ma, Guanghui
AU - Hu, Chunming
AU - Ge, Ling
AU - Chen, Junfan
AU - Zhang, Hong
AU - Zhang, Richong
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/10/17
Y1 - 2022/10/17
N2 - Constructing a robust conversation-graph-based false information detection model is crucial for real social platforms. Recently, graph neural network (GNN) methods for false information detection have achieved significant advances. However, we empirically find that slight perturbations in the conversation graph can cause the predictions of existing models to collapse. To address this problem, we present RDCL, a contrastive learning framework for false information detection on social networks, to obtain robust detection results. RDCL leverages contrastive learning to maximize the consistency between perturbed graphs from the same original graph and minimize the distance between perturbed and original graphs from the same class, forcing the model to improve resistance to data perturbations. Moreover, we prove the importance of hard positive samples for contrastive learning and propose a hard positive sample pairs generation method (HPG) for conversation graphs, which can generate stronger gradient signals to improve the contrastive learning effect and make the model more robust. Experiments on various GNN encoders and datasets show that RDCL outperforms the current state-of-the-art models.
AB - Constructing a robust conversation-graph-based false information detection model is crucial for real social platforms. Recently, graph neural network (GNN) methods for false information detection have achieved significant advances. However, we empirically find that slight perturbations in the conversation graph can cause the predictions of existing models to collapse. To address this problem, we present RDCL, a contrastive learning framework for false information detection on social networks, to obtain robust detection results. RDCL leverages contrastive learning to maximize the consistency between perturbed graphs from the same original graph and minimize the distance between perturbed and original graphs from the same class, forcing the model to improve resistance to data perturbations. Moreover, we prove the importance of hard positive samples for contrastive learning and propose a hard positive sample pairs generation method (HPG) for conversation graphs, which can generate stronger gradient signals to improve the contrastive learning effect and make the model more robust. Experiments on various GNN encoders and datasets show that RDCL outperforms the current state-of-the-art models.
KW - contrastive learning
KW - false information detection
KW - graph neural networks
KW - robustness
KW - social networks
UR - https://www.scopus.com/pages/publications/85140915877
U2 - 10.1145/3511808.3557477
DO - 10.1145/3511808.3557477
M3 - Conference contribution
AN - SCOPUS:85140915877
T3 - International Conference on Information and Knowledge Management, Proceedings
SP - 1441
EP - 1450
BT - CIKM 2022 - Proceedings of the 31st ACM International Conference on Information and Knowledge Management
PB - Association for Computing Machinery
T2 - 31st ACM International Conference on Information and Knowledge Management, CIKM 2022
Y2 - 17 October 2022 through 21 October 2022
ER -