TY - GEN
T1 - Mitigate Hallucinations in LLM’s Understanding of Dynamic Graph
T2 - 32nd International Conference on Neural Information Processing, ICONIP 2025
AU - Chen, Yuou
AU - Zhang, Ruixing
AU - Liu, Yunqi
AU - Zhu, Tongyu
AU - Liu, Mingzhe
AU - Lv, Weifeng
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2026.
PY - 2026
Y1 - 2026
N2 - Large Language Models (LLMs) have recently demonstrated remarkable potential in graph-based tasks by integrating natural language understanding with structural reasoning. However, when applied to dynamic graphs (networks that evolve over time), LLMs exhibit a surprising and counterintuitive phenomenon: their accuracy in answering the fundamental question of when two nodes are linked is only around 50%. Since recognizing edge existence is one of the most basic graph understanding tasks, this shortcoming undermines critical graph operations such as triadic closure detection and degree calculation. We define this phenomenon as the hallucination problem in dynamic graphs. In response, we introduce two novel mechanisms. First, the Existence Fine-tuning Mechanism explicitly trains LLMs to recognize and retain the persistent presence of edges in dynamic graphs. Second, the Edge Reduction Mechanism enforces a strategy that decomposes complex reasoning tasks into simpler, more reliable edge-existence queries. To further enhance the reasoning capabilities of smaller LLMs, we propose a Teacher-Forcing Distillation Strategy that leverages high-quality decomposition strategies generated by a larger model. Extensive experiments on multiple models first demonstrate that hallucination is common in models of around 7B parameters; further experiments demonstrate that our methods substantially mitigate hallucinations and achieve state-of-the-art performance. In summary, our findings reveal a critical limitation in current LLMs’ handling of dynamic graphs and provide a robust framework and effective solutions for advancing dynamic graph reasoning capabilities in these models. All code can be accessed at https://anonymous.4open.science/r/DyG-Hallucination-6CCB.
AB - Large Language Models (LLMs) have recently demonstrated remarkable potential in graph-based tasks by integrating natural language understanding with structural reasoning. However, when applied to dynamic graphs (networks that evolve over time), LLMs exhibit a surprising and counterintuitive phenomenon: their accuracy in answering the fundamental question of when two nodes are linked is only around 50%. Since recognizing edge existence is one of the most basic graph understanding tasks, this shortcoming undermines critical graph operations such as triadic closure detection and degree calculation. We define this phenomenon as the hallucination problem in dynamic graphs. In response, we introduce two novel mechanisms. First, the Existence Fine-tuning Mechanism explicitly trains LLMs to recognize and retain the persistent presence of edges in dynamic graphs. Second, the Edge Reduction Mechanism enforces a strategy that decomposes complex reasoning tasks into simpler, more reliable edge-existence queries. To further enhance the reasoning capabilities of smaller LLMs, we propose a Teacher-Forcing Distillation Strategy that leverages high-quality decomposition strategies generated by a larger model. Extensive experiments on multiple models first demonstrate that hallucination is common in models of around 7B parameters; further experiments demonstrate that our methods substantially mitigate hallucinations and achieve state-of-the-art performance. In summary, our findings reveal a critical limitation in current LLMs’ handling of dynamic graphs and provide a robust framework and effective solutions for advancing dynamic graph reasoning capabilities in these models. All code can be accessed at https://anonymous.4open.science/r/DyG-Hallucination-6CCB.
KW - Dynamic graph
KW - Hallucination
KW - Large Language Model
KW - Prompt
UR - https://www.scopus.com/pages/publications/105022747757
U2 - 10.1007/978-981-95-4088-4_8
DO - 10.1007/978-981-95-4088-4_8
M3 - Conference contribution
AN - SCOPUS:105022747757
SN - 9789819540877
T3 - Communications in Computer and Information Science
SP - 109
EP - 123
BT - Neural Information Processing - 32nd International Conference, ICONIP 2025, Proceedings
A2 - Taniguchi, Tadahiro
A2 - Leung, Chi Sing Andrew
A2 - Kozuno, Tadashi
A2 - Yoshimoto, Junichiro
A2 - Mahmud, Mufti
A2 - Doborjeh, Maryam
A2 - Doya, Kenji
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 20 November 2025 through 24 November 2025
ER -