
Mitigate Hallucinations in LLM’s Understanding of Dynamic Graph: An In-Depth Evaluation and Enhancement

  • Yuou Chen
  • Ruixing Zhang
  • Yunqi Liu
  • Tongyu Zhu
  • Mingzhe Liu*
  • Weifeng Lv

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Large Language Models (LLMs) have recently demonstrated remarkable potential in graph-based tasks by integrating natural language understanding with structural reasoning. However, when applied to dynamic graphs, i.e., networks that evolve over time, LLMs exhibit a surprising and counterintuitive phenomenon: their accuracy in answering the fundamental question of whether two nodes are linked at a given time is only around 50%. Since recognizing edge existence is one of the most basic graph understanding tasks, this shortcoming undermines critical graph operations such as triadic closure detection and degree calculation. We define this phenomenon as the hallucination problem in dynamic graphs. In response to these challenges, we introduce two novel mechanisms. First, the Existence Fine-tuning Mechanism explicitly trains LLMs to recognize and retain the persistent presence of edges in dynamic graphs. Second, the Edge Reduction Mechanism enforces a strategy that decomposes complex reasoning tasks into simpler, more reliable edge existence queries. To further enhance reasoning capabilities in smaller LLMs, we propose a Teacher-Forcing Distillation Strategy that leverages high-quality decomposition strategies generated by a larger model. Extensive experiments on multiple models first demonstrate that hallucination is common in models of around 7B parameters. Further experiments also demonstrate that our methods substantially mitigate hallucinations and achieve state-of-the-art performance. In summary, our findings reveal a critical limitation in current LLMs' handling of dynamic graphs and provide a robust framework and effective solutions for advancing dynamic graph reasoning capabilities in these models. All code can be accessed via this link (https://anonymous.4open.science/r/DyG-Hallucination-6CCB).
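To make the decomposition idea concrete, the sketch below (not the paper's implementation; all names are illustrative) shows how a complex task such as triadic closure detection on a dynamic graph can be reduced to a sequence of simple edge-existence queries, the kind of atomic question the Edge Reduction Mechanism targets.

```python
# Illustrative sketch, assuming a dynamic graph represented as a set of
# timestamped undirected edges (u, v, t). The helper names are hypothetical.
from itertools import combinations

# Toy dynamic graph: edges appearing at timestamps 0, 1, 2.
edges = {(1, 2, 0), (2, 3, 1), (1, 3, 2), (3, 4, 2)}

def edge_exists(u, v, t):
    """The atomic query: are u and v linked at time t?"""
    return (u, v, t) in edges or (v, u, t) in edges

def edge_ever_exists(u, v, t_max):
    """Has an edge (u, v) appeared at any time up to t_max?"""
    return any(edge_exists(u, v, t) for t in range(t_max + 1))

def closed_triads(nodes, t_max):
    """Detect closed triads by reducing the task to pairwise existence checks."""
    return [
        (a, b, c)
        for a, b, c in combinations(sorted(nodes), 3)
        if edge_ever_exists(a, b, t_max)
        and edge_ever_exists(b, c, t_max)
        and edge_ever_exists(a, c, t_max)
    ]

print(closed_triads({1, 2, 3, 4}, t_max=2))  # -> [(1, 2, 3)]
```

The point of the reduction is that each `edge_exists` call is exactly the basic question on which the paper reports ~50% LLM accuracy, so improving that primitive (e.g., via existence fine-tuning) directly improves every composite task built on top of it.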

Original language: English
Title of host publication: Neural Information Processing - 32nd International Conference, ICONIP 2025, Proceedings
Editors: Tadahiro Taniguchi, Chi Sing Andrew Leung, Tadashi Kozuno, Junichiro Yoshimoto, Mufti Mahmud, Maryam Doborjeh, Kenji Doya
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 109-123
Number of pages: 15
ISBN (Print): 9789819540877
State: Published - 2026
Event: 32nd International Conference on Neural Information Processing, ICONIP 2025 - Okinawa, Japan
Duration: 20 Nov 2025 - 24 Nov 2025

Publication series

Name: Communications in Computer and Information Science
Volume: 2753 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 32nd International Conference on Neural Information Processing, ICONIP 2025
Country/Territory: Japan
City: Okinawa
Period: 20/11/25 - 24/11/25

Keywords

  • Dynamic graph
  • Hallucination
  • Large Language Model
  • Prompt
