
Large language model hallucination reduction strategy based on the rumor propagation mechanism

Original title: 基于谣言传播机制的大模型幻觉减轻策略
  • Jing He
  • Yiran Chen
  • Tianyu Dai
  • Beihang University
  • Nanchang University

Research output: Contribution to journal › Article › peer-review

Abstract

[Objective] The application of artificial intelligence models across various scenarios is rapidly expanding. However, their hallucination problem poses risks by increasing misleading outputs and reducing user trust in practical applications. This study aims to explore the propagation and control mechanisms of hallucinations in large language models (LLMs) and assess their impact on decision-making processes. [Methods] The SEIR rumor propagation dynamics model was used to test the susceptibility of three LLMs, namely GPT-4-Turbo, Claude-3, and Llama-3, using tasks involving legal text judgments. The baseline hallucination levels of these models in legal text analysis were established to evaluate their sensitivity and explore their propagation and control mechanisms. Given the high accuracy and expert judgment required in this area, the study used 1,500 real-world legal case texts and verdict results provided by the China Association of Artificial Intelligence and Legal Challenge. Through fuzzy and accurate testing, the analysis revealed the inherent hallucination tendencies of the models. Control experiments were designed to analyze the sensitivity of LLMs to disruptive text in legal analysis before and after content generation. The study also investigated how interacting with high-hallucination-rate models impacts susceptible models, potentially increasing their hallucination rates. To mitigate this, knowledge injection and prompt fine-tuning strategies were applied, simulating recovery processes through differential equations. The change in hallucination rates after multiple intervention rounds was calculated. Furthermore, this study evaluated the possibility of secondary hallucination infections when corrected hallucination models were exposed to new information. [Results] The three models effectively identified basic case information and key features but showed randomness and inaccuracies in quantifying penalties in specific legal cases.
They were significantly impacted by disruptive texts, highlighting the need for stronger mechanisms to enhance the robustness and accuracy of the models in handling misleading information. The interaction between different LLMs increased the hallucination rates of susceptible models. Knowledge injection and prompt fine-tuning strategies effectively reduced hallucination rates. Knowledge injection showed a notable impact during the initial intervention round, while prompt fine-tuning provided gradual improvements over consecutive rounds. However, after intervention, the model became more sensitive to new information, making it more susceptible to interference: once exposed to new information, its hallucination rate rapidly increased. [Conclusions] This study systematically investigated LLM transformations between different states. External intervention strategies such as knowledge injection and prompt fine-tuning were found to significantly reduce hallucination rates in short-term tasks. However, ensuring the long-term health and robustness of these models requires further research and optimization. Future efforts should focus on developing comprehensive management mechanisms to support the long-term stability and reliability of LLMs across diverse applications. By offering a new perspective on hallucination phenomena in LLMs, this study establishes a scientific foundation for developing more effective intervention methods, which are crucial for maintaining user trust and improving the quality of artificial intelligence decision-making.
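The SEIR dynamics described above can be sketched numerically. The following is a minimal illustration, not the paper's model: it integrates the standard SEIR equations with simple Euler steps, where the "infected" compartment stands for the fraction of hallucinating models, and an intervention such as knowledge injection is modeled (as an assumption) by a higher recovery rate γ. All parameter values are illustrative, not fitted values from the study.

```python
# Illustrative SEIR sketch of hallucination spread among interacting models.
# S: susceptible, E: exposed, I: hallucinating ("infected"), R: recovered.
# Parameter values are assumptions for demonstration, not the paper's fits.

def simulate_seir(beta, sigma, gamma, s0=0.99, e0=0.01, i0=0.0, r0=0.0,
                  dt=0.01, steps=5000):
    """Euler integration of:
       dS/dt = -beta*S*I
       dE/dt =  beta*S*I - sigma*E
       dI/dt =  sigma*E  - gamma*I
       dR/dt =  gamma*I
    Returns the final (S, E, I, R) and the peak hallucinating fraction."""
    s, e, i, r = s0, e0, i0, r0
    peak_i = i
    for _ in range(steps):
        ds = -beta * s * i
        de = beta * s * i - sigma * e
        di = sigma * e - gamma * i
        dr = gamma * i
        s += ds * dt
        e += de * dt
        i += di * dt
        r += dr * dt
        peak_i = max(peak_i, i)
    return s, e, i, r, peak_i

# Baseline contagion vs. an intervention (e.g. knowledge injection),
# modeled here as a higher recovery rate gamma.
base = simulate_seir(beta=0.8, sigma=0.5, gamma=0.2)
treated = simulate_seir(beta=0.8, sigma=0.5, gamma=0.6)
print(f"peak hallucinating fraction: baseline {base[4]:.3f}, "
      f"with intervention {treated[4]:.3f}")
```

Because the four derivatives sum to zero, the total population fraction is conserved at each step; raising γ lowers the basic reproduction number β/γ and hence the peak hallucinating fraction, mirroring the reported short-term effect of interventions.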

Original language: Chinese (Traditional)
Pages (from-to): 96-103
Number of pages: 8
Journal: Experimental Technology and Management
Volume: 42
Issue number: 2
State: Published - Feb 2025
