Hallucination as Disinformation: The Role of LLMs in Amplifying Conspiracy Theories and Fake News

Authors

  • Chathura Bandara, Uva Wellassa University, Department of Computer Science, Passara, Badulla, Sri Lanka.

Abstract

Hallucinated output from large language models (LLMs) can serve as a potent source of disinformation in online ecosystems. Recent advances in neural architectures enable the generation of highly coherent text that untrained readers often cannot distinguish from verified information. Hallucinations, which arise when models generate content unaligned with factual data, can blend seamlessly with legitimate sources, risking the amplification of conspiracy theories and other forms of fake news. These inaccuracies are not confined to trivial mistakes; they can reflect biases present in training data or exploit interpretative gaps in the language-modeling process. The problem is exacerbated by the speed with which LLM-generated narratives propagate across social media platforms and digital news outlets. Users may unknowingly share fabricated claims that appear credible because of advanced linguistic features and context-driven plausible detail. This paper examines hallucination as disinformation, focusing on how it contributes to the spread of conspiracy theories and false narratives. Emphasis is placed on the technical mechanisms that facilitate the generation of such content, including attention-based partial matching and unsupervised pattern formation. An analytical framework is presented to illustrate how hallucinated outputs feed into virulent information loops, transforming marginal ideas into seemingly robust arguments that challenge established knowledge.

Published

2024-12-22

How to Cite

Hallucination as Disinformation: The Role of LLMs in Amplifying Conspiracy Theories and Fake News. (2024). Journal of Applied Cybersecurity Analytics, Intelligence, and Decision-Making Systems, 14(12), 65-76. https://sciencespress.com/index.php/JACAIDMS/article/view/14