Hallucination as Disinformation: The Role of LLMs in Amplifying Conspiracy Theories and Fake News
Abstract
Hallucinated output from large language models (LLMs) can serve as a potent source of disinformation in online ecosystems. Recent advances in neural architectures have enabled the generation of highly coherent text that untrained readers often find difficult to distinguish from verified information. Hallucinations, which emerge when models generate content that is not grounded in factual data, can blend seamlessly with material from legitimate sources, posing a risk of amplifying conspiracy theories and other forms of fake news. These inaccuracies are not confined to trivial mistakes; they can reflect biases present in training data or arise from interpretative gaps in the language modeling process. Exacerbating the problem is the speed with which LLM-generated narratives can propagate across social media platforms and digital news outlets. Users may unknowingly share fabricated claims that appear credible because of their polished language and plausible, context-driven detail. This paper examines hallucination as disinformation, focusing on how it contributes to the spread of conspiracy theories and false narratives. Emphasis is placed on the technical mechanisms that facilitate the generation of such content, including attention-based partial matching and unsupervised pattern formation. An analytical framework is presented to illustrate how hallucinated outputs feed into virulent information loops, transforming marginal ideas into seemingly robust arguments that challenge established knowledge.
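To make the underlying mechanism concrete, the minimal sketch below (not drawn from the paper; the model name "gpt2", the prompt, and the sampling parameters are illustrative assumptions) shows how an autoregressive language model continues a prompt purely from learned co-occurrence patterns. No step in this pipeline verifies the claims the model asserts, which is the gap through which fluent but fabricated text can emerge.

    # Minimal sketch, assuming the Hugging Face transformers library and the
    # publicly available "gpt2" checkpoint; prompt and decoding settings are
    # illustrative, not taken from the paper.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Newly released documents reveal that"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Sampling optimizes for fluent continuations of the prompt; nothing here
    # checks the generated statements against any factual source.
    outputs = model.generate(
        **inputs,
        max_new_tokens=60,
        do_sample=True,
        top_p=0.95,
        temperature=0.9,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Running such a snippet repeatedly yields different, equally confident continuations, which illustrates why ungrounded generation can read as credible reporting.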
License
Copyright (c) 2024 author

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.