In the previous post, hallucination was examined as a misalignment between strata within the semiotic architecture of language. Large language models can produce texts that are lexicogrammatically well-formed and semantically coherent while still failing to align with the contextual situation the user intended. The problem does not lie within the text itself, but in the relation between the meanings construed by the text and the context to which those meanings are supposed to apply.
This observation already points beyond linguistics toward a more general ontological question: what kind of phenomenon is a hallucination?
The common intuition is that hallucination represents a failure of representation. The model is thought to be producing statements that do not correspond to reality. Yet this framing assumes that meaning consists primarily in the accurate representation of an external world. From a relational perspective, the situation appears rather different.
In a relational ontology, meaning does not arise from the correspondence between symbols and objects. It arises from relations of construal. A phenomenon comes into being when a particular construal stabilises within a network of relations: between observers, symbolic systems, and the situations those systems make available to experience.
Seen in this light, hallucination is not a mysterious production of false representations. It is a case in which the network of relations that normally stabilises meaning fails to align.
To see how this works, consider the situation in which a user asks an AI system about “Wuthering Heist.” The prompt introduces a phrase that the user intends as a reference to an episode of Inside No. 9. Yet the language model may instead interpret the phrase as a distorted reference to Wuthering Heights. From that point onward, the system may generate a coherent explanation of the novel.
From a representational perspective, the model has simply produced an incorrect answer. But a relational analysis reveals a more interesting structure. Several distinct relations are involved in the event:
- the relation between the user and the prompt
- the relation between the prompt and the model’s learned meaning potential
- the relation between the generated text and the contextual pattern it activates
- the relation between the text and the user’s intended situation
Hallucination occurs when these relations fail to converge on the same construal.
The prompt activates one region of the model’s meaning potential, while the user intends another. The model then produces a text that is internally coherent relative to its own construal, but misaligned with the situation the user had in mind. The resulting text appears erroneous not because it lacks coherence, but because the relational alignment that would stabilise the intended meaning never occurs.
In this sense, hallucination can be understood as a relational misfit.
The phenomenon does not reside solely within the AI system. It emerges from the interaction between multiple elements: the user’s prompt, the model’s probabilistic meaning potential, the textual instance generated by the system, and the contextual situation the user intended to invoke. Meaning arises only when these relations converge on a shared construal. When they do not, hallucination becomes visible.
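To make this structure concrete, here is a deliberately toy sketch in Python. It does not model how a language model actually works; the Construal class and the aligned check are illustrative stand-ins for the relations listed above, invented purely for this post.

```python
from dataclasses import dataclass

# Toy model only: a "construal" is reduced to a label for the situation
# a participant takes the text to be about, and "alignment" to label
# equality. None of these names belong to any real system.

@dataclass(frozen=True)
class Construal:
    situation: str  # the contextual situation being construed

def aligned(*construals: Construal) -> bool:
    """Meaning stabilises only when every relation converges on one construal."""
    return len({c.situation for c in construals}) == 1

user_intent = Construal("the Inside No. 9 episode 'Wuthering Heist'")
model_reading = Construal("Emily Brontë's novel Wuthering Heights")
generated_text = Construal("Emily Brontë's novel Wuthering Heights")

# Internally, the text is perfectly coherent with the model's construal...
print(aligned(model_reading, generated_text))               # True
# ...but the relation to the user's intended situation never aligns.
print(aligned(user_intent, model_reading, generated_text))  # False
```

The point of the toy is only that the two checks attach to different relations: the first captures the internal coherence of the generated text, the second the relational misfit in which the hallucination actually consists.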
This perspective also explains why hallucinations often feel strangely convincing. Within the relational structure internal to the generated text, everything may appear perfectly consistent. The semantic patterns align with one another, the discourse unfolds logically, and the explanation may even display considerable rhetorical sophistication. The difficulty lies not within that internal coherence, but in the relation between that coherence and the situation to which the text is supposed to refer.
Relational ontology therefore reframes the phenomenon in a subtle but important way. The hallucination is not located “inside” the AI. It is located in the misalignment of relations that normally stabilise meaning between participants, texts, and situations.
This shift in perspective has a practical implication. If hallucination arises from relational misalignment, then attempts to eliminate hallucination entirely may be misguided. What matters is not the elimination of generative uncertainty, but the design of systems and interactions that help stabilise the relevant relations: clearer prompts, richer contextual cues, mechanisms for clarification and repair.
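As a sketch of what such a repair mechanism might look like, consider the following Python fragment. The interpret function and its confidence scores are invented for illustration, not an existing API; what matters is only the shape of the interaction: when competing construals are close, surface the ambiguity rather than committing to one.

```python
# Minimal sketch of a clarification-and-repair move, under invented
# assumptions: interpret() and its confidence scores are hypothetical
# placeholders for whatever disambiguation a real system performs.

def interpret(prompt: str) -> list[tuple[str, float]]:
    # Hypothetical: candidate construals of the prompt, with confidences.
    return [
        ("Emily Brontë's novel Wuthering Heights", 0.52),
        ("the Inside No. 9 episode 'Wuthering Heist'", 0.48),
    ]

def respond(prompt: str, margin: float = 0.2) -> str:
    candidates = sorted(interpret(prompt), key=lambda c: c[1], reverse=True)
    (best, top), (_, runner_up) = candidates[0], candidates[1]
    if top - runner_up < margin:
        # Repair move: surface the competing construals instead of
        # committing to one and generating a fluent but misaligned text.
        options = " or ".join(label for label, _ in candidates)
        return f"Did you mean {options}?"
    return f"[answer generated under the construal: {best}]"

print(respond("Tell me about Wuthering Heist"))
```

The design choice mirrors the relational account: low convergence between candidate construals is treated as a signal to repair the relation with the user, rather than as noise to be smoothed over by fluent text.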
Generative AI, in this respect, reveals something fundamental about meaning itself. Meaning does not exist independently within symbols or minds. It emerges from the relations through which symbolic systems are brought into alignment with the situations they are used to construe.
Hallucination is what we observe when that alignment fails.
The final post in this series will consider the implications of this perspective. If hallucination is not simply a technical defect but a structural feature of generative meaning systems, then the goal of AI design cannot be to eliminate hallucination altogether. The more realistic challenge is to develop systems capable of recognising when the coherence of a generated text has outrun the grounding that would stabilise its meaning.