Sunday, 15 March 2026

Hallucination and the Semiotics of AI: 3 — When Coherence Misaligns with Context

In the previous post, hallucination was examined through the lens of the cline of instantiation. Large language models approximate a probabilistic meaning potential, and prompts constrain that potential in order to produce a particular textual instance. When the prompt fails to provide sufficient constraint, the system still moves from potential to instance by selecting the most probable semantic trajectory available within its learned network. The result may be a coherent text that is nonetheless misaligned with the situation the user intended.
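The movement from potential to instance described here can be caricatured in a few lines of code. The following sketch is purely illustrative — the continuations, probabilities, and function names are invented for this post, not drawn from any real model — but it shows the essential point: even when the prompt leaves the choice underdetermined, the system still commits to the single most probable trajectory.

```python
# Toy sketch (not a real language model): instantiation as constrained
# selection from a learned probability distribution over continuations.
# All names and probabilities below are illustrative assumptions.

def instantiate(potential, ruled_out):
    """Select the most probable continuation the prompt has not excluded.

    potential -- dict mapping candidate continuations to probabilities
    ruled_out -- set of continuations the prompt's constraints exclude
    """
    allowed = {c: p for c, p in potential.items() if c not in ruled_out}
    # Even with no discriminating constraint, the system still moves
    # from potential to instance: it picks the most probable option.
    return max(allowed, key=allowed.get)

potential = {
    "discussion of the novel": 0.6,
    "summary of the TV episode": 0.3,
    "analysis of the pun": 0.1,
}
print(instantiate(potential, ruled_out=set()))
# -> "discussion of the novel"
```

Note that an underconstrained prompt (an empty `ruled_out` set) does not produce an error or a refusal; it produces a confident selection — which is exactly the behaviour the post goes on to examine.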

To understand this phenomenon more precisely, it is useful to consider another foundational concept from systemic functional linguistics: stratification.

Language is organised into strata that relate meaning to expression and to context. At the centre of the system lies semantics, the level at which meanings are organised. These meanings are expressed through lexicogrammar, the patterns of wording that give semantic selections their textual form. Above semantics lies context, which provides the situational environment within which meanings are construed.

These strata are not independent layers stacked on top of one another. They are related through realisation: lexicogrammar realises semantics, and semantics in turn realises context. When language functions successfully, selections at each stratum align with one another. The resulting text is not only internally coherent but also appropriate to the situation in which it occurs.

Hallucination can be understood as a breakdown in this alignment.

Importantly, the breakdown does not typically occur within the text itself. Large language models are extremely good at producing lexicogrammatical sequences that are well-formed in wording and coherent in meaning. The words fit together, the sentences follow one another logically, and the text may even display stylistic sophistication.


The misalignment occurs higher up the system, in the relation between the meanings constructed by the text and the context the user intended to invoke.

Consider again the prompt “Wuthering Heist.” If the model fails to recognise this phrase as the title of an episode of Inside No. 9, it may instead interpret the phrase as a distorted reference to Wuthering Heights. Once that interpretation is adopted, the rest of the response may unfold smoothly. The model can easily generate a coherent discussion of the novel: its themes, characters, historical context, and critical reception.

From the perspective of lexicogrammar and semantics, the text may be entirely successful. Each clause follows naturally from the previous one, and the meanings combine into a coherent exposition. Yet the contextual construal is wrong. The system has anchored the text to the wrong situation.

What we see here is a form of contextual misalignment. The semantic resources of the language system are functioning perfectly well, but they are realising a context different from the one intended by the user.

This observation highlights an important point: hallucination is not primarily a textual failure. It is a failure of situational anchoring.

The model must always interpret the prompt in order to determine which contextual configuration is being invoked. In systemic functional terms, this means construing the situation in terms of the variables of field, tenor, and mode. The prompt acts as a set of cues from which the system attempts to infer what kind of situation the user has in mind.

When those cues are ambiguous or unfamiliar, the model resolves the uncertainty by activating the most probable contextual pattern available within its learned experience of language use. In effect, the system selects a familiar situation type and generates a text appropriate to that situation.
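This resolution step can also be sketched in code. The example below is a deliberately crude caricature: the situation types, cue sets, and familiarity weights are invented for illustration, and real models do nothing so discrete. But it captures the dynamic described above — when the cues are ambiguous, a highly familiar situation type can outcompete a less familiar one even when the latter matches the prompt more exactly.

```python
# Toy sketch: construing a situation from prompt cues by weighting
# cue overlap against familiarity. Situation types, cues, and
# familiarity values are invented assumptions, not model internals.

SITUATIONS = {
    "literary exposition (Wuthering Heights)": {
        "cues": {"wuthering", "heights"},
        "familiarity": 0.9,  # heavily represented in training text
    },
    "TV episode discussion (Inside No. 9)": {
        "cues": {"wuthering", "heist"},
        "familiarity": 0.1,  # comparatively rare
    },
}

def construe(prompt_cues):
    """Pick the situation type with the highest familiarity-weighted
    overlap between its cues and the cues in the prompt."""
    def score(name):
        overlap = len(prompt_cues & SITUATIONS[name]["cues"])
        return overlap * SITUATIONS[name]["familiarity"]
    return max(SITUATIONS, key=score)

print(construe({"wuthering", "heist"}))
# -> "literary exposition (Wuthering Heights)"
```

Under these invented weights, the novel wins (overlap 1 × 0.9 = 0.9) over the episode (overlap 2 × 0.1 = 0.2) despite the episode matching both cues — a miniature of the "Wuthering Heist" misconstrual: the familiar situation type captures the ambiguous prompt.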

The result may be a perfectly coherent text that nonetheless belongs to the wrong context.

This is why hallucinations can sometimes appear so convincing. The language system is operating smoothly within its own internal constraints. The meanings are plausible, the wording is fluent, and the discourse structure is well-formed. The difficulty lies in the relation between that internally coherent text and the external situation it is supposed to address.

Seen from this perspective, hallucination reveals something fundamental about meaning-making. Coherence within the linguistic system does not guarantee correspondence with the world. Language can easily produce texts that are internally consistent while being situationally misaligned.

Generative AI simply makes this property of language unusually visible.

When humans encounter unfamiliar expressions, we too tend to interpret them by assimilating them to patterns we already recognise. Misunderstandings arise when the pattern we activate does not match the speaker’s intention. Normally such misalignments are quickly repaired through further interaction. A listener asks for clarification, the speaker restates the expression, and the shared construal of the situation is gradually stabilised.

Large language models operate under more constrained conditions. They must produce a coherent response immediately, often on the basis of minimal contextual cues. When the cues are insufficient to stabilise the contextual construal, the system selects the nearest available pattern and proceeds as if that were the intended situation.

The resulting hallucination is therefore best understood not as a failure of language generation, but as a misalignment between strata. Lexicogrammar successfully realises semantics, and semantics successfully constructs a coherent meaning. What fails is the alignment between those meanings and the context the user intended to invoke.

In the next post, the analysis will move beyond the architecture of language itself and consider the phenomenon from a broader perspective. If hallucination arises from the interaction between prompts, models, texts, and contexts, then the phenomenon may be understood more deeply as a relational misfit—a misalignment within the network of relations that constitute meaning.
