Sunday, 15 March 2026

Hallucination and the Semiotics of AI: 2 — Meaning Potential and the Cline of Instantiation

In the previous post, hallucination was reframed as a consequence of how generative language systems produce coherent text. Rather than retrieving stored facts, such systems generate plausible continuations of meaning. When the system lacks a stable referential anchor, it may still produce a coherent construal by completing the nearest available semantic pattern.

To understand why this happens, it is useful to examine the phenomenon through one of the central concepts of systemic functional linguistics: the cline of instantiation.

Language, in this framework, is not a fixed inventory of sentences. It is a meaning potential—a structured set of possibilities that can be actualised in particular instances of use. Every time someone speaks or writes, they draw on this potential to produce a specific text suited to a particular situation.

Halliday describes the relation between the potential of the system and the particularity of individual texts as a cline. At one end lies the full potential of the language: the vast network of semantic and grammatical options available to speakers. At the other end lies the individual instance: the specific text that is produced in a particular moment of communication.

Between these poles lie intermediate levels of generalisation. Recurrent patterns of meaning associated with particular types of situation form registers—configurations of semantic tendencies that are likely to occur in particular contexts. From the perspective of potential, a register appears as a subpotential within the broader meaning system. From the perspective of instance, the same pattern appears as a type of text that tends to recur in similar situations.

The crucial point is that meaning-making always involves movement along this cline. A speaker begins with the resources of the meaning potential and progressively selects options that narrow the possibilities until a specific instance of text is produced.

Large language models operate in a strikingly similar way. During training, the system is exposed to immense quantities of language data. From this data it learns statistical relationships between words, phrases, and larger semantic configurations. The result is not a database of facts, but a probabilistic approximation of the language’s meaning potential.
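This idea can be made concrete with a deliberately crude analogy. The sketch below estimates a toy bigram model from a miniature invented corpus; it is nothing like a production language model, but it shows the essential point that what training yields is a set of conditional probability distributions over continuations, not a store of retrievable facts.

    # A minimal sketch, not a real LLM: approximating a "meaning potential"
    # as conditional probabilities estimated from a toy corpus.
    from collections import Counter, defaultdict

    corpus = (
        "wuthering heights is a novel by emily bronte . "
        "wuthering heights is a classic novel . "
        "the heist film is a caper ."
    )

    # Count how often each word follows each preceding word.
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

    def next_word_distribution(prev):
        """The learned 'potential': a distribution over possible continuations."""
        total = sum(counts[prev].values())
        return {word: n / total for word, n in counts[prev].items()}

    print(next_word_distribution("wuthering"))
    # {'heights': 1.0}  (the potential strongly favours one continuation)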

When a user enters a prompt, the model must produce a specific textual instance. In effect, the prompt functions as a constraint on the system’s meaning potential. The model interprets the prompt, activates relevant patterns within its learned network, and begins generating a continuation that remains statistically consistent with those patterns.

Under favourable conditions, the prompt sufficiently constrains the system’s trajectory through the meaning potential. The generated text converges on an interpretation that aligns with the intended situation.

But when the prompt is ambiguous, incomplete, or unfamiliar, the constraints are weaker. The system must still move from potential toward instance—it must still produce a text. In doing so, it selects the most probable pathway available within its learned semantic network.
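Continuing the toy sketch above, the movement from potential to instance can be imitated by greedy decoding: at every step the system follows the highest-probability pathway that its learned network makes available. Real models decode over subword tokens with far richer context, but the structural logic is the same.

    # Moving from potential to instance: repeatedly select the most
    # probable continuation (greedy decoding over the toy bigram model).
    def generate(start, steps=6):
        text = [start]
        for _ in range(steps):
            dist = next_word_distribution(text[-1])
            if not dist:  # no learned continuation at all
                break
            # The instantiation step: take the highest-probability pathway.
            text.append(max(dist, key=dist.get))
        return " ".join(text)

    print(generate("wuthering"))
    # "wuthering heights is a novel by emily"  (a coherent instance,
    # whether or not it matches what the user actually meant)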

At this point, hallucination becomes possible.

Consider again the prompt “Wuthering Heist.” If the model does not recognise this phrase as referring to an episode of Inside No. 9, it must still interpret the input. Within its meaning potential, the closest stabilised configuration may be the one anchored to the culturally prominent Wuthering Heights. The system therefore follows a high-probability trajectory through that pattern and generates a coherent explanation of the novel.
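As a rough caricature only (a real model resolves inputs in a continuous semantic space, not by matching strings), the fallback can be illustrated with fuzzy matching: an unfamiliar prompt is construed as the nearest stabilised pattern, and the completion of that pattern is offered as the response.

    # A toy caricature of nearest-pattern completion; the patterns and the
    # matching method are illustrative assumptions, not how LLMs work inside.
    import difflib

    known_patterns = {
        "wuthering heights": "a novel by Emily Bronte",
        "the italian job": "a heist film",
    }

    def construe(prompt):
        # Find the closest stabilised configuration to the prompt.
        match = difflib.get_close_matches(prompt.lower(), list(known_patterns), n=1)
        if match:
            # Complete the nearest pattern rather than signal that the
            # exact referent is unknown.
            return f"'{prompt}' is {known_patterns[match[0]]}"
        return "no sufficiently close pattern"

    print(construe("Wuthering Heist"))
    # "'Wuthering Heist' is a novel by Emily Bronte"  (coherent, but misaligned)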

From the perspective of the model’s internal probabilities, the response may be entirely reasonable. The system has moved from potential to instance along a plausible semantic pathway. The hallucination arises because that pathway does not correspond to the intended situation introduced by the user.

Seen through the lens of the cline of instantiation, hallucination is therefore not simply an error at the level of the generated text. The text itself may be perfectly well-formed and semantically coherent. The problem lies in the relation between the instance produced by the model and the contextual constraints that were meant to guide its selection.

In other words, the system has actualised an instance that is internally plausible within its meaning potential but externally misaligned with the situation.

This perspective clarifies why hallucinations are not easily eliminated. The model is designed to move from potential to instance by selecting high-probability patterns. If the prompt fails to sufficiently constrain that process, the system will still generate the most plausible continuation available to it. The alternative would be for the system to produce nothing at all whenever uncertainty arises—a behaviour very different from that expected of conversational language systems.
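That hypothetical alternative can itself be sketched. Extending the toy model above, a decoder might abstain whenever the distribution over continuations is too flat, measured here by its entropy; the threshold is an invented parameter, and the point is only that such a system would often fall silent where a conversational system is expected to answer.

    # Sketch of the abstaining alternative: halt whenever the potential
    # does not strongly favour any continuation. The threshold is illustrative.
    import math

    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    def generate_or_abstain(start, max_entropy=0.5, steps=6):
        text = [start]
        for _ in range(steps):
            dist = next_word_distribution(text[-1])
            if not dist or entropy(dist) > max_entropy:
                return None  # too uncertain: produce nothing at all
            text.append(max(dist, key=dist.get))
        return " ".join(text)

    # In the toy corpus, "a" is followed by three equally likely words
    # (entropy of about 1.58 bits), so the system abstains instead of guessing.
    print(generate_or_abstain("wuthering"))  # None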

What generative AI reveals, perhaps more clearly than ever before, is the fundamentally probabilistic nature of language itself. Meaning potentials do not determine a single correct outcome. They structure a field of possibilities within which particular instances are more or less likely to occur.

Hallucination, in this sense, is what happens when the probabilistic logic of the meaning potential is allowed to determine the instance in the absence of sufficient contextual grounding.

In the next post, the analysis will shift from the cline of instantiation to another central concept in systemic functional linguistics: stratification. There we will see that hallucination can also be understood as a misalignment between strata—specifically, between the semantic and lexicogrammatical resources that produce a coherent text and the contextual construal that anchors that text to a particular situation.
