Thursday, 29 January 2026

Almost Meaning: Three LLM Exchanges and a Category Error

This post stages a small pedagogical experiment. We will look at a few ordinary interactions with a large language model. They will look like meaning. We will resist explaining them until the end.


Exchange One: Apparent Understanding

Prompt:
Explain, in simple terms, why people say that probability is not the same as meaning.

Response:
People say probability is not the same as meaning because probability deals with how often things happen, while meaning involves intention, interpretation, and understanding. Something can be statistically likely without being meaningful, and something meaningful doesn’t need to be frequent. Probability describes patterns; meaning concerns what those patterns signify to people.


Exchange Two: Apparent Context Sensitivity

Prompt:
That sounds abstract. Explain it again, but this time as if you were speaking to linguistics students who work with corpora and frequency data.

Response:
For corpus linguistics, probability shows up as frequency and distribution: which forms occur often, in which contexts, and with what co-text. These patterns are extremely useful, but they don’t themselves constitute meaning. Meaning emerges when speakers and writers use those resources in context, not from the frequencies alone. Corpora tell us what tends to happen, not what a particular instance means.


Exchange Three: Apparent Answerability

Prompt:
But you just said “meaning emerges when speakers use resources in context.” Isn’t that still suggesting probability somehow produces meaning?

Response:
You’re right to press on that. Probability doesn’t produce meaning on its own. What I meant is that speakers draw on probabilistically shaped resources when they make meaning. The probabilities reflect past usage, but the act of meaning happens when someone selects and uses those resources in a specific situation.


A Pause

At this point, many readers will feel that something like meaning has occurred. The model seems to understand the issue, track context, correct itself, and answer a challenge. Let’s slow down.


What Has Not Happened

  • No act has been performed.

  • No situation has been experienced.

  • No answerable commitment has been made.

  • No responsibility has been incurred.

Nothing has happened in the first-order sense. There has been no instantiation of meaning.


What Has Happened

  • Patterns of language use have been continued.

  • Conditional probabilities have been applied.

  • Distributional regularities have been replayed.

  • Past instantiations have left traces that shape the output.

What we are observing is second-order structure: the grain of instantiation rendered visible through fluent simulation.
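The list above can itself be staged. Here is a minimal sketch in which a toy bigram model stands in for an LLM's next-token distribution; the corpus, function name, and seed are illustrative, not a claim about any actual system. Everything the function does is captured by the list: it continues patterns, applies conditional probabilities, replays regularities left by past use.

```python
import random
from collections import defaultdict, Counter

# Toy corpus: traces of past instantiations of language use.
corpus = ("probability is not meaning . meaning is not frequency . "
          "probability describes patterns").split()

# Estimate conditional probabilities P(next word | current word)
# from the distributional regularities of the corpus.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def continue_pattern(word, length=5, seed=0):
    """Continue a sequence by sampling from conditional probabilities.

    Nothing here selects, answers, or means anything: the function
    only replays frequencies accumulated from past usage.
    """
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        counts = bigrams[out[-1]]
        if not counts:          # no recorded continuation: the pattern runs out
            break
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_pattern("probability"))
```

The output will read as fluent continuation, yet every word is a weighted replay of the corpus. Scaled up by many orders of magnitude, this is the second-order structure the exchanges above exhibit.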


The Category Error

The temptation is to treat this second-order patterning as first-order meaning — to let probability lean into agency. This is the “almost” error: mistaking patterned continuation for an act, simulation for answerability.

Nothing has selected. Nothing has answered. Nothing has meant.


The Relational Cut

Meaning occurs only in first-order acts: answerable instantiations made within a relational cut. Probability, frequency, context-conditioning, and fluency all survive those acts as traces. They never perform them.

The distinction is ontological, not technical.


Why This Matters

LLMs are pedagogically invaluable precisely because they expose this temptation so clearly. They can simulate understanding, context sensitivity, and even self-correction — but they never cross the cut into meaning.

Probability survives acts. It never performs them.

This post continues the Grain of Instantiation series by staging, rather than merely asserting, the ontological distinction between pattern and act.
