Thursday, 9 April 2026

Meaning Without Construal: AI Under Constraint — 4 No Interpreter Inside: Why Internal States Do Not Construe

Large language models are often described as containing:

  • internal representations,
  • latent spaces,
  • embeddings of meaning,
  • or distributed semantic structure.

Even when these terms are used cautiously, they suggest:

that something like meaning exists within the system.

The claim is rarely explicit.

But it takes a familiar form:

  • the model does not merely produce patterns,
  • it organises internal states that correspond to meanings.

From this, a conclusion follows:

meaning is present internally, even if not directly observable.


1. What Internal States Actually Are

At a technical level, LLMs consist of:

  • layers of transformations,
  • weight matrices,
  • activation patterns,
  • and vector representations.

These internal states:

  • change dynamically during processing,
  • reflect learned statistical structure,
  • and influence output generation.

They are:

  • complex,
  • high-dimensional,
  • and highly structured.

But they are still:

configurations of transformation.
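
The point can be made concrete with a minimal sketch (pure Python, hypothetical toy weights standing in for billions of learned parameters): an "internal state" is nothing over and above the numbers produced by applying fixed weight matrices to an input vector.

```python
def matvec(W, x):
    """Multiply a weight matrix W by a vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    """Elementwise nonlinearity."""
    return [max(0.0, a) for a in v]

# Two layers of "learned" weights (toy values for illustration).
W1 = [[0.5, -1.0], [1.5, 0.25]]
W2 = [[1.0, 2.0], [-0.5, 0.5]]

x = [1.0, 2.0]              # input vector (e.g. a token embedding)
h = relu(matvec(W1, x))     # hidden "internal state": just a list of floats
y = matvec(W2, h)           # output activations

print(h)   # the entire internal state at this layer
print(y)
```

However many layers are stacked, nothing appears in the computation except further lists of numbers produced by further transformations.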


2. The Temptation of Representation

Because these states are structured, they are often interpreted as:

  • representing concepts,
  • encoding meanings,
  • or capturing semantic relations.

For example:

  • similar words cluster in vector space,
  • analogical relations can be computed,
  • latent dimensions appear interpretable.

This suggests:

that meaning is “inside” the model, in encoded form.
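
The clustering observation above can be sketched directly (hypothetical hand-set vectors, not learned ones): "similar words cluster" means only that their vectors have high cosine similarity. The measurement is pure geometry.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors: a geometric quantity."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy vectors chosen so that "cat" and "dog" point in similar directions.
vectors = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

print(cosine(vectors["cat"], vectors["dog"]))  # high: the words "cluster"
print(cosine(vectors["cat"], vectors["car"]))  # low: they do not
```

Everything the clustering claim reports is contained in such numbers; whether the numbers amount to encoded meaning is exactly what the next section disputes.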


3. Mapping Is Not Meaning (Again)

These observations establish:

  • correlations between internal structure and linguistic patterns.

They show that:

  • the model’s internal organisation reflects regularities in language use.

But they do not establish:

that the model construes anything as anything.

Once again:

  • mapping is not meaning,
  • structure is not content.

An internal vector may correlate with “cat,”

but it is not:

the concept of a cat,
nor something taken as a cat.
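
One way to see this, sketched with toy vectors: the internal geometry is unchanged under arbitrary relabelling of the vocabulary. Nothing inside the numbers ties a vector to "cat" rather than to any other label.

```python
import math

def cosine(u, v):
    """Cosine similarity: depends only on the numbers, not on any label."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

original  = {"cat": [0.9, 0.8], "dog": [0.8, 0.9]}
relabeled = {"tove": [0.9, 0.8], "borogove": [0.8, 0.9]}  # same numbers, nonsense names

# The internal structure is identical under both labellings.
print(cosine(original["cat"], original["dog"]))
print(cosine(relabeled["tove"], relabeled["borogove"]))
```

The labels live in our description of the system, not in the system; the correlation with "cat" is something we establish from outside.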


4. No “As” in the System

Meaning requires:

an “as”-relation—something is taken as something.

Internal states, however:

  • do not differentiate between sign and object,
  • do not establish aboutness,
  • do not organise interpretation.

They participate in:

  • transformations from input to output,

not in:

construal.

There is no level within the system where:

  • tokens are interpreted as referring to entities,
  • or relations are organised as meaning.

5. The Homunculus Problem

The idea that internal states contain meaning often implies:

  • an internal interpreter,
  • a subsystem that “reads” representations,
  • a locus where meaning is realised.

But this leads to a regress:

  • if a representation needs an interpreter,
  • then that interpreter must itself interpret.

And so on.

Ecological and enactivist approaches rejected this.

But LLM discourse often reintroduces it implicitly:

meaning is inside, but nowhere in particular.


6. Distributed Does Not Solve It

To avoid the homunculus, meaning is often said to be:

  • distributed across the network,
  • emergent from the whole system,
  • not localised in any single component.

This avoids the problem of a localised interpreter.

But it does not introduce:

construal.

A distributed pattern is still:

  • a pattern.

It does not become meaning simply by being:

  • spread out,
  • dynamic,
  • or complex.

7. Internal States as Constraint

Under constraint, internal states can be understood precisely:

they are configurations that constrain how inputs are transformed into outputs.

They:

  • encode statistical structure,
  • shape generation,
  • and enable flexibility.

But they do not:

  • interpret,
  • construe,
  • or mean.
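
This constraining role can be sketched with toy numbers: an internal state fixes a probability distribution over possible continuations. It biases generation numerically; it does not interpret any token.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical scores produced by an internal state for three candidate tokens.
vocab = ["mat", "moon", "quasar"]
logits = [2.0, 0.5, -1.0]

probs = softmax(logits)

# The state has shaped which continuation is likely -- and nothing more.
print(dict(zip(vocab, [round(p, 3) for p in probs])))
```

The distribution constrains what the system will produce next; no step in its computation takes any token as being about anything.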

8. No Hidden Layer of Meaning

There is no hidden layer where:

  • meaning “really” resides, waiting to be uncovered by better analysis.

All layers of the system:

  • participate in transformation,
  • not in semiosis.

Meaning is not:

  • latent,
  • implicit,
  • or encoded internally.

Because:

encoding is not construal.


9. Reframing the Model Again

We can now state:

the model contains structured internal states that enable coherent language generation, but these states do not constitute meaning.

They are:

  • necessary for performance,
  • but insufficient for semiosis.

Closing Formulation

There is no interpreter inside the system.

Internal states constrain transformation—
they do not construe.

No representation, whether local or distributed,
becomes meaning simply by existing within the model.

Meaning requires an “as”-relation.

And that relation is not found in the internal dynamics of the system.


At this point, all internal attributions have been removed:

  • structure ≠ meaning
  • use ≠ meaning
  • internal states ≠ meaning

What remains is the only place left where meaning might appear:

in the interaction between system and user.


Next Post

“Coupling Without Construal: Where Meaning Actually Occurs”
