Abstract debates about AI, representation, and meaning often stall because the competing frameworks never touch the same ground. Each side redescribes different phenomena, answers different questions, and then talks past the other.
This post does something simpler and more demanding.
We will examine one ordinary AI interaction, twice:
- once through a representational ontology,
- once through a relational ontology.
What matters is not which description sounds more familiar, but where each framework is forced to carry its explanatory weight—and what it must quietly assume to do so.
The Interaction
A user asks an AI system:
“Can you summarise the central argument of this article in two paragraphs?”
That is all.
Just a successful, everyday interaction.
Part I: The Representational Explanation
Let us begin with the strongest representational account.
What Must Be Explained
The representational framework must account for:
- Apparent understanding of the article,
- Selection of relevant points,
- Coherent summarisation,
- Context-appropriate language use.
How It Explains It
The explanation proceeds as follows:
- **Internal representations:** The AI constructs internal representations of:
  - The article’s content,
  - Its argumentative structure,
  - The task (“summarise”).
- **Semantic processing:** These representations are manipulated according to rules or learned mappings that preserve meaning.
- **Output as expression:** The generated text expresses the content of those internal representations in linguistic form.
- **Success condition:** The output is successful because the internal representations accurately correspond to the source material (a toy sketch of this pipeline follows the list).
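To make the shape of this account concrete, here is a deliberately toy sketch of the four-step pipeline just listed. Everything in it is hypothetical: the data structure, the helper functions, and the placeholder "article" are illustrative stand-ins, not a claim about how any real system works.

```python
# Toy sketch of the representational story. All names are hypothetical stand-ins;
# no real model or library is implied. The point is where the explanatory weight
# sits: in the internal data structure and its correspondence to the source.
from dataclasses import dataclass, field


@dataclass
class ArticleRepresentation:
    """An explicit inner stand-in for the article's content and argumentative structure."""
    central_claim: str
    supporting_points: list[str] = field(default_factory=list)


def construct_representation(article_text: str) -> ArticleRepresentation:
    """Step 1: build internal representations of the content, structure, and task."""
    # Placeholder logic: treat the first sentence as the claim, the rest as support.
    sentences = [s.strip() for s in article_text.split(".") if s.strip()]
    return ArticleRepresentation(
        central_claim=sentences[0] if sentences else "",
        supporting_points=sentences[1:],
    )


def express(rep: ArticleRepresentation) -> str:
    """Steps 2-3: manipulate the representation and express it in linguistic form."""
    support = "; ".join(rep.supporting_points)
    return f"The article argues that {rep.central_claim.lower()}, noting that {support.lower()}."


# Step 4 (success condition): the summary counts as adequate, on this account,
# because the representation it expresses corresponds to the source material.
article = "Cities must adapt to a changing climate. Planning horizons are too short."
print(express(construct_representation(article)))
```

Note where the sketch locates success: everything that matters happens before the text leaves the function, in the fidelity of the inner structure to the source.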
Where the Explanatory Load Sits
Crucially, almost all explanatory work is done inside the machine:
- Meaning lives in internal states,
- Understanding is an inner achievement,
- The output inherits its adequacy from representational correctness.
The interaction with the user is secondary: it merely triggers the process and receives its result.
What This Account Must Assume
To function, this explanation must posit:
- A notion of content the machine internally possesses,
- A way those contents are about the article,
- A mapping from internal semantics to linguistic output,
- A distinction between genuine and merely apparent understanding.
Part II: The Relational Explanation
Now we explain the same interaction relationally.
What Must Be Explained
The relational framework explains:
- The successful production of a summary,
- Its relevance to the article,
- Its uptake and use by the reader.
Notice what is not on the list:
- Internal understanding,
- Mental content,
- Semantic possession.
How It Explains It
The explanation unfolds differently:
- **The model as structured potential:** The AI system is a trained semiotic resource: a structured space of linguistic possibility shaped by prior texts, practices, and constraints.
- **The prompt as relational constraint:** The user’s request establishes:
  - A situation type (summarisation),
  - A role relation (requester / responder),
  - A relevance horizon (central argument, two paragraphs).
- **Actualisation as event:** The output is an actualisation (a toy sketch of this idea follows the list):
  - A perspectival cut through possibility-space,
  - Constrained by the prompt,
  - Shaped by the training history,
  - Realised in this specific interaction.
- **Meaning in uptake:** The summary means what it does because:
  - It functions coherently within the situation,
  - The user recognises and uses it,
  - It participates successfully in ongoing sense-making.
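To give the contrast a concrete shape, here is an equally toy sketch of "structured potential" and "actualisation". It assumes, purely for illustration, that the trained resource can be pictured as a weighted space of possible continuations; every name in it is hypothetical, and nothing about real language models is implied.

```python
# Toy sketch of "structured potential" vs. "actualisation". All names are
# hypothetical; nothing about real language models is implied.
import random

# The trained resource as a structured space of possibilities: for a given
# situation type, many continuations are available, weighted by what prior
# texts and practices have made likely.
POTENTIAL = {
    "summarise": [
        ("The article argues that ...", 0.6),
        ("In essence, the author claims ...", 0.3),
        ("The key points are ...", 0.1),
    ],
}


def actualise(situation_type: str, seed: int) -> str:
    """One perspectival cut: the prompt's situation type constrains which region
    of the possibility-space is drawn on, and a single continuation is realised
    in this specific interaction."""
    rng = random.Random(seed)
    texts, weights = zip(*POTENTIAL[situation_type])
    return rng.choices(texts, weights=weights, k=1)[0]


# The same structured potential yields different actualisations on different
# occasions; what the output "means" is then a matter of how it is taken up.
print(actualise("summarise", seed=1))
print(actualise("summarise", seed=2))
```

Nothing in this sketch posits an inner semantic state: there is only the weighted space, the constraint supplied by the situation, and the draw that is realised; uptake happens outside the snippet, in the interaction.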
Where the Explanatory Load Sits
Here, the explanatory burden is distributed:
- Partly in the system’s trained capacities,
- Partly in the prompt and situation,
- Crucially in the interactional event itself.
What This Account Must Assume
It assumes:
- Meaning is enacted, not stored,
- Intelligence is observable in performance,
- Aboutness is contextual, not internal,
- Success is functional and relational, not representational.
What it does not assume is any hidden inner semantic state.
The Shift in Explanatory Load
We can now state the difference precisely.
| Question | Representational Answer | Relational Answer |
|---|---|---|
| Where is the meaning? | Inside the machine | In the interaction |
| What explains success? | Accurate internal representation | Contextually appropriate actualisation |
| What is intelligence? | A possessed capacity | A relational pattern |
| What grounds critique? | Misrepresentation | Functional misalignment |
| What must be inferred? | Inner semantic states | Nothing beyond participation |
The representational account concentrates explanation inside the system and must posit unobservable entities to do so.
The relational account concentrates explanation in the event and relies only on observable relations and outcomes.
What This Comparison Reveals
The disagreement is ontological:
- Do we explain intelligence by adding inner machinery, or
- By re-locating meaning to where it actually happens?
And once that inner theatre of unobservable semantic states is removed, something surprising happens:
Nothing breaks.
Why This Matters
AI systems force this comparison because they operate competently without satisfying representational intuitions. They do not merely challenge our theories of machines; they expose how much of our theory of meaning has been doing protective work for human exceptionalism.
Seen relationally, the interaction no longer raises the question:
“Does the AI really understand?”
That question simply dissolves.
What remains is the more precise—and more useful—question:
“What kinds of meaning-making relations are being actualised here, and how should we participate in them?”