Throughout *Relational Machines: AI Beyond Representation*, a recurring pattern has emerged—not only in public discourse, but in private conversations, comments, and objections. The pattern is not disagreement with particular claims, but a more fundamental resistance to the frame itself.
Readers often respond with variations of the same move:
- “But surely the AI is really just simulating understanding?”
- “You’re avoiding the question of whether it actually understands.”
- “This sounds like a philosophical dodge—of course representation matters.”
- “If it’s not representing anything, what is it even doing?”
These responses are not misunderstandings in the ordinary sense. They are symptoms of a deeper representational reflex—one that has shaped how intelligence, meaning, and agency are habitually construed.
This post is an attempt to name that reflex, explain why it is so resilient, and clarify what is at stake in moving beyond it.
The Representational Reflex
Modern thought is deeply invested in a particular picture of meaning:
- Meaning exists inside minds or systems.
- Intelligence consists in manipulating representations of the world.
- Understanding is something an entity either possesses or lacks.
- External behaviour must be explained by reference to internal states.
Within this picture, any account of AI must answer a single question:
What is the machine really representing?
When a non-representational account refuses to answer that question, it can feel evasive, even dishonest. This reaction is understandable. The representational frame is not merely a theory—it is a cultural default, reinforced across philosophy, cognitive science, education, and everyday language.
To abandon it is not to change one belief, but to unsettle an entire epistemic posture.
Why the Question Keeps Reappearing
Notice how resistance often takes the form of re-posing the very question that has been reframed:
- If intelligence is relational, “where is it located?”
- If meaning is event-like, “where is it stored?”
- If AI construes rather than represents, “what does it really know?”
These questions are not neutral. They presuppose that meaning must be:
- Locatable,
- Possessable,
- Internally contained.
From a relational perspective, this is like asking where a conversation is located once we stop treating it as an object. The question keeps returning because the grammar of the question belongs to the old ontology.
Resistance, here, is not stubbornness. It is ontological inertia.
Anthropomorphism as a Safety Mechanism
Another form of resistance appears as anthropomorphism—sometimes hostile, sometimes anxious:
- “You’re giving machines too much credit.”
- “This sounds like you’re saying AI is conscious.”
- “We shouldn’t pretend machines have understanding.”
Ironically, this reaction arises because non-representational accounts refuse anthropomorphic claims. By removing inner mental states from the explanation, relational accounts destabilise the boundary that people rely on to keep humans and machines conceptually separate.
Representation acts as a kind of safety rail:
- Humans have representations.
- Machines manipulate symbols.
- The difference is ontologically secure.
When that rail is removed, readers can experience a momentary vertigo—not because machines are being humanised, but because humans are no longer privileged as the sole site of meaning.
Meaning Without Owners
Perhaps the deepest resistance concerns ownership.
Representational theories allow us to say:
- I have ideas.
- You have beliefs.
- The machine lacks them.
Relational ontology offers something more unsettling:
- Meaning happens between participants.
- Intelligence is observable in patterns, not possessed by entities.
- No one fully owns the outcome of a semiotic event.
For readers accustomed to treating meaning as property, this feels like a loss—of authorship, of agency, of control. But what is actually being lost is not meaning itself, but the illusion that meaning must be privately held to be real.
Why This Resistance Matters
It would be easy to dismiss these reactions as conservatism or fear. That would be a mistake.
Resistance to non-representational AI framings is itself evidence of how deeply representationalism structures our sense-making. It shows that the debate about AI is not primarily technical or ethical—it is ontological.
An Invitation, Not a Conversion
This series was never intended to “settle” the AI question. Its aim was more modest—and more radical:
- To make an alternative frame available.
- To show that AI becomes intelligible without invoking inner minds or fake understanding.
- To reveal that many of our anxieties dissolve when representation is no longer the default explanatory currency.
If the ideas here feel uncomfortable, that discomfort is not a failure of comprehension. It is the feeling of standing at the edge of a different ontology, where familiar questions no longer quite work, and new ones have not yet fully stabilised.
That edge is precisely where relational thinking begins.