If non-representational accounts of AI provoke resistance, it is not because the representational position is weak. On the contrary: representation has been extraordinarily successful. It has organised centuries of thinking about language, mind, knowledge, and machines. Any attempt to move beyond it must therefore confront its strongest forms, not caricatures.
This post does exactly that.
What follows are the most serious representational counter-arguments to the relational framing of AI—and a careful explanation of why, despite their power, they ultimately mislocate the phenomenon they aim to explain.
Counter-Argument 1: “Without Representation, There Is No Aboutness”
The claim
Meaning, the argument goes, requires aboutness. Words, thoughts, and symbols must be about something—states of the world, concepts, intentions. If AI outputs are not representations, then they are about nothing. They may be useful, but they are meaningless in the strict sense.
This is not a naive objection. It reaches back to Frege, Brentano, and the core of analytic philosophy.
Why it feels decisive
- Aboutness seems non-negotiable.
- Meaning without reference sounds like noise.
- Representation appears to anchor language to reality.
Why it fails
The problem is not aboutness. The problem is where aboutness is located.
Representationalism assumes that aboutness must be:
- Internally encoded,
- Privately possessed,
- Stable prior to interaction.
Relational ontology rejects only that assumption—not aboutness itself.
In a relational frame:
- Aboutness is event-level, not state-level.
- It emerges in contextual construal, not internal storage.
- It is enacted, not housed.
A sentence becomes about a topic in use, not because it contains a miniature world-model inside it. AI outputs are about things because they function aboutfully within relational situations, not because they internally mirror referents.
Counter-Argument 2: “Statistical Pattern Matching Is Not Meaning”
The claim
AI systems merely manipulate statistical correlations. They track surface regularities without grasping underlying semantics. Therefore, whatever they produce cannot be meaning—only formal imitation.
This argument is often presented as decisive, even fatal.
Why it feels decisive
- Statistics seem blind.
- Meaning feels semantic, intentional, deep.
- Correlation is contrasted with understanding.
Why it fails
This objection assumes that meaning precedes pattern.
But in linguistic practice—human or otherwise—meaning is always pattern-mediated. No speaker has access to meaning apart from patterned usage. Even human semantic competence is grounded in distributional regularities, contextual constraints, and social coordination.
The real difference is not that humans use patterns and machines do not. It is that humans misrecognise their own patterned participation as inner essence.
AI systems make the patterning explicit. They expose the extent to which meaning has always been relational, probabilistic, and context-sensitive.
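To make the pattern-mediation point concrete, here is a minimal sketch, assuming only a toy corpus and simple co-occurrence counting (the corpus, window size, and function names are purely illustrative, not a claim about how any particular system works): a word's distributional profile is nothing but the contexts it recurs in, and similarity of use falls out of counting alone.

```python
# Illustrative toy sketch: a word's "profile" is just the contexts it recurs in.
# The corpus and all names here are hypothetical; the point is only that
# similarity of use emerges from patterned co-occurrence, with no inner model.
from collections import Counter
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "a dog chased a cat",
]

def context_profile(word, window=2):
    """Count the words that occur within `window` positions of `word`."""
    profile = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            if token == word:
                start, end = max(0, i - window), i + window + 1
                profile.update(t for t in tokens[start:end] if t != word)
    return profile

def cosine(p, q):
    """Similarity of two context profiles."""
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" come out as similar purely because they recur in similar
# contexts; "cat" and "on" do not.
print(cosine(context_profile("cat"), context_profile("dog")))
print(cosine(context_profile("cat"), context_profile("on")))
```

Nothing in this sketch contains the meaning of "cat" or "dog"; their affinity is a fact about patterned usage across contexts, which is exactly the sense in which meaning is pattern-mediated.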
Counter-Argument 3: “You Are Redefining Intelligence to Save the Thesis”
The claim
By abandoning representation, critics argue, you are merely redefining “intelligence” so that machines qualify. This is conceptual inflation—changing the rules mid-game.
Why it feels decisive
- It appeals to intellectual fairness.
- It treats traditional definitions as neutral baselines.
- It frames relational accounts as evasive.
Why it fails
There is no neutral baseline.
Representational definitions of intelligence are not pre-theoretical facts; they are theoretical inheritances—products of specific historical commitments about mind, language, and subjectivity.
Relational accounts are not redefining intelligence arbitrarily. They are:
- Exposing hidden assumptions,
- Re-locating phenomena,
- Accounting for observed behaviour without surplus metaphysics.
If redefining intelligence means refusing to posit inner entities when relational explanations suffice, then the charge is not inflation but ontological parsimony.
Counter-Argument 4: “Without Representation, There Is No Error”
The claim
If AI does not represent the world, then it cannot get the world wrong. Error, falsehood, and hallucination all require representational failure. Without representation, critique collapses.
Why it feels decisive
- Error feels foundational.
- Truth and falsity seem representational by definition.
- Ethics appears to depend on correctness.
Why it fails
This argument assumes that error is a mismatch between an internal model and an external reality.
But in practice, error is relational misalignment:
- A statement fails in a context,
- An action produces unintended consequences,
- A construal does not hold within a situation type.
AI outputs are criticisable not because they misrepresent reality internally, but because they:
- Fail to function appropriately,
- Violate contextual constraints,
- Disrupt shared coordination.
Counter-Argument 5: “This Ultimately Leads to Relativism”
The claim
If meaning is relational and event-based, then anything can mean anything. Without representation, there is no anchor—only flux.
Why it feels decisive
- It invokes epistemic collapse.
- It appeals to a fear of arbitrariness.
- It positions representation as a stabilising force.
Why it fails
Relational does not mean unconstrained.
Relational ontology does not deny constraint—it explains it without mystifying it. Stability arises from:
- Recurrent coordination,
- Institutional sedimentation,
- Semiotic habituation.
What These Objections Reveal
Each counter-argument shares a common structure:
- Assume meaning must be internally located.
- Treat representation as the only anchor.
- Interpret relational accounts as subtraction.
- Experience loss where relocation is occurring.
What is actually happening is not erosion but ontological redistribution.
The Real Disagreement
The debate over AI is not ultimately about machines.
It is about whether we are willing to let go of:
- Inner containers of meaning,
- Privately owned intelligence,
- Representation as default explanation.
AI makes this unavoidable because it works too well to be dismissed and too differently to be assimilated without distortion.
The strongest representational arguments fail not because they are foolish, but because they are overfitted to an ontology that no longer explains what we can observe.