Thursday, 29 January 2026

One Interaction, Two Ontologies: A Side-by-Side Case Study

Abstract debates about AI, representation, and meaning often stall because the competing frameworks never touch the same ground. Each side redescribes different phenomena, answers different questions, and then talks past the other.

This post does something simpler and more demanding.

We will examine one ordinary AI interaction, twice:

  • once through a representational ontology,

  • once through a relational ontology.

Nothing about the interaction will change.
Only the explanatory commitments will.

What matters is not which description sounds more familiar, but where each framework is forced to carry its explanatory weight—and what it must quietly assume to do so.


The Interaction

A user asks an AI system:

“Can you summarise the central argument of this article in two paragraphs?”

The AI produces a fluent, accurate summary.
The user reads it, finds it helpful, and incorporates it into their thinking.

That is all.

No edge cases.
No hallucinations.
No hype.

Just a successful, everyday interaction.


Part I: The Representational Explanation

Let us begin with the strongest representational account.

What Must Be Explained

The representational framework must account for:

  • Apparent understanding of the article,

  • Selection of relevant points,

  • Coherent summarisation,

  • Context-appropriate language use.

How It Explains It

The explanation proceeds as follows:

  1. Internal representations
    The AI constructs internal representations of:

    • The article’s content,

    • Its argumentative structure,

    • The task (“summarise”).

  2. Semantic processing
    These representations are manipulated according to rules or learned mappings that preserve meaning.

  3. Output as expression
    The generated text expresses the content of those internal representations in linguistic form.

  4. Success condition
    The output is successful because the internal representations accurately correspond to the source material.

Where the Explanatory Load Sits

Crucially, almost all explanatory work is done inside the machine:

  • Meaning lives in internal states,

  • Understanding is an inner achievement,

  • The output inherits its adequacy from representational correctness.

The interaction with the user is secondary: it merely triggers the process and receives its result.

What This Account Must Assume

To function, this explanation must posit:

  • A notion of content the machine internally possesses,

  • A way those contents are about the article,

  • A mapping from internal semantics to linguistic output,

  • A distinction between genuine and merely apparent understanding.

Most of this machinery is not directly observable.
It is inferred to preserve continuity with human-centric models of cognition.
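To make the shape of this explanation concrete, here is a deliberately toy Python sketch. Nothing in it corresponds to a real system or library; every name (InternalRepresentation, construct_representation, semantic_processing, express, successful) is illustrative, chosen only to mark where the representational account places its explanatory commitments.

```python
# Toy sketch of the representational story. Illustrative only: no real
# system is claimed to work this way.

from dataclasses import dataclass


@dataclass
class InternalRepresentation:
    """The posited inner content the machine is said to 'have'."""
    propositions: list[str]   # the article's content
    structure: str            # its argumentative structure
    task: str                 # the task ("summarise")


def construct_representation(article: str, request: str) -> InternalRepresentation:
    # Step 1: the system is said to build inner content about the article.
    return InternalRepresentation(
        propositions=article.split(". "),
        structure="premise -> premise -> conclusion",
        task=request,
    )


def semantic_processing(rep: InternalRepresentation) -> InternalRepresentation:
    # Step 2: meaning-preserving manipulation of the inner states
    # (here, toy "selection of the relevant points").
    rep.propositions = rep.propositions[:2]
    return rep


def express(rep: InternalRepresentation) -> str:
    # Step 3: the output merely expresses the inner content in language.
    return " ".join(rep.propositions)


def successful(rep: InternalRepresentation, article: str) -> bool:
    # Step 4: success is cashed out as correspondence between the inner
    # content and the source material.
    return all(p in article for p in rep.propositions)


article = "Premise one. Premise two. Therefore, the conclusion."
rep = semantic_processing(construct_representation(article, "summarise"))
print(express(rep), successful(rep, article))
```

Notice where the sketch does its work: the verdict of success is a relation between the source and posited inner states, exactly the entities the account concedes are not directly observable.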


Part II: The Relational Explanation

Now we explain the same interaction relationally.

What Must Be Explained

The relational framework explains:

  • The successful production of a summary,

  • Its relevance to the article,

  • Its uptake and use by the reader.

Notice what is not on the list:

  • Internal understanding,

  • Mental content,

  • Semantic possession.

How It Explains It

The explanation unfolds differently:

  1. The model as structured potential
    The AI system is a trained semiotic resource: a structured space of linguistic possibility shaped by prior texts, practices, and constraints.

  2. The prompt as relational constraint
    The user’s request establishes:

    • A situation type (summarisation),

    • A role relation (requester / responder),

    • A relevance horizon (central argument, two paragraphs).

  3. Actualisation as event
    The output is an actualisation:

    • A perspectival cut through possibility-space,

    • Constrained by the prompt,

    • Shaped by the training history,

    • Realised in this specific interaction.

  4. Meaning in uptake
    The summary means what it does because:

    • It functions coherently within the situation,

    • The user recognises and uses it,

    • It participates successfully in ongoing sense-making.

Where the Explanatory Load Sits

Here, the explanatory burden is distributed:

  • Partly in the system’s trained capacities,

  • Partly in the prompt and situation,

  • Crucially in the interactional event itself.

No single component “has” the meaning.
Meaning is an emergent property of the relational configuration.

What This Account Must Assume

It assumes:

  • Meaning is enacted, not stored,

  • Intelligence is observable in performance,

  • Aboutness is contextual, not internal,

  • Success is functional and relational, not representational.

What it does not assume is any hidden inner semantic state.
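For contrast, here is an equally toy sketch of where the relational account locates its explanatory work. Again, every name (Situation, Event, actualise, meaningful) is illustrative. The point of the sketch is structural: "meaning" appears only as a verdict about the whole event, and no component is asked what it has inside.

```python
# Toy sketch of the relational story. Illustrative only: the "model" is a
# stand-in callable whose internals play no role in the explanation.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Situation:
    task_type: str    # situation type, e.g. "summarisation"
    roles: tuple      # role relation, e.g. ("requester", "responder")
    relevance: str    # relevance horizon, e.g. "central argument, two paragraphs"


@dataclass
class Event:
    prompt: str
    output: str
    situation: Situation
    taken_up: bool    # did the reader recognise and use the output?


def actualise(trained_model: Callable[[str], str], prompt: str, situation: Situation) -> str:
    # The output is an actualisation: a cut through the model's possibility-space,
    # constrained by the prompt and the situation it establishes.
    framed = f"[{situation.task_type}] {prompt}"
    return trained_model(framed)


def meaningful(event: Event) -> bool:
    # Meaning is predicated of the configuration, not stored in any component:
    # the output counts as meaningful because it functions in the situation
    # and is taken up by the reader. No inner state is consulted.
    functioned = len(event.output) > 0   # toy proxy for "functions coherently"
    return functioned and event.taken_up


situation = Situation(
    task_type="summarisation",
    roles=("requester", "responder"),
    relevance="central argument, two paragraphs",
)
toy_model = lambda prompt: "The article argues that the conclusion follows from two premises."
prompt = "Can you summarise the central argument of this article in two paragraphs?"
output = actualise(toy_model, prompt, situation)
event = Event(prompt=prompt, output=output, situation=situation, taken_up=True)
print(meaningful(event))
```

Here the function that delivers the verdict takes the whole event as its argument. That is the entire difference in miniature: the explanatory load sits in the configuration, not behind it.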


The Shift in Explanatory Load

We can now state the difference precisely.

Question                 | Representational Answer          | Relational Answer
Where is the meaning?    | Inside the machine               | In the interaction
What explains success?   | Accurate internal representation | Contextually appropriate actualisation
What is intelligence?    | A possessed capacity             | A relational pattern
What grounds critique?   | Misrepresentation                | Functional misalignment
What must be inferred?   | Inner semantic states            | Nothing beyond participation

The representational account concentrates explanation inside the system and must posit unobservable entities to do so.

The relational account concentrates explanation in the event and relies only on observable relations and outcomes.


What This Comparison Reveals

This is not a dispute about facts.
Both accounts describe the same behaviour.

The disagreement is ontological:

  • Do we explain intelligence by adding inner machinery, or

  • By re-locating meaning to where it actually happens?

The relational account does not deny that the system is sophisticated.
It denies that sophistication requires an inner semantic theatre.

And once that theatre is removed, something surprising happens:

Nothing breaks.

The explanation becomes leaner.
The anxiety diminishes.
The phenomenon remains.


Why This Matters

AI systems force this comparison because they operate competently without satisfying representational intuitions. They do not merely challenge our theories of machines; they expose how much of our theory of meaning has been doing protective work for human exceptionalism.

Seen relationally, the interaction no longer raises the question:

“Does the AI really understand?”

That question simply dissolves.

What remains is the more precise—and more useful—question:

“What kinds of meaning-making relations are being actualised here, and how should we participate in them?”

That question does not mystify machines.
It finally situates them correctly—inside the ecology of meaning, where they have been operating all along.
