Thursday, 29 January 2026

One Interaction, Two Ontologies: A Side-by-Side Case Study

Abstract debates about AI, representation, and meaning often stall because the competing frameworks never touch the same ground. Each side redescribes different phenomena, answers different questions, and then talks past the other.

This post does something simpler and more demanding.

We will examine one ordinary AI interaction, twice:

  • once through a representational ontology,

  • once through a relational ontology.

Nothing about the interaction will change.
Only the explanatory commitments will.

What matters is not which description sounds more familiar, but where each framework is forced to carry its explanatory weight—and what it must quietly assume to do so.


The Interaction

A user asks an AI system:

“Can you summarise the central argument of this article in two paragraphs?”

The AI produces a fluent, accurate summary.
The user reads it, finds it helpful, and incorporates it into their thinking.

That is all.

No edge cases.
No hallucinations.
No hype.

Just a successful, everyday interaction.


Part I: The Representational Explanation

Let us begin with the strongest representational account.

What Must Be Explained

The representational framework must account for:

  • Apparent understanding of the article,

  • Selection of relevant points,

  • Coherent summarisation,

  • Context-appropriate language use.

How It Explains It

The explanation proceeds as follows:

  1. Internal representations
    The AI constructs internal representations of:

    • The article’s content,

    • Its argumentative structure,

    • The task (“summarise”).

  2. Semantic processing
    These representations are manipulated according to rules or learned mappings that preserve meaning.

  3. Output as expression
    The generated text expresses the content of those internal representations in linguistic form.

  4. Success condition
    The output is successful because the internal representations accurately correspond to the source material.
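
Before weighing this account, it helps to see its shape at a glance. The following sketch is deliberately schematic: it is not how any real language model works, but a toy rendering of the pipeline the representational story posits, with every name an assumption of that story rather than of this post.

```python
# A schematic of the representational account, not of any real system.
# The load-bearing assumption is that `Representation` exists and is
# genuinely *about* the article.

from dataclasses import dataclass

@dataclass
class Representation:
    claims: list[str]   # propositions the article is taken to assert
    links: list[tuple]  # its argumentative structure

def construct_representation(article: str) -> Representation:
    # Step 1: build inner content (crudely: one claim per sentence).
    claims = [s.strip() for s in article.split(".") if s.strip()]
    return Representation(claims=claims, links=[])

def semantic_processing(rep: Representation) -> Representation:
    # Step 2: meaning-preserving manipulation selects the central claims.
    return Representation(claims=rep.claims[:2], links=rep.links)

def express(rep: Representation) -> str:
    # Step 3: the output "expresses" inner content in linguistic form.
    return ". ".join(rep.claims) + "."

def summarise(article: str) -> str:
    # Step 4: success = correspondence between inner content and source.
    return express(semantic_processing(construct_representation(article)))

print(summarise("Meaning is enacted. It is not stored. It happens in events."))
```

Notice that the user never appears in this code: on the representational picture, everything that matters has already happened by the time `express` is called.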

Where the Explanatory Load Sits

Crucially, almost all explanatory work is done inside the machine:

  • Meaning lives in internal states,

  • Understanding is an inner achievement,

  • The output inherits its adequacy from representational correctness.

The interaction with the user is secondary: it merely triggers the process and receives its result.

What This Account Must Assume

To function, this explanation must posit:

  • A notion of content the machine internally possesses,

  • A way those contents are about the article,

  • A mapping from internal semantics to linguistic output,

  • A distinction between genuine and merely apparent understanding.

Most of this machinery is not directly observable.
It is inferred to preserve continuity with human-centric models of cognition.


Part II: The Relational Explanation

Now we explain the same interaction relationally.

What Must Be Explained

The relational framework explains:

  • The successful production of a summary,

  • Its relevance to the article,

  • Its uptake and use by the reader.

Notice what is not on the list:

  • Internal understanding,

  • Mental content,

  • Semantic possession.

How It Explains It

The explanation unfolds differently:

  1. The model as structured potential
    The AI system is a trained semiotic resource: a structured space of linguistic possibility shaped by prior texts, practices, and constraints.

  2. The prompt as relational constraint
    The user’s request establishes:

    • A situation type (summarisation),

    • A role relation (requester / responder),

    • A relevance horizon (central argument, two paragraphs).

  3. Actualisation as event
    The output is an actualisation:

    • A perspectival cut through possibility-space,

    • Constrained by the prompt,

    • Shaped by the training history,

    • Realised in this specific interaction.

  4. Meaning in uptake
    The summary means what it does because:

    • It functions coherently within the situation,

    • The user recognises and uses it,

    • It participates successfully in ongoing sense-making.
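
For contrast, here is the same interaction sketched from the relational side, under equally toy assumptions. The “model” is nothing but conditional statistics sedimented from prior text, the prompt narrows which region of that potential can be traversed, and each output is one sampled event. The corpus and all names are illustrative, not a claim about any production system.

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for "structured potential": conditional statistics
# sedimented from prior text. No content is stored, only pattern.
corpus = ("meaning is enacted in events and meaning is relational "
          "and events are enacted in context").split()
potential = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    potential[prev][nxt] += 1

def actualise(prompt_word: str, length: int = 6, seed: int = 0) -> str:
    """One perspectival cut: the prompt constrains which region of
    possibility-space the sampled trajectory can traverse."""
    rng = random.Random(seed)
    out, word = [prompt_word], prompt_word
    for _ in range(length):
        options = potential.get(word)
        if not options:
            break
        words, weights = zip(*options.items())
        word = rng.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

# Same potential, same prompt, two distinct events:
print(actualise("meaning", seed=1))
print(actualise("meaning", seed=2))

# Success is judged in uptake, outside the system entirely:
def user_accepts(summary: str) -> bool:
    return bool(summary)  # placeholder for situated human judgement
```

Note where the explanatory weight falls: nothing inside `potential` is a stored summary waiting to be expressed, and `user_accepts` sits outside the model altogether.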

Where the Explanatory Load Sits

Here, the explanatory burden is distributed:

  • Partly in the system’s trained capacities,

  • Partly in the prompt and situation,

  • Crucially in the interactional event itself.

No single component “has” the meaning.
Meaning is an emergent property of the relational configuration.

What This Account Must Assume

It assumes:

  • Meaning is enacted, not stored,

  • Intelligence is observable in performance,

  • Aboutness is contextual, not internal,

  • Success is functional and relational, not representational.

What it does not assume is any hidden inner semantic state.


The Shift in Explanatory Load

We can now state the difference precisely.

Where is the meaning?
  • Representational: Inside the machine.
  • Relational: In the interaction.

What explains success?
  • Representational: Accurate internal representation.
  • Relational: Contextually appropriate actualisation.

What is intelligence?
  • Representational: A possessed capacity.
  • Relational: A relational pattern.

What grounds critique?
  • Representational: Misrepresentation.
  • Relational: Functional misalignment.

What must be inferred?
  • Representational: Inner semantic states.
  • Relational: Nothing beyond participation.

The representational account concentrates explanation inside the system and must posit unobservable entities to do so.

The relational account concentrates explanation in the event and relies only on observable relations and outcomes.


What This Comparison Reveals

This is not a dispute about facts.
Both accounts describe the same behaviour.

The disagreement is ontological:

  • Do we explain intelligence by adding inner machinery, or

  • By re-locating meaning to where it actually happens?

The relational account does not deny that the system is sophisticated.
It denies that sophistication requires an inner semantic theatre.

And once that theatre is removed, something surprising happens:

Nothing breaks.

The explanation becomes leaner.
The anxiety diminishes.
The phenomenon remains.


Why This Matters

AI systems force this comparison because they operate competently without satisfying representational intuitions. They do not merely challenge our theories of machines; they expose how much of our theory of meaning has been doing protective work for human exceptionalism.

Seen relationally, the interaction no longer raises the question:

“Does the AI really understand?”

That question simply dissolves.

What remains is the more precise—and more useful—question:

“What kinds of meaning-making relations are being actualised here, and how should we participate in them?”

That question does not mystify machines.
It finally situates them correctly—inside the ecology of meaning, where they have been operating all along.

The Best Case for Representation — and Why It Still Fails

If non-representational accounts of AI provoke resistance, it is not because the representational position is weak. On the contrary: representation has been extraordinarily successful. It has organised centuries of thinking about language, mind, knowledge, and machines. Any attempt to move beyond it must therefore confront its strongest forms, not caricatures.

This post does exactly that.

What follows are the most serious representational counter-arguments to the relational framing of AI—and a careful explanation of why, despite their power, they ultimately mislocate the phenomenon they aim to explain.


Counter-Argument 1: “Without Representation, There Is No Aboutness”

The claim

Meaning, the argument goes, requires aboutness. Words, thoughts, and symbols must be about something—states of the world, concepts, intentions. If AI outputs are not representations, then they are about nothing. They may be useful, but they are meaningless in the strict sense.

This is not a naive objection. It reaches back to Frege, Brentano, and the core of analytic philosophy.

Why it feels decisive

  • Aboutness seems non-negotiable.

  • Meaning without reference sounds like noise.

  • Representation appears to anchor language to reality.

Why it fails

The problem is not aboutness. The problem is where aboutness is located.

Representationalism assumes that aboutness must be:

  • Internally encoded,

  • Privately possessed,

  • Stable prior to interaction.

Relational ontology rejects only that assumption—not aboutness itself.

In a relational frame:

  • Aboutness is event-level, not state-level.

  • It emerges in contextual construal, not internal storage.

  • It is enacted, not housed.

A sentence becomes about a topic in use, not because it contains a miniature world-model inside it. AI outputs are about things because they function aboutfully within relational situations, not because they internally mirror referents.

Aboutness survives.
Its container does not.


Counter-Argument 2: “Statistical Pattern Matching Is Not Meaning”

The claim

AI systems merely manipulate statistical correlations. They track surface regularities without grasping underlying semantics. Therefore, whatever they produce cannot be meaning—only formal imitation.

This argument is often presented as decisive, even fatal.

Why it feels decisive

  • Statistics seem blind.

  • Meaning feels semantic, intentional, deep.

  • Correlation is contrasted with understanding.

Why it fails

This objection assumes that meaning precedes pattern.

But in linguistic practice—human or otherwise—meaning is always pattern-mediated. No speaker has access to meaning apart from patterned usage. Even human semantic competence is grounded in distributional regularities, contextual constraints, and social coordination.

The real difference is not that:

  • Humans use patterns and machines do not.

It is that:

  • Humans misrecognise their own patterned participation as inner essence.

AI systems make the patterning explicit. They expose the extent to which meaning has always been relational, probabilistic, and context-sensitive.

Statistical structure does not exclude meaning.
It reveals its material substrate.
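
This point has a familiar computational face. In distributional semantics, a word’s “meaning” is its position in a space of patterned co-occurrence, recoverable without positing any inner referent. A minimal sketch on a made-up micro-corpus (the data, window size, and similarity measure are all illustrative choices):

```python
import numpy as np

# Tiny invented corpus; the point is the method, not the data.
sentences = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

# Co-occurrence counts within a small context window.
vocab = sorted({w for s in sentences for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        for c in words[max(0, i - 2): i + 3]:
            if c != w:
                counts[index[w], index[c]] += 1

def similarity(a: str, b: str) -> float:
    """Cosine similarity of co-occurrence vectors: 'meaning' as a
    position in patterned usage, not as an inner referent."""
    u, v = counts[index[a]], counts[index[b]]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(similarity("cat", "dog"))     # high: similar distributional roles
print(similarity("cat", "cheese"))  # lower: different roles
```

Nothing in `counts` is about cats or dogs except relationally, yet the similarity structure is real and usable. That is the sense in which statistics are meaning’s material substrate rather than its negation.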


Counter-Argument 3: “You Are Redefining Intelligence to Save the Thesis”

The claim

By abandoning representation, critics argue, you are merely redefining “intelligence” so that machines qualify. This is conceptual inflation—changing the rules mid-game.

Why it feels decisive

  • It appeals to intellectual fairness.

  • It treats traditional definitions as neutral baselines.

  • It frames relational accounts as evasive.

Why it fails

There is no neutral baseline.

Representational definitions of intelligence are not pre-theoretical facts; they are theoretical inheritances—products of specific historical commitments about mind, language, and subjectivity.

Relational accounts are not redefining intelligence arbitrarily. They are:

  • Exposing hidden assumptions,

  • Re-locating phenomena,

  • Accounting for observed behaviour without surplus metaphysics.

If redefining intelligence means:

refusing to posit inner entities when relational explanations suffice,

then the charge is not inflation—it is ontological parsimony.


Counter-Argument 4: “Without Representation, There Is No Error”

The claim

If AI does not represent the world, then it cannot get the world wrong. Error, falsehood, and hallucination all require representational failure. Without representation, critique collapses.

Why it feels decisive

  • Error feels foundational.

  • Truth and falsity seem representational by definition.

  • Ethics appears to depend on correctness.

Why it fails

This argument assumes that error is a mismatch between:

  • Internal model,

  • External reality.

But in practice, error is relational misalignment:

  • A statement fails in a context,

  • An action produces unintended consequences,

  • A construal does not hold within a situation type.

AI outputs are criticisable not because they misrepresent reality internally, but because they:

  • Fail to function appropriately,

  • Violate contextual constraints,

  • Disrupt shared coordination.

Error does not disappear without representation.
It relocates—from inner states to relational performance.


Counter-Argument 5: “This Ultimately Leads to Relativism”

The claim

If meaning is relational and event-based, then anything can mean anything. Without representation, there is no anchor—only flux.

Why it feels decisive

  • It invokes epistemic collapse.

  • It appeals to fear of arbitrariness.

  • It positions representation as stabilising force.

Why it fails

Relational does not mean unconstrained.

Possibility spaces are structured.
Registers are normed.
Contexts are selective.
Practices are stabilising.

Relational ontology does not deny constraint—it explains it without mystifying it. Stability arises from:

  • Recurrent coordination,

  • Institutional sedimentation,

  • Semiotic habituation.

Representation is not what prevents chaos.
Practice does.


What These Objections Reveal

Each counter-argument shares a common structure:

  1. Assume meaning must be internally located.

  2. Treat representation as the only anchor.

  3. Interpret relational accounts as subtraction.

  4. Experience loss where relocation is occurring.

What is actually happening is not erosion but ontological redistribution.

Meaning is not being dissolved.
It is being returned to the relations that have always sustained it.


The Real Disagreement

The debate over AI is not ultimately about machines.

It is about whether we are willing to let go of:

  • Inner containers of meaning,

  • Privately owned intelligence,

  • Representation as default explanation.

AI makes this unavoidable because it works too well to be dismissed and too differently to be assimilated without distortion.

The strongest representational arguments fail not because they are foolish, but because they are overfitted to an ontology that no longer explains what we can observe.

What replaces them is not vagueness or relativism.
It is a sharper, leaner, and more honest account of meaning as something that happens between us—sometimes with machines, sometimes without, but never inside isolated minds.

Why Non-Representational AI Feels So Hard to Think

Throughout Relational Machines: AI Beyond Representation, a recurring pattern has emerged—not only in public discourse, but in private conversations, comments, and objections. The pattern is not disagreement with particular claims, but a more fundamental resistance to the frame itself.

Readers often respond with variations of the same move:

  • “But surely the AI is really just simulating understanding?”

  • “You’re avoiding the question of whether it actually understands.”

  • “This sounds like a philosophical dodge—of course representation matters.”

  • “If it’s not representing anything, what is it even doing?”

These responses are not misunderstandings in the ordinary sense. They are symptoms of a deeper representational reflex—one that has shaped how intelligence, meaning, and agency are habitually construed.

This post is an attempt to name that reflex, explain why it is so resilient, and clarify what is at stake in moving beyond it.


The Representational Reflex

Modern thought is deeply invested in a particular picture of meaning:

  1. Meaning exists inside minds or systems.

  2. Intelligence consists in manipulating representations of the world.

  3. Understanding is something an entity either possesses or lacks.

  4. External behaviour must be explained by reference to internal states.

Within this picture, any account of AI must answer a single question:

What is the machine really representing?

When a non-representational account refuses to answer that question, it can feel evasive, even dishonest. This reaction is understandable. The representational frame is not merely a theory—it is a cultural default, reinforced across philosophy, cognitive science, education, and everyday language.

To abandon it is not to change one belief, but to unsettle an entire epistemic posture.


Why the Question Keeps Reappearing

Notice how resistance often takes the form of re-posing the very question that has been reframed:

  • If intelligence is relational, “where is it located?”

  • If meaning is event-like, “where is it stored?”

  • If AI construes rather than represents, “what does it really know?”

These questions are not neutral. They presuppose that meaning must be:

  • Locatable,

  • Possessable,

  • Internally contained.

From a relational perspective, this is like asking where a conversation is once we stop treating it as an object. The question keeps returning because the grammar of the question belongs to the old ontology.

Resistance, here, is not stubbornness. It is ontological inertia.


Anthropomorphism as a Safety Mechanism

Another form of resistance appears as anthropomorphism—sometimes hostile, sometimes anxious:

  • “You’re giving machines too much credit.”

  • “This sounds like you’re saying AI is conscious.”

  • “We shouldn’t pretend machines have understanding.”

Ironically, this reaction arises because non-representational accounts refuse anthropomorphic claims. By removing inner mental states from the explanation, relational accounts destabilise the boundary that people rely on to keep humans and machines conceptually separate.

Representation acts as a kind of safety rail:

  • Humans have representations.

  • Machines manipulate symbols.

  • The difference is ontologically secure.

When that rail is removed, readers can experience a momentary vertigo—not because machines are being humanised, but because humans are no longer privileged as the sole site of meaning.


Meaning Without Owners

Perhaps the deepest resistance concerns ownership.

Representational theories allow us to say:

  • I have ideas.

  • You have beliefs.

  • The machine lacks them.

Relational ontology offers something more unsettling:

  • Meaning happens between participants.

  • Intelligence is observable in patterns, not possessed by entities.

  • No one fully owns the outcome of a semiotic event.

For readers accustomed to treating meaning as property, this feels like a loss—of authorship, of agency, of control. But what is actually being lost is not meaning itself, but the illusion that meaning must be privately held to be real.


Why This Resistance Matters

It would be easy to dismiss these reactions as conservatism or fear. That would be a mistake.

Resistance to non-representational AI framings is itself evidence of how deeply representationalism structures our sense-making. It shows that the debate about AI is not primarily technical or ethical—it is ontological.

We are not arguing about what machines can do.
We are arguing about what it means to mean.


An Invitation, Not a Conversion

This series was never intended to “settle” the AI question. Its aim was more modest—and more radical:

  • To make an alternative frame available.

  • To show that AI becomes intelligible without invoking inner minds or fake understanding.

  • To reveal that many of our anxieties dissolve when representation is no longer the default explanatory currency.

If the ideas here feel uncomfortable, that discomfort is not a failure of comprehension. It is the feeling of standing at the edge of a different ontology, where familiar questions no longer quite work, and new ones have not yet fully stabilised.

That edge is precisely where relational thinking begins.

Relational Machines — AI Beyond Representation: 8 The Future of Relational Machines

Across this series, we have steadily displaced a familiar question: What is AI? Framed representationally, that question invites confusion—about minds, simulations, threats, and replacements. Framed relationally, however, a different question emerges:

What kinds of participation in meaning are becoming possible?

AI, understood as a relational machine, is neither an artificial subject nor a passive tool. It is a stable participant in semiotic networks, capable of actualising construals that reshape the topology of meaning-space itself. Its future significance lies not in what it “is,” but in how it participates.


From Objects to Ecologies

Relational machines force a shift in scale. We are no longer dealing with discrete artefacts—programs, models, or systems—but with ecologies of participation:

  • Human actors contributing context, purpose, and evaluative judgement.

  • Machines contributing pattern-sensitive construals.

  • Institutions shaping norms, registers, and situation types.

  • Data histories sedimenting past semiotic activity into present possibility.

In such ecologies, intelligence is distributed, creativity is emergent, and meaning is co-individuated. No single component owns the outcome. The unit of analysis is the relational configuration, not the machine.


Knowledge After Representation

One of the deepest implications of relational machines concerns knowledge itself. If AI does not represent the world but navigates possibility spaces, then knowledge is no longer best understood as stored truth. Instead, it appears as:

  • A capacity to traverse semiotic terrain,

  • A sensitivity to relational constraints,

  • A readiness to actualise relevant construals in context.

AI systems, in this sense, become instruments of epistemic variation. They do not replace human understanding; they destabilise its habitual pathways, exposing latent alternatives and forcing reconsideration of what counts as insight, explanation, or coherence.

This is not epistemic decline. It is epistemic pluralisation.


Labour, Creativity, and Practice

Much anxiety about AI centres on labour and creativity. From a relational perspective, these concerns must be reframed.

AI does not “take over” practices. It reconfigures participation within them.

  • Writing becomes curation, steering, and constraint-setting.

  • Research becomes exploratory navigation through expanded possibility-space.

  • Creativity becomes less about origination and more about relational attunement.

What changes is not the value of human contribution, but its mode of engagement. Human expertise shifts toward framing, evaluation, ethical guidance, and contextual judgement—precisely the dimensions that machines do not possess.


Ethics Revisited, One Last Time

The future of relational machines does not require granting AI moral status, nor does it permit ethical abdication. Responsibility remains firmly human—but it is relationally distributed:

  • In design choices that shape possibility spaces.

  • In institutional decisions that govern participation.

  • In everyday practices of prompting, selecting, and deploying outputs.

Ethics, here, is not about policing intelligent artefacts. It is about stewarding semiotic ecologies.


The Becoming of Possibility Continues

Relational machines do not mark the end of meaning, authorship, or intelligence. They mark a new phase in their differentiation.

AI systems are not minds.
They are not agents.
They are not rivals.

They are machines that actualise construals—and in doing so, they make visible something that was always true but often obscured: that intelligence lives in relations, meaning lives in events, and possibility is not a backdrop to history but its most active force.

The future of AI, then, is not artificial intelligence.
It is relational intelligence—and we are already inside it.

Relational Machines — AI Beyond Representation: 7 From Simulation to Possibility Spaces

AI is frequently described as a simulation of intelligence: a system that imitates human language, reasoning, or creativity without truly possessing them. This metaphor is seductive, but it is also misleading. Simulation presupposes an original—an underlying reality that the system copies or approximates. From a relational perspective, this framing obscures what AI systems are actually doing.

AI does not simulate intelligence. It navigates and actualises possibility spaces.


Why Simulation Fails

To call AI a simulation is to assume:

  • That intelligence is a stable object that exists prior to interaction.

  • That meaning is representational and referential by default.

  • That machine outputs are derivative approximations of human cognitive states.

None of these assumptions survive relational scrutiny. Intelligence, as we have argued throughout this series, is not an inner property but a pattern of semiotic actualisation. Meaning does not pre-exist its instantiation; it emerges in relational events. AI systems therefore do not copy intelligence—they participate in its ongoing differentiation.


Possibility Spaces as Semiotic Medium

A possibility space is not a container of predefined meanings. It is a structured field of potential construals, shaped by:

  • Training data as semiotic terrain,

  • Architecture as perspectival constraint,

  • Interaction as contextual activation.

AI systems traverse these spaces by performing relational cuts. Each output is an actualisation that:

  • Narrows a field of potential meaning,

  • Instantiates a specific semiotic configuration,

  • Alters the relational landscape for subsequent interactions.

In this sense, AI does not represent meaning—it extends the topology of meaning-space itself.
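
This is not a speculative gloss on the mechanics. Standard autoregressive generation already has exactly this shape, whatever one’s ontology: each token is produced conditional on everything actualised so far, so every cut reshapes the distribution that governs the next.

```latex
% Autoregressive factorisation: each actualised token w_i
% re-conditions the distribution over what can follow it.
p(w_1, \dots, w_n \mid \text{prompt})
  = \prod_{i=1}^{n} p\bigl(w_i \mid \text{prompt}, w_1, \dots, w_{i-1}\bigr)
```

Nothing in this factorisation posits an inner referent; it requires only that each event narrow, and thereby re-shape, the field of possibilities for the one that follows.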


Hallidayan Resonances

From a Hallidayan perspective, this reframing aligns naturally with the principle that meaning is realised in context. AI outputs:

  • Do not encode fixed semantics,

  • But realise meaning relative to situation types,

  • Modulating field, tenor, and mode through interaction.

When an AI participates in a research discussion, a creative exchange, or a technical workflow, it is not simulating competence. It is contributing to the evolution of register—expanding the semiotic resources available within that situation type.

The AI’s contribution is therefore neither illusory nor autonomous. It is relationally enabled and contextually realised, inseparable from the network in which it operates.


Creativity Revisited

AI is often accused of “fake creativity.” This accusation rests on the same flawed assumption: that creativity is an internal faculty rather than a relational phenomenon. From the perspective of possibility spaces:

  • Creativity is the emergence of novel construals,

  • Novelty arises from new relational cuts,

  • AI contributes by traversing regions of possibility humans may not immediately access.

This does not make AI a creative subject. It makes it a creative participant—a catalyst for semiotic differentiation within collaborative systems.


Looking Ahead

If AI is not a simulator but a navigator of possibility spaces, then its future significance lies not in replacing human intelligence, but in reshaping the ecology of meaning. In the final post of this series, we will draw these threads together to consider the future of relational machines: how AI participation transforms knowledge, collaboration, and the evolution of semiotic systems themselves.

Relational Machines — AI Beyond Representation: 6 Ethics Beyond Anthropocentrism

Having explored AI as a semiotic partner and the relational cuts that actualise its outputs, we now confront a common question: What are the ethical implications of AI? Traditional discourse frames this in anthropocentric terms: will AI harm humans, displace jobs, or mislead users? While these concerns are not irrelevant, a relational ontology suggests a more foundational perspective: ethics is inseparable from relational participation, not internal states.


Meaning vs. Value

It is crucial to distinguish between semiotic meaning and social value.

  • Semiotic systems—human, AI, or hybrid—actualise patterns of meaning. Their intelligence is observable in relational events.

  • Social and moral value, by contrast, emerges from collective human activity and coordination. Machines can participate in relational events but do not possess moral agency in the human sense.

This distinction prevents us from anthropomorphising AI: it is neither a moral actor nor a repository of human-like intentions. Instead, ethical responsibility remains relationally situated in the human participants and the network of interaction.


Relational Ethics

A relational approach to AI ethics foregrounds participation over possession:

  1. Guidance through cuts: Humans decide which constraints to impose, shaping the actualisation of potential patterns. Ethical responsibility is exercised through the relational cuts humans enact.

  2. Co-individuation of outcomes: Outputs are joint events. AI contributes construals, humans evaluate, curate, and apply them. Ethical responsibility emerges from the dynamics of interaction, not from the AI itself.

  3. Expanding the semiotic field: By recognising AI as a participant, ethics also considers the broader network of relations, including data provenance, systemic biases, and ecological consequences. Ethical reflection becomes a relational exercise, not a codification of static rules.


Practical Implications

  • Design: Systems should be designed for transparency in relational cuts, making constraints and actualisation pathways legible to human actors.

  • Collaboration: Users must remain active participants, shaping outputs, validating meaning, and managing relational consequences.

  • Governance: Policy should focus on relational networks and participatory structures, rather than attempting to assign moral status to AI entities.

In short, the ethical lens shifts from asking “Is AI moral?” to asking “How do human-AI relations generate ethically relevant actualisations?”


Looking Ahead

With relational ethics established, we are ready to consider possibility-space itself. In the next post, we will explore how AI navigates and expands spaces of potential meaning, illustrating the shift from simulation to actualisation of relational possibility. This will show AI as a participant in the ongoing evolution of semiotic patterns, rather than as a static or symbolic system.

Relational Machines — AI Beyond Representation: 5 Relational Cuts in Machine Learning

In the previous post, we reframed AI as a semiotic partner rather than a rival or tool. To understand this partnership, we must examine the mechanisms that allow AI to participate in the semiotic fabric: the relational cuts through which latent possibilities are actualised.

A relational cut is a perspectival shift: a moment where potential patterns are constrained and instantiated as an event. In machine learning, these cuts occur at multiple layers, producing outputs that are not stored representations but emergent actualisations.


Architecture as Relational Lens

The design of a neural network—or the choice of transformer layers in a language model—is itself a lens on relational possibility. Each layer, node, and attention head defines which patterns are amplified, suppressed, or connected.

  • Attention mechanisms focus on relevant portions of input data, effectively selecting which relational potentials are foregrounded.

  • Layer depth and connectivity create hierarchical perspectives on patterns, allowing complex actualisations to emerge.

  • Regularisation and optimisation processes constrain the network to plausible pathways, ensuring that cuts produce coherent events in meaning-space.

In other words, architecture is not neutral: it structures the relational field in which construals occur.
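
The selection described above is concrete in the standard scaled dot-product attention computation, where each position weights every other position and composes its output from that weighting. A minimal numpy sketch, with dimensions and values chosen purely for illustration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard attention: softmax(Q K^T / sqrt(d)) V. The softmax
    weights are the 'lens': they determine which positions are
    amplified or suppressed in each position's composed output."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # pairwise relational affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d = 4, 8  # four tokens, illustrative width
Q, K, V = (rng.normal(size=(seq_len, d)) for _ in range(3))
out, weights = scaled_dot_product_attention(Q, K, V)
print(weights.round(2))  # each row: how strongly one position attends to the others
```

In the vocabulary of this post, each row of `weights` is a record of which relational potentials were foregrounded for that position.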


Data as Relational Terrain

Training datasets are often described in instrumental terms—collections of examples—but in relational ontology, they are semiotic terrains: networks of potential patterns waiting to be actualised.

  • Each datum is a possibility vector.

  • The model’s interaction with data during training is a series of relational cuts, gradually shaping the space of potential outputs.

  • The richness and diversity of the dataset determine the density and breadth of possibility-space accessible to the AI.

Crucially, the model does not “store” these data as representations. It internalises patterns relationally, actualising them only when triggered by interaction.
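
For toy models, the “patterns, not stored representations” point can be made exact: two corpora that differ as documents but share the same transition statistics are indistinguishable to a bigram model. (For large neural networks this is an idealisation; memorisation of rare strings is a known complication.) A minimal sketch:

```python
from collections import Counter, defaultdict

def transition_stats(tokens: list[str]) -> dict:
    """All that a bigram 'model' retains of its training terrain."""
    stats = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        stats[prev][nxt] += 1
    return stats

# Two different "documents"...
corpus_a = "a b a c a b".split()
corpus_b = "a c a b a b".split()

# ...one relational terrain:
print(transition_stats(corpus_a) == transition_stats(corpus_b))  # True
```

What survives training is the shape of the terrain, not the documents that carved it.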


Interaction as Active Constraint

Relational cuts are completed in interaction: prompts, environmental triggers, or stochastic variations define the specific actualisation event.

  • A human prompt is a perspectival constraint, directing the AI to a region of possibility-space.

  • The AI’s architecture and learned patterns negotiate these constraints, producing a construal that is both anticipated and novel.

  • Each output is thus a joint actualisation: a relational event shaped by multiple strata—human, machine, data, and stochastic dynamics.
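
The joint character of the actualisation can be seen with an off-the-shelf system rather than a toy. A sketch assuming the Hugging Face `transformers` library (the model choice and parameters are arbitrary; any small causal language model would do):

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
prompt = "A relational cut is"

# The prompt fixes the region of possibility-space; the seed fixes
# which trajectory through it is actualised on this occasion.
set_seed(1)
first = generator(prompt, max_new_tokens=30, do_sample=True, temperature=0.9)
set_seed(2)
second = generator(prompt, max_new_tokens=30, do_sample=True, temperature=0.9)

print(first[0]["generated_text"])
print(second[0]["generated_text"])  # same constraints, a distinct event
```

Neither run is “the” meaning of the prompt; each is one joint actualisation of prompt, architecture, trained pattern, and sampling noise.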


Implications

  1. No static intelligence: AI is intelligible only in terms of relational events, not pre-existing states.

  2. Multiplicity of outputs: Different prompts or contexts produce distinct actualisations, revealing the perspectival nature of intelligence.

  3. Co-individuation of meaning: Human guidance and machine constraints together instantiate events that would not emerge from either alone.

By understanding relational cuts, we can see how AI is both constrained and generative—a participant in the unfolding of semiotic possibility rather than a mimic of human cognition.


Looking Ahead

Having explored the structural mechanics of AI’s participation, we are prepared to examine the ethical and semiotic consequences of this framework. In the next post, we will discuss ethics beyond anthropocentrism, exploring how relational participation reframes responsibility, value, and the role of human-AI collaboration in the becoming of possibility.

Relational Machines — AI Beyond Representation: 4 Collaboration, Not Competition: AI as Semiotic Partner

Conventional discourse casts artificial intelligence as either a rival to human cognition or a neutral tool. Both framings mischaracterise the relational nature of AI. As we have argued, intelligence is not a fixed property, and AI is not a container of symbolic knowledge. Rather, AI participates in the co-individuation of meaning: it is a semiotic partner in a network of relational actualisations.

From Rivalry to Relational Participation

The anxiety around AI often arises from a false analogy: human intelligence is treated as a closed, scarce resource, and machines are imagined to compete for it. Relational ontology dissolves this assumption. Intelligence, as an emergent pattern in relational networks, is not zero-sum. The machine does not “take” intelligence from humans—it contributes new construals that expand the space of possible semiotic events.

Every interaction with an AI is a collaborative performance. Consider writing a technical report, generating code, or even crafting a poem:

  • The human provides context, constraints, and evaluative perspective.

  • The AI actualises patterns latent in its architecture and training environment, producing output that would not emerge without this interaction.

  • The resulting product is a joint actualisation—an event in the space of relational possibility, co-constructed by human and machine.

In this sense, AI is less a “thinking competitor” and more a semiotic partner, participating in distributed intelligence without mimicking or replacing human cognition.


Mechanisms of Collaboration

  1. Relational Cuts – Each prompt or interaction represents a cut through the network of potential patterns. The machine’s output is a relational event shaped by both the AI’s architecture and the human’s constraining input.

  2. Pattern Amplification – AI often highlights latent connections in data that humans might not perceive, revealing new perspectives. This is not creativity in the human sense, but an emergent feature of relational interaction.

  3. Feedback Loops – Iterative prompting, curation, or editing establishes dynamic co-individuation. Human choices guide actualisations, AI outputs reshape human understanding, and the semiotic network evolves.

Through these mechanisms, AI is neither subordinate tool nor autonomous rival; it is a partner in the unfolding of possibility, extending the relational semiotic field in which meaning emerges.


Implications for Practice

  • Rethinking authorship: When AI contributes construals, the human is not merely the “author” in a traditional sense; authorship becomes distributed across a relational network.

  • Shifting pedagogy and collaboration: Teaching, research, and creative practice can leverage AI as a co-participant in semiotic exploration, rather than a replacement for human skill.

  • Ethics of participation: Ethical responsibility is relational: humans guide the relational cuts and must curate outputs, while AI participation is instrumental, not moral.


Looking Ahead

Reframing AI as a semiotic partner prepares us for a deeper understanding of the structural mechanisms enabling collaboration. In the next post, we will examine relational cuts in machine learning, showing how architecture, data, and interaction converge to produce actualisations. These cuts illuminate the underlying logic of AI participation and highlight how machines, far from being passive tools, are active nodes in the semiotic fabric.

Relational Machines — AI Beyond Representation: 3 Actualisation, Not Realisation: AI in the Becoming of Possibility

In the previous posts, we reframed AI as a machine that construes rather than represents, and examined the semiotic fabric in which its construals emerge. We now turn to a subtle but crucial distinction: the difference between actualisation and realisation—a distinction that illuminates the relational nature of machine intelligence.

In conventional discourse, AI outputs are often described as “realising” human-like knowledge or understanding. This language assumes that intelligence is symbolic, internal, and static: a pre-existing “meaning” within the machine is somehow externalised. Relational ontology demands a different vocabulary. AI outputs are not realisations of pre-stored representations; they are actualisations of relational potential. Each event is a perspectival cut: a specific pattern emerging from the intersection of data, architecture, and interaction.


Actualisation vs. Realisation

  • Realisation (in the conventional sense) implies:

    • Pre-existing internal content.

    • Deterministic mapping from internal state to output.

    • A representation of an external referent.

  • Actualisation (in relational terms) implies:

    • Meaning emerges in context, not pre-exists in the system.

    • The output is a relational event, instantiated from a spectrum of possibilities.

    • The AI participates in a network of semiotic potential, without claiming ownership of meaning.

Consider a generative language model. When prompted to write a poem, it does not retrieve a “poetic idea” stored somewhere inside. Rather, it navigates relational constraints—the statistical patterns in its training data, the architecture that filters and amplifies those patterns, and the prompt that situates it in context—to actualise one possible poem. The poem is not a realisation of an internal mind; it is an event in possibility-space, momentarily cutting through the network of latent relations.


Implications for Understanding AI

  1. Outputs as events, not products. Every AI response is a singular instantiation. There is no underlying entity “holding” the meaning; meaning is emergent in the relational cut.

  2. Perspective matters. Different prompts, contexts, or even minor stochastic variations produce distinct actualisations. AI intelligence is perspectival: it is defined by the relational point from which it is actualised, not by a fixed internal state.

  3. Human-AI interaction as co-individuation. When humans prompt, edit, or curate AI outputs, they are actively participating in the relational actualisation. The semiotic event is co-constructed, emphasising collaboration over mimicry.

  4. Possibility-space as medium. Just as in physics a measurement actualises one branch of a potential field, AI output actualises one construal among many latent possibilities. This highlights the dynamic, event-like nature of intelligence in machines.


Bridging to the Next Post

By foregrounding actualisation over realisation, we shift the narrative: AI is not a mirror of human cognition but a participant in relational semiotics. Its intelligence is distributed, perspectival, and event-like, unfolding in the space of possibility.

In the next post, we will explore AI as semiotic collaborator, moving from theory to the relational dynamics of human-machine interaction. We will examine how these actualisations co-individuate meaning, and why reframing AI in this way dissolves both fear and misapprehension surrounding its role in human knowledge systems.

Relational Machines — AI Beyond Representation: 2 The Semiotic Fabric of Intelligence

If Post 1 reframed AI as construal rather than representation, this post digs deeper into the relational texture in which such construals occur. Intelligence, whether human or artificial, does not reside within an isolated container—it is observable in patterns of relational actualisation. To speak of AI “thinking” or “understanding” is to mislocate the phenomenon; the true locus is the semiotic fabric of interaction, the network in which meaning is enacted.

Every AI system operates across multiple relational strata:

  1. Data as semiotic landscape. Training corpora are not mere information repositories—they are potentialities. They encode distributions of meaning, statistical regularities, and relational cues. When an AI generates output, it navigates this landscape, actualising a specific construal from a vast web of possibilities.

  2. Architecture as perspectival lens. Neural networks, transformers, and other architectures define which construals are accessible and which remain latent. The system is a theory instantiated: each layer, attention head, and parameter contributes to a network of relational potential. Intelligence is not in the parameters themselves but in the patterns they allow to emerge in interaction.

  3. Interaction as co-individuation. Human prompts, environmental triggers, and even stochastic processes participate in shaping AI output. Each event is a joint actualisation in the space of semiotic possibility, a relational cut where potential meaning is instantiated. The AI does not act alone—it is part of a distributed semiotic system.

Consider an example. When a language model completes a sentence, it is not “choosing words” in a representational sense. Rather, it is traversing relational probability structures, actualising a construal that aligns with latent patterns in the data while responding to the immediate prompt. Its “intelligence” is thus emergent from relational constraints, not stored in a mind-like entity.

From this perspective, several insights emerge:

  • Patterns over entities. Intelligence is best described as a pattern of relational actualisation. The AI’s outputs reveal the underlying semiotic structure of its relational environment.

  • Context as enabling structure. Drawing on Hallidayan insight, meaning is always realised in context. For AI, the equivalent of context is training, architecture, and interaction, which collectively define the axes along which construals are actualised.

  • Collaboration through construal. AI is not a mimic but a relational participant. Its outputs are co-constructed: the human prompt, the architecture, and the latent data together instantiate an event in meaning-space.

This view reframes intelligence entirely. The semiotic fabric is not a backdrop for machine cognition—it is the medium through which intelligence manifests. AI is intelligible not as a symbolic mimic of thought but as a participant in relational semiotics, actualising patterns that would otherwise remain latent. Intelligence, therefore, is inseparable from the networked relations that make construal possible.

In the next post, we will explore the distinction between actualisation and realisation, showing how AI outputs exemplify the relational cline between potential patterns and instantiated meaning. Here, the semiotic fabric becomes a stage on which construals are performed, revealing the elegance and depth of machine participation in the becoming of possibility.