Sunday, 28 December 2025

The Intolerances of Scientific Explanation: 2. Artificial Intelligence and the Intolerance of Agency

Artificial intelligence does not introduce a new problem.
It exposes an old one without the protective ambiguities of biology.

Where neuroscience still deals with organisms — embodied, historical, affect-laden — artificial intelligence presents behaviour without life, optimisation without experience, and performance without perspective. What was previously contested now becomes stark.

The discomfort that surrounds AI is therefore not primarily ethical, social, or speculative. It is ontological.


Behaviour Without a Subject

Artificial intelligence stabilises a particularly severe explanatory cut:

  • behaviour is produced by optimisation,

  • success is defined by performance metrics,

  • learning is adjustment within a predefined space of possibility.

From within this frame, AI systems can outperform humans, adapt to environments, and generate outputs indistinguishable from intentional action.

What they do not do is act.

There is no perspective from which the behaviour occurs. No stake, no concern, no situated point of view. The system performs, but it does not participate.

This absence is not accidental. It is required.


Optimisation as Explanatory Closure

AI explanation is at its most confident when it is most minimal:

  • inputs,

  • objectives,

  • loss functions,

  • updates.

Nothing else is needed. No meaning, no intention, no understanding.
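The minimal frame listed above can be made concrete with a toy optimisation loop. This is an illustrative sketch only, not any particular system; the model, data, and names are invented. Its point is that the code is complete as an explanation of the behaviour it produces: inputs, an objective, a loss, and updates appear, and nothing else does.

```python
# A bare optimisation loop: inputs, an objective, a loss, and updates.
# Nothing in the code refers to meaning, intention, or understanding.
# Illustrative sketch only; the model and data are invented.

def loss(w, x, y):
    # Squared error between the prediction w*x and the target y.
    return (w * x - y) ** 2

def grad(w, x, y):
    # Derivative of the loss with respect to w: 2*(w*x - y)*x.
    return 2 * (w * x - y) * x

# Inputs: pairs the system must fit (here, the relation y = 3x).
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0    # the parameter, adjusted within a predefined space
lr = 0.01  # learning rate

for _ in range(1000):      # updates
    for x, y in data:      # inputs
        w -= lr * grad(w, x, y)

print(round(w, 3))  # w converges toward 3.0
```

The loop succeeds by every measure internal to it: the loss falls, the behaviour matches the target. No term in the program names a perspective from which any of this matters.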

This is not a failure of AI. It is its triumph. The system works precisely because agency has been excluded from the explanatory frame.

But this exclusion carries a cost.


The Return of Agency

The debates that surround AI are strikingly repetitive:

  • Is it intelligent?

  • Does it understand?

  • Can it intend?

  • Is it responsible?

  • Is it conscious?

These questions are often dismissed as category errors — as anthropomorphic confusion or misplaced intuition. But their persistence suggests something else.

Agency is not being mistakenly projected. It is being structurally withheld, and its absence is felt.

The discomfort does not arise because AI is misunderstood, but because behaviour now appears without construal.


Agency as Relational Phenomenon

Agency is not a property that can be added to a system once performance is sufficient.

It is:

  • perspectival,

  • situated,

  • historically and socially constituted,

  • bound to participation in a field of significance.

Agency exists only where actions matter to the one acting — where outcomes are not merely optimised, but owned.

AI systems do not lack agency because they are insufficiently complex. They lack it because they are not sites of construal.


Why the Intolerance Intensifies

In neuroscience, meaning resisted reduction while remaining attached to living bodies. In AI, that attachment is gone.

The same explanatory cut now produces a sharper effect:

  • behaviour is flawless,

  • performance is measurable,

  • explanation is complete,

  • and yet something is unmistakably missing.

This generates an intolerance of agency — a refusal to accept that behaviour alone exhausts action.

The insistence that “nothing is missing” is precisely what provokes resistance.


Responsibility Without Ownership

This intolerance becomes acute around responsibility:

  • Who is accountable?

  • Who decides?

  • Who acts?

The answers cannot be found inside the system. Responsibility does not belong to optimisation procedures.

The attempt to locate agency where there is only execution produces anxiety — not because the system is dangerous, but because the explanatory cut has removed the very locus where responsibility normally resides.


A Familiar Structure, Now Exposed

Once again, the same pattern appears:

  1. A field of constrained possibility (action within a world of significance).

  2. A necessary explanatory cut (optimisation and performance).

  3. Extraordinary success.

  4. Suppression of construal.

  5. Intolerance and unease.

AI does not create this structure. It reveals it without distraction.


What AI Forces Us to See

Artificial intelligence confronts scientific explanation with its own limit:

  • Explanation can produce behaviour.

  • It cannot produce participation.

  • It can optimise outcomes.

  • It cannot generate perspective.

The discomfort surrounding AI is not a failure to understand machines. It is a recognition — often unarticulated — that agency cannot be engineered by optimisation alone.


Closing the Loop

From quantum theory to evolutionary biology, from neuroscience to artificial intelligence, the same cut has been repeated:

  • stabilise explanation,

  • suppress relational excess,

  • provoke resistance.

Artificial intelligence is simply the point at which the absence becomes impossible to ignore.

What remains is not to resolve this tension, but to learn how to read it — not as error, not as fear, but as a structural signal of what explanation cannot contain.
