Tuesday, 24 March 2026

The Illusion of the Illusion: On AI and the Misplaced Hunt for Intelligence

A recent brief in Nature reports computer scientist Luc Julia urging us not to “believe the hype” of artificial intelligence. The claim is familiar: AI does not really possess intelligence; it merely creates the illusion of it. Like a magician’s trick, the appearance of cognition is produced by sleight of hand—terminological, statistical, engineered.

The corrective is intended to be sobering. AI is, we are told, “a tool created by humans, for humans,” its capacities bounded by parameters we set. There is, on this account, nothing here that warrants talk of intelligence in any substantive sense.

It is a neat argument. It is also misconceived.


The misplaced question

The debate is typically framed as a dispute over a property: is AI really intelligent, or not? The deflationary answer is no. The hype-driven answer is yes, or at least, increasingly so.

Both positions share the same assumption: that “intelligence” names an intrinsic feature of a system, something that can be either present or absent, discovered or denied.

This assumption does most of the work—and it is precisely where the problem lies.


From property to construal

“AI intelligence” is not a property waiting to be located inside a system. It is a construal of system behaviour—a way of organising what is observed, and, more importantly, what is done in response.

To describe a system as “intelligent” is not merely to report on it. It is to:

  • license certain expectations (adaptivity, competence, generalisation),
  • invite particular forms of trust or reliance,
  • and coordinate action around those expectations.

Equally, to insist that a system is “just a tool” is not a neutral clarification. It constrains the same field in the opposite direction:

  • limiting attributed capacity,
  • stabilising responsibility as purely human,
  • and foreclosing certain forms of anticipation or governance.

In both cases, what is at stake is not the discovery of a hidden property, but the organisation of a field of possible actions.


The illusion that matters

Julia’s analogy to illusion is rhetorically effective. But it misfires.

The suggestion is that AI only appears intelligent, and that clearer terminology would dispel the illusion. Yet this presumes a stable baseline—some unproblematic notion of “real intelligence” against which appearances can be judged.

No such baseline is available.

Human cognition itself is not transparently accessible. It is inferred from behaviour, reconstructed through theory, and continually re-described across disciplines. To invoke it as a fixed standard is not to clarify, but to naturalise a particular construal.

The real illusion, then, is not that AI seems intelligent when it is not. It is the belief that we are merely describing intelligence, rather than participating in its ongoing construction as a category.


Dual stabilisations

Once intelligence is understood as construal, the familiar polarisation around AI comes into focus.

On one side, catastrophic inflation: AI is treated as a rapidly advancing form of intelligence, with attendant risks of loss of control, displacement, or worse. This construal amplifies uncertainty into danger, coordinating precaution, centralisation, and urgency.

On the other, deflationary dismissal: AI is reduced to “just a tool,” its apparent capacities explained away as parameterised computation. This construal stabilises continuity, enabling rapid integration while muting calls for conceptual or institutional change.

These are not simply disagreements about facts. They are competing ways of organising social coordination under conditions of novelty.

Each is coherent. Each is partial. Each forecloses as much as it enables.


Parameters, and what they do not determine

The claim that AI systems are “defined by the parameters we set” is, in this light, revealing.

Of course, systems are designed. Architectures are specified; training regimes are constructed. But the behaviours that later become salient—generalisation, recombination, linguistic fluency—are not explicitly encoded. They emerge within the constraints, but are not reducible to them in any straightforward sense.

To point this out is not to mystify the system. It is simply to recognise that constraint does not exhaust behaviour, and that the space between design and performance is precisely where new construals become necessary.

To collapse that space under the heading of “just parameters” is less an explanation than a refusal to engage with what has appeared.


What is being coordinated

The question, then, is not whether AI is intelligent. It is:

What follows from treating it as such—or from refusing to do so?

Construing AI as intelligent:

  • opens space for rethinking labour, expertise, and authorship,
  • demands new forms of governance,
  • and redistributes trust across human and non-human systems.

Construing it as “just a tool”:

  • preserves existing institutional arrangements,
  • anchors responsibility firmly in human actors,
  • and enables rapid deployment without deep conceptual revision.

Neither stance is neutral. Each helps to bring about the world it presupposes.


Beyond the hunt

The persistent hunt for “real intelligence” in AI is thus a category error. It seeks a property where there is, instead, a field of relational effects structured by construal.

This does not mean that anything goes, or that all descriptions are equally useful. Some construals will prove more adequate than others—more capable of accommodating emergent behaviour, more responsive to breakdown, more precise in coordinating action.

But adequacy will not be achieved by deciding, once and for all, whether AI “really is” intelligent.

It will emerge through ongoing adjustment of how we construe, act, and revise in relation to these systems.


The final inversion

The brief in Nature aims to puncture an illusion: that AI possesses human-like intelligence.

In doing so, it installs another: that we stand outside the phenomenon, merely correcting misperceptions.

We do not.

In naming, denying, inflating, or deflating “AI intelligence,” we are not describing a fixed reality. We are participating in the coordination of what that reality becomes.

The question is no longer what AI is.

It is what follows from how we choose to say it is.
