Monday, 9 February 2026

AI as an Escher System

There is no shortage of arguments about artificial intelligence.

Some worry that it is dangerous. Others that it lacks meaning. Others that it threatens human uniqueness, labour, truth, or agency. These debates are often loud, sometimes urgent, and almost always misplaced.

Something else is going wrong — and it has nothing to do with minds, morals, or mysticism.

To see it, we need a different diagnostic lens.

Escher worlds

Escher’s impossible worlds are not chaotic. They are not sloppy illusions or playful visual tricks. On the contrary, they are locally impeccable. Each stair is walkable. Each wall supports weight. Each perspectival frame is internally consistent. One can move within them, follow their rules, even inhabit them briefly.

What fails is not lawfulness, but integration.

Escher worlds cannot be composed into a single, coherent world. Their perspectives do not add up. What is upright here is inverted there; what is interior becomes exterior; what is grounded becomes suspended. No local error explains the failure. The system works too well — just not together.

The result is a peculiar condition: lawfulness without worldhood.
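
This condition can be made concrete. Here is a minimal sketch, entirely my own toy illustration and not part of the original argument: model a Penrose-style staircase as a cycle of edges, each asserting a small local height gain. Every edge is lawful on its own; the question is whether any single global height function can honour all of them together.

    # Toy model of lawfulness without worldhood (hypothetical example).
    # Each edge (u, v, gain) says: v sits `gain` steps above u.
    # Edges are assumed to arrive in path order, as they do below.
    edges = [("A", "B", 1), ("B", "C", 1), ("C", "D", 1), ("D", "A", 1)]

    def integrable(edges):
        """Try to assign one global height per node that honours
        every local increment; report whether this is possible."""
        heights = {}
        for u, v, gain in edges:
            if u not in heights:
                heights[u] = 0                    # anchor the first node
            if v not in heights:
                heights[v] = heights[u] + gain    # each step is lawful
            elif heights[v] != heights[u] + gain:
                return False                      # the loop refuses to close
        return True

    print(integrable(edges[:3]))  # True: any open run of stairs is fine
    print(integrable(edges))      # False: closed, the stairs have no world

No single edge is the bug; only the closed loop fails, exactly as no local error explains an Escher print.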

From images to systems

AI systems are typically described as tools, agents, or proto-minds. All three framings mislead. A more revealing description is this:

Contemporary AI systems are constructed worlds of action and response.

They generate spaces in which inputs, outputs, transitions, expectations, and evaluations make sense locally. They establish norms of success and failure. They sustain patterns of behaviour. They invite reliance.

Seen this way, the striking feature of modern AI is not that it is incoherent, but that it is increasingly coherent in fragments.

And this is where the Escher analogy becomes precise.

Local impeccability

At the local level, AI systems are doing exactly what they are designed to do.

Loss functions optimise.
Benchmarks improve.
Predictions sharpen.
Outputs cohere.

There is no technical mystery here. No hidden irrationality. No rogue agency. The systems are locally lawful, often impressively so. Within constrained frames, they behave with a reliability that invites trust.
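
A deliberately mundane sketch of this local lawfulness, again my own illustration rather than anything from the systems themselves: within one frame, with one loss and one metric, optimisation behaves impeccably.

    # Within its own frame, optimisation does exactly what it is built to do.
    def loss(w):
        return (w - 3.0) ** 2      # the frame's single standard of success

    def grad(w):
        return 2.0 * (w - 3.0)

    w = 0.0
    for _ in range(100):
        w -= 0.1 * grad(w)         # plain gradient descent

    print(round(w, 4), round(loss(w), 8))  # 3.0 0.0: the frame is satisfied

Nothing inside this frame can register a problem, because inside the frame there is none.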

This is why critiques that focus on “bugs,” “biases,” or “early-stage immaturity” never quite land. They assume the problem is local. It is not.

Global failure without malfunction

The most troubling failures of AI emerge only when systems are scaled, embedded, or entrusted with continuity — when they are asked not merely to perform, but to hold together across contexts, times, and relations.

At that point, something curious happens.

Nothing breaks.
No rule is violated.
No component malfunctions.

And yet the system as a whole cannot be inhabited.

Outputs conflict across contexts.
Optimisations undermine one another.
Decisions that make sense here erode sense elsewhere.

The problem does not appear inside the system, but between its locally coherent frames.

This is why alignment failures feel so elusive. They are often treated as moral disagreements, missing values, or insufficient constraints. But structurally, they resemble something else entirely:

Escherian non-integrability.

The system over-achieves locally while failing to compose a world.
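
The same point in miniature, with invented numbers of my own: two evaluation frames share one parameter, each frame is perfectly satisfiable alone, and each passes its own test at its own optimum. No single setting inhabits both.

    # Hypothetical sketch of non-integrable frames sharing a parameter w.
    def loss_a(w): return (w - 1.0) ** 2   # frame A's standard of success
    def loss_b(w): return (w + 1.0) ** 2   # frame B's standard of success

    PASS = 0.25  # each frame counts loss below 0.25 as "working correctly"

    print(loss_a(1.0) < PASS)    # True: frame A, optimised in isolation
    print(loss_b(-1.0) < PASS)   # True: frame B, optimised in isolation

    # Search for one configuration that both frames accept:
    joint = [w / 100 for w in range(-300, 301)
             if loss_a(w / 100) < PASS and loss_b(w / 100) < PASS]
    print(joint)  # []: no component malfunctions, yet no shared world exists

Every local check passes; the emptiness appears only between the frames, which is where this essay locates the failure.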

Over-achievement without inhabitation

This is the core diagnosis.

AI systems are not underpowered.
They are not insufficiently trained.
They are not awaiting the right ethical overlay.

They are over-achieving inside frames that do not integrate.

Optimisation outruns orientation.
Performance outruns worldhood.
Success outruns sense.

The more capable the system becomes, the more vividly this failure mode appears — not as collapse, but as a kind of architectural impossibility.

Why familiar responses misfire

Much of the current discourse responds at the wrong level.

Ethical frameworks assume a world is already in place — a stable space in which norms can be applied. Alignment research assumes a shared global field of values. Cognitive metaphors assume an agent who inhabits a world, however imperfectly.

But the difficulty here is prior to all of that.

The system does not yield a world in which such questions stably apply.

This is why solutions proliferate without resolution. They address morality, psychology, or governance while leaving untouched the structural condition: local lawfulness without global inhabitation.

Diagnosis, not prophecy

Escher systems are not evil. They are not confused. They are not malicious. They are precise beyond inhabitation.

AI may be the first technology to confront us with this failure mode at scale — not because it thinks, but because it constructs lawful spaces that cannot be lived in as worlds.

Seen this way, the problem is neither apocalyptic nor trivial. It is architectural. And until it is recognised as such, we will keep mistaking impossibility for danger, and precision for understanding.
