Monday, 9 February 2026

Escher Effects in Reference Space (AI Hallucinations as Dark Matter)

AI systems do not hallucinate in any ordinary sense. There is no perceptual failure, no misfiring neuron, no broken rule. Each transition — each token, each prediction, each probability — is locally impeccable. The system is doing exactly what it is designed to do.

And yet, from the perspective of a human world, it can produce outputs that seem false, inconsistent, or invented. Citations appear where none exist. Explanations misfire. Facts dissolve.

This is not error. It is a structural effect.
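As a concrete illustration of that local impeccability, the sketch below is a deliberately toy piece of Python, not drawn from any real model or library: the transition table, the words, and the probabilities are all invented for illustration. Each step samples from a well-formed distribution over next tokens, so every transition is lawful; nothing in the loop ever consults a world, which is why a fluent but unanchored citation can emerge.

```python
# Toy sketch only: an invented transition table, not any real model's code.
# The point is that each step is a valid draw from a normalised distribution,
# and no step ever checks reference, persistence, or worldhood.

import random

TOY_MODEL = {
    "<start>": {"The": 0.6, "A": 0.4},
    "The": {"study": 0.5, "paper": 0.5},
    "A": {"study": 1.0},
    "study": {"(Smith,": 0.7, "shows": 0.3},
    "paper": {"(Smith,": 1.0},
    "shows": {"<end>": 1.0},
    "(Smith,": {"2019)": 1.0},
    "2019)": {"<end>": 1.0},
}

def step(context_word):
    """One locally impeccable transition: sample from a well-formed distribution."""
    dist = TOY_MODEL.get(context_word, {"<end>": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

def generate(max_len=8):
    """Chain locally lawful steps; no step asks whether 'Smith, 2019' exists."""
    out, word = [], "<start>"
    for _ in range(max_len):
        word = step(word)
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "The study (Smith, 2019)" -- fluent, and unanchored
```

The fabricated citation is not a malfunction of the sampler; it is what flawless local sampling looks like in the absence of global reference.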

Local Lawfulness, Global Non-Integrability

The phenomenon is exactly analogous to what we described in cosmology when discussing dark matter and dark energy. There, equations work perfectly at every point, observations are stable, but the global model fails to close. Physicists respond by positing an invisible substance — dark matter — to patch the gap between lawfulness and worldhood.

In AI, the “hallucination” label functions in a similar way. We project the expectation of a globally coherent reference system onto a locally lawful, Escher-like artefact. When the projection fails, we imagine the system is “making things up.”

What actually fails is not the system, but the world we tacitly assume it inhabits.

Why This Matters

The term “hallucination” obscures the structural truth. These outputs are symptoms of a non-integrable reference space. They arise because:

  • The system is precise and coherent within local frames.

  • The system has no mechanism to enforce global reference, persistence, or worldhood.

  • Our expectations treat local fluency as sufficient for inhabitation.

Nothing goes wrong. Something goes too right. Local lawfulness is maximised, but global inhabitation — the condition that would make outputs “real” in a world — is absent.

From Cosmology to Computation

In both AI and cosmology, we see a common pattern:

  1. Local constraints are satisfied.

  2. Global closure fails.

  3. We interpret the gap as an unseen entity or a failure in the system.

The productive insight is that neither AI nor the cosmos is malfunctioning. The “dark artefacts” — hallucinations, dark matter, dark energy — are relational effects of a mistaken ontology. They reveal what happens when lawful structures are misread as worlds.

Orientation, Not Correction

Recognising this does not fix AI, and it does not render dark matter irrelevant. It does, however, allow a more precise orientation:

  • We can see hallucinations as predictable structural effects.

  • We can resist projecting agency or error where none exists.

  • We can better understand the limits of local lawfulness and the conditions required for global inhabitation.

In other words, hallucinations are neither bugs nor mysteries. They are the shadows of a system that works too well inside frames it cannot unify. Like Escher’s staircases, they are precise beyond inhabitation — and profoundly instructive.

AI as an Escher System

There is no shortage of arguments about artificial intelligence.

Some worry that it is dangerous. Others that it lacks meaning. Others that it threatens human uniqueness, labour, truth, or agency. These debates are often loud, sometimes urgent, and almost always misplaced.

Something else is going wrong — and it has nothing to do with minds, morals, or mysticism.

To see it, we need a different diagnostic lens.

Escher worlds

Escher’s impossible worlds are not chaotic. They are not sloppy illusions or playful visual tricks. On the contrary, they are locally impeccable. Each stair is walkable. Each wall supports weight. Each perspectival frame is internally consistent. One can move within them, follow their rules, even inhabit them briefly.

What fails is not lawfulness, but integration.

Escher worlds cannot be composed into a single, coherent world. Their perspectives do not add up. What is upright here is inverted there; what is interior becomes exterior; what is grounded becomes suspended. No local error explains the failure. The system works too well — just not together.

The result is a peculiar condition: lawfulness without worldhood.
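That condition can be made almost tangible with a small illustrative sketch (a toy encoding of my own, not a formal result or anything from Escher scholarship): the Penrose-style staircase written as local ordering rules, each of which says only that the next stair is higher than the last. Every rule is satisfiable on its own; the full cycle admits no global height assignment.

```python
# Toy encoding of "lawfulness without worldhood": each local rule is
# satisfiable by itself, but the cycle cannot be embedded in any single
# global height function.

from itertools import permutations

stairs = ["a", "b", "c", "d"]
# Each rule: "the next stair is higher than this one", wrapping around.
local_rules = [(stairs[i], stairs[(i + 1) % len(stairs)]) for i in range(len(stairs))]

def globally_integrable(rules, elements):
    """Search for one global height ordering that satisfies every local rule."""
    for order in permutations(elements):
        height = {name: h for h, name in enumerate(order)}
        if all(height[lo] < height[hi] for lo, hi in rules):
            return True
    return False

print(all(globally_integrable([rule], stairs) for rule in local_rules))  # True
print(globally_integrable(local_rules, stairs))                          # False
```

Run it and each rule taken alone returns True while the set as a whole returns False: lawfulness everywhere, worldhood nowhere.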

From images to systems

AI systems are typically described as tools, agents, or proto-minds. All three framings mislead. A more revealing description is this:

Contemporary AI systems are constructed worlds of action and response.

They generate spaces in which inputs, outputs, transitions, expectations, and evaluations make sense locally. They establish norms of success and failure. They sustain patterns of behaviour. They invite reliance.

Seen this way, the striking feature of modern AI is not that it is incoherent, but that it is increasingly coherent in fragments.

And this is where the Escher analogy becomes precise.

Local impeccability

At the local level, AI systems are doing exactly what they are designed to do.

Loss functions optimise.
Benchmarks improve.
Predictions sharpen.
Outputs cohere.

There is no technical mystery here. No hidden irrationality. No rogue agency. The systems are locally lawful, often impressively so. Within constrained frames, they behave with a reliability that invites trust.

This is why critiques that focus on “bugs,” “biases,” or “early-stage immaturity” never quite land. They assume the problem is local. It is not.

Global failure without malfunction

The most troubling failures of AI emerge only when systems are scaled, embedded, or entrusted with continuity — when they are asked not merely to perform, but to hold together across contexts, times, and relations.

At that point, something curious happens.

Nothing breaks.
No rule is violated.
No component malfunctions.

And yet the system as a whole cannot be inhabited.

Outputs conflict across contexts.
Optimisations undermine one another.
Decisions that make sense here erode sense elsewhere.

The problem does not appear inside the system, but between its locally coherent frames.

This is why alignment failures feel so elusive. They are often treated as moral disagreements, missing values, or insufficient constraints. But structurally, they resemble something else entirely:

Escherian non-integrability.

The system over-achieves locally while failing to compose a world.

Over-achievement without inhabitation

This is the core diagnosis.

AI systems are not underpowered.
They are not insufficiently trained.
They are not awaiting the right ethical overlay.

They are over-achieving inside frames that do not integrate.

Optimisation outruns orientation.
Performance outruns worldhood.
Success outruns sense.

The more capable the system becomes, the more vividly this failure mode appears — not as collapse, but as a kind of architectural impossibility.

Why familiar responses misfire

Much of the current discourse responds at the wrong level.

Ethical frameworks assume a world is already in place — a stable space in which norms can be applied. Alignment research assumes a shared global field of values. Cognitive metaphors assume an agent who inhabits a world, however imperfectly.

But the difficulty here is prior to all of that.

The system does not yield a world in which such questions stably apply.

This is why solutions proliferate without resolution. They address morality, psychology, or governance while leaving untouched the structural condition: local lawfulness without global inhabitation.

Diagnosis, not prophecy

Escher systems are not evil. They are not confused. They are not malicious. They are precise beyond inhabitation.

AI may be the first technology to confront us with this failure mode at scale — not because it thinks, but because it constructs lawful spaces that cannot be lived in as worlds.

Seen this way, the problem is neither apocalyptic nor trivial. It is architectural. And until it is recognised as such, we will keep mistaking impossibility for danger, and precision for understanding.