AI systems do not hallucinate in any ordinary sense. There is no perceptual failure, no misfiring neuron, no broken rule. Each transition — each token, each prediction, each probability — is locally impeccable. The system is doing exactly what it is designed to do.
And yet, from the perspective of a human world, it can produce outputs that seem false, inconsistent, or invented. Citations appear where none exist. Explanations misfire. Facts dissolve.
This is not error. It is a structural effect.
Local Lawfulness, Global Non-Integrability
The phenomenon is exactly analogous to what we described in cosmology when discussing dark matter and dark energy. There, equations work perfectly at every point, observations are stable, but the global model fails to close. Physicists respond by positing an invisible substance — dark matter — to patch the gap between lawfulness and worldhood.
In AI, the “hallucination” label functions in a similar way. We project the expectation of a globally coherent reference system onto a locally lawful, Escher-like artefact. When the projection fails, we imagine the system is “making things up.”
What actually fails is not the system, but the world we tacitly assume it inhabits.
Why This Matters
The term hallucination obscures the structural truth. These outputs are symptoms of a non-integrable reference space. They arise because:
- The system is precise and coherent within local frames.
- The system has no mechanism to enforce global reference, persistence, or worldhood.
- Our expectations treat local fluency as sufficient for inhabitation.
Nothing goes wrong. Something goes too right. Local lawfulness is maximised, but global inhabitation — the condition that would make outputs “real” in a world — is absent.
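To make the structural point concrete, here is a minimal sketch, assuming nothing more than a toy bigram sampler whose vocabulary and probabilities are invented for illustration. It is not a model of any real system; it only shows the shape of the situation: each step draws from a well-formed conditional distribution, so every transition is locally lawful, yet no step ever consults the world the output appears to describe.

```python
import random

# Toy autoregressive "language model": a hand-written bigram table.
# Every local step is lawful: the distributions are well-formed and each
# continuation is sampled from a valid conditional probability. Nothing in
# the loop consults an external world, so nothing can enforce global
# reference, persistence, or consistency.
BIGRAMS = {
    "<s>":      {"the": 0.6, "a": 0.4},
    "the":      {"paper": 0.5, "citation": 0.5},
    "a":        {"paper": 0.5, "citation": 0.5},
    "paper":    {"shows": 0.7, "claims": 0.3},
    "citation": {"shows": 0.7, "claims": 0.3},
    "shows":    {"<e>": 1.0},
    "claims":   {"<e>": 1.0},
}

def sample_next(token: str) -> str:
    """Locally lawful step: sample from a proper conditional distribution."""
    dist = BIGRAMS[token]
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs, k=1)[0]

def generate(max_len: int = 10) -> list[str]:
    """Chain locally lawful steps. No step checks whether the resulting
    sequence refers to anything that exists."""
    out, token = [], "<s>"
    for _ in range(max_len):
        token = sample_next(token)
        if token == "<e>":
            break
        out.append(token)
    return out

if __name__ == "__main__":
    # e.g. "the citation shows" -- fluent, and referring to nothing.
    print(" ".join(generate()))
```

The output can read like a claim about a paper or a citation while referring to nothing at all; any global closure would have to be imposed from outside the loop, because nothing inside it is even shaped to ask the question.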
From Cosmology to Computation
In both AI and cosmology, we see a common pattern:
- Local constraints are satisfied.
- Global closure fails.
- We interpret the gap as an unseen entity or a failure in the system.
The productive insight is that neither AI nor the cosmos is malfunctioning. The “dark artefacts” — hallucinations, dark matter, dark energy — are relational artefacts of a mistaken ontology. They reveal what happens when lawful structures are misread as worlds.
Orientation, Not Correction
Recognising this does not fix AI, and it does not render dark matter irrelevant. It does, however, allow a more precise orientation:
- We can see hallucinations as predictable structural effects.
- We can resist projecting agency or error where none exists.
- We can better understand the limits of local lawfulness and the conditions required for global inhabitation.
In other words, hallucinations are neither bugs nor mysteries. They are the shadows of a system that works too well inside frames it cannot unify. Like Escher’s staircases, they are precise beyond inhabitation — and profoundly instructive.