Wednesday, 4 February 2026

Explanatory Strain and the Two-Slit Experiment: 4 Instantiation Without Mystery

The previous posts have cleared the ground. We have seen why talk of particles “knowing” things is a symptom of explanatory strain, how appeals to past and future electrons illicitly temporalise what is not a temporal relation, and why interference patterns are not built, coordinated, or produced by events acting together across time.

What remains is the positive account. If patterns are not effects, and if events do not coordinate to produce them, then what is the relation between a single electron detection and the interference pattern it exemplifies?

The missing concept is instantiation.

In the relational ontology developed in our recent work, a system is not first a collection of things that later happens to display regularities. A system is a structured potential: a theory of possible instances defined relative to a particular construal. The experimental arrangement—the slits, their geometry, the detection screen—does not merely provide a backdrop against which electrons behave. It defines the space of possible outcomes that any detection can actualise.

On this view, what is often called the “wavefunction” is not a physical object evolving in time, nor a mysterious entity that somehow accompanies each particle. It is a formal description of the structured potential associated with a given experimental construal. It specifies, in abstract terms, how possible detection events are distributed across the available space.
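To give a concrete sense of what "specifying a distribution" means here, the far-field (Fraunhofer) form of the two-slit detection density is a standard textbook result. The symbols below are the usual ones — slit separation d, slit width a, wavelength λ, screen distance L — and are introduced here for illustration, not drawn from the passage above:

```latex
% Far-field detection density at screen position x:
P(x) \;\propto\;
\underbrace{\cos^{2}\!\left(\frac{\pi d x}{\lambda L}\right)}_{\text{two-slit factor}}
\;\cdot\;
\underbrace{\operatorname{sinc}^{2}\!\left(\frac{\pi a x}{\lambda L}\right)}_{\text{single-slit envelope}}
```

With one slit closed, the cosine-squared factor is simply absent, leaving only the single-slit envelope. The geometry of the apparatus fixes this density before any electron is fired — which is exactly the sense in which the number of slits is "built into" the structured potential.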

An individual electron detection is an instance of that potential. It is not caused by the wavefunction, nor does it emerge from it as a later stage of a process. Rather, it is a perspectival cut: the point at which the system-as-theory is actualised as a system-as-instance.

This cut is not something that happens in time in addition to the detection event. It just is the detection event, viewed as an instance of a theory-relative structure. Nothing flows, collapses, or propagates. There is no need to imagine information being consulted or decisions being made. The relation between potential and instance is constitutive, not causal.

Once instantiation is understood in this way, the so-called “central mystery” of the two-slit experiment dissolves. Each electron does not need to know how many slits are open, because the number of slits is already built into the structured potential that defines what counts as a possible detection. Each electron does not need to know what other electrons have done, because no detection ever responds to any other. And there is no influence from the future, because nothing about the explanation depends on temporal order at all.

What appears mysterious only does so because we persist in asking the wrong kind of question. We ask how events manage to coordinate themselves so as to produce a pattern, when we should be asking how events come to count as instances of a structured situation in the first place.

Seen through this lens, the two-slit experiment is not a demonstration of particles behaving strangely, nor of reality defying common sense. It is a demonstration of the limits of an ontology that recognises only events and processes, and has no place for structured potential as a first-class explanatory category.

The interference pattern does not need to be explained as something that happens. It needs to be recognised as something that is presupposed by every detection event as a condition of its intelligibility. Once that shift is made, the anthropomorphic metaphors fall away, the temporal paradoxes evaporate, and the experiment loses its air of mystery—without losing any of its significance.

What remains is not an enigma, but a demand: that we take seriously the relations between theories and instances, between construal and phenomenon, and between potential and actualisation. The two-slit experiment does not force us into mysticism. It forces us to get our ontology right.

Explanatory Strain and the Two-Slit Experiment: 3 Patterns Are Not Built

The lingering intuition behind many explanations of the two-slit experiment is that interference patterns must somehow be produced by electrons acting together across time. If electrons arrive one by one, the thought goes, then the pattern must be gradually constructed out of their accumulated impacts. Something must be coordinating those impacts so that, taken together, they trace the familiar bands.

This picture is deeply intuitive—and deeply mistaken.

To see why, it helps to ask a deceptively simple question: what kind of thing is an interference pattern?

An interference pattern is not an event. It does not occur at a moment, nor does it have a location independent of the detections from which it is inferred. Nor is it a process unfolding in time. Nothing happens to a pattern as electrons are detected. What happens are detections; the pattern is a way of characterising their distribution.

This distinction matters. Events happen. Processes unfold. Patterns, by contrast, are recognised.

When we say that an interference pattern “emerges” as electrons strike the screen, we are not describing something coming into being in the world. We are describing a change in what is visible to us as observers with access to many instances. With only a handful of detections, the distribution is opaque; with many, it becomes legible. The world has not changed in kind. Our construal has.

The temptation to think of patterns as built is reinforced by everyday examples where accumulation genuinely produces a new object: a wall built brick by brick, a heap formed grain by grain. In such cases, each contribution alters the state of the system. The later state depends causally on the earlier ones. But electron detections do not stand in this relation to an interference pattern. No detection alters the conditions governing subsequent detections. Nothing about the experimental arrangement is modified by an electron striking the screen.

This is why talk of coordination is misplaced. Coordination presupposes interaction: signals exchanged, adjustments made, constraints updated. None of this occurs in the two-slit experiment. There is no mechanism by which electrons could coordinate even if we wanted to posit one, because there is nothing for them to coordinate with. Each detection is governed by the same constraints, regardless of what has happened before.
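The independence claim can be made vivid with a toy simulation. In the sketch below (Python, with a made-up intensity profile standing in for the real physics), every detection is a fresh, independent draw from the same fixed density; no draw consults or alters any other:

```python
import math
import random

def intensity(x, two_slits=True):
    """Toy detection density on the screen (arbitrary units).

    The cos^2 interference factor is present only when both slits
    are open; the Gaussian factor stands in for the single-slit
    envelope. The numbers are illustrative, not physical.
    """
    envelope = math.exp(-x * x)
    return envelope * (math.cos(6 * x) ** 2 if two_slits else 0.5)

def detect(rng, two_slits=True):
    """One detection: an independent rejection-sampling draw from
    the fixed density. Nothing about this draw depends on any
    earlier or later draw."""
    while True:
        x = rng.uniform(-3, 3)
        if rng.random() < intensity(x, two_slits):
            return x

rng = random.Random(0)
hits = [detect(rng) for _ in range(20000)]

# Bin the detections: the 'pattern' is just this histogram,
# a description of the distribution, not a thing that was built.
bins = [0] * 12
for x in hits:
    bins[min(11, int((x + 3) / 0.5))] += 1
```

The point of the sketch is structural: deleting any single detection, or reordering all of them, leaves the density being sampled untouched, because no detection ever modifies the constraints governing the next.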

Once this is recognised, a further confusion comes into view. The idea that patterns are produced encourages us to treat the interference pattern as an effect for which electrons are jointly responsible. But electrons are not jointly responsible for anything here. Each detection stands alone. The pattern is not an outcome of their cooperation; it is a description of their collective distribution.

Put bluntly: electrons do not make patterns. Patterns make electrons intelligible.

This reversal is difficult to accept because it runs against a deeply ingrained explanatory habit. We are used to explaining global regularities by appealing to local interactions. When that strategy fails, as it does here, we are tempted to invent exotic interactions—non-local influence, retrocausation, hidden communication—to rescue the habit. The rhetoric of mystery thrives in this gap.

But the failure lies not in the phenomena, and not in the physics. It lies in the assumption that explanation must proceed from events to patterns, rather than from structured constraints to instances.

Once we let go of the idea that patterns are built or produced, the two-slit experiment loses much of its air of paradox. There is no longer any need to imagine electrons coordinating across time, consulting histories, or anticipating futures. There remains only a question that has so far been left implicit: what kind of thing is the structure that governs these distributions, if it is neither an event nor a process?

Answering that question requires a shift in how we think about systems and instances, potential and actualisation. In the next post, we will argue that the missing concept is instantiation itself—not as a process unfolding in time, but as the relation that makes individual events intelligible as instances of a structured potential.

Explanatory Strain and the Two-Slit Experiment: 2 One Electron at a Time: How Temporality Gets Smuggled In

In the passage quoted in the previous post, the most striking claim is not that electrons somehow “know” the experimental configuration. It is that each electron is said to know what happened to the electrons that went before it and the ones that would come after it.

This appeal to past and future electrons is doing crucial explanatory work. It is also doing illicit work.

At first glance, the move seems harmless. After all, the interference pattern only becomes visible once many electrons have been detected. It is tempting to describe the pattern as something that is gradually built up over time, and then to treat that temporal accumulation as something each electron must in some sense be responding to. But this temptation rests on a quiet shift in explanatory focus: from events to distributions, and then back again.

A single electron detection is a local event. It occurs at a particular time and at a particular location on the detection screen. It has no duration beyond that event, and no access to any other event. Nothing about the physics of the situation licenses the idea that one detection is influenced by another, let alone by detections that have not yet occurred.

The interference pattern, by contrast, is not an event at all. It is a statistical regularity visible only when many detection events are considered together. Crucially, it is not located at any particular moment in time. The pattern is invariant across trials, not produced by their temporal order.

When Gribbin invokes earlier and later electrons, these two distinct kinds of thing—events and distributions—are tacitly conflated. The stability of the distribution across many runs is redescribed as though it were a temporal relation between individual electrons. Past detections are treated as if they exert an influence, and future detections as if they are somehow already in play.

This is where the sense of paradox intensifies. If electrons are truly fired one at a time, how could a later electron be affected by an earlier one? And how could it possibly be affected by electrons that have not yet been detected? The experiment begins to look as though it requires retrocausation, memory, or some kind of non-local coordination across time.

But none of this follows unless we first accept the mistaken premise that the interference pattern is something produced by electrons acting across time.

The pattern is not built up in the way a heap of sand is built grain by grain. Nothing accumulates in the world as electrons strike the screen, except marks on a detector. What accumulates is our evidence of a distribution that was already defined by the experimental arrangement.

To put the point sharply: no single electron ever contributes to an interference pattern. The pattern is not an effect to which electrons add their share. It is a property of the experimental construal that governs how electron detections are distributed across many instances.

Once this distinction is kept firmly in view, the appeal to past and future electrons loses its grip. There is no need to imagine electrons consulting a history of previous events or anticipating events yet to come. Each detection is constrained in exactly the same way, regardless of when it occurs, because the constraint does not operate through time.

The illicit temporalisation of the explanation arises from treating iteration as interaction. Repeating the same experiment many times is mistaken for a process unfolding in time, rather than for the repeated instantiation of the same structured situation. Temporal order becomes explanatorily salient only because the pattern is mischaracterised as something that emerges from the sequence itself.
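The claim that iteration is not interaction is, in statistical terms, the claim that the sequence of detections is exchangeable: permuting the trials changes nothing about the distribution they instantiate. A minimal sketch, in which the band labels and weights are invented for illustration:

```python
import random
from collections import Counter

# Model each trial as an independent draw from the same fixed
# distribution over screen positions (three coarse bands with
# made-up weights standing in for an interference profile).
BANDS = ["left", "centre", "right"]
WEIGHTS = [0.25, 0.5, 0.25]  # illustrative, not physical

rng = random.Random(42)
sequence = rng.choices(BANDS, weights=WEIGHTS, k=10000)

# The 'pattern' is the multiset of outcomes, not their order.
# Shuffling the temporal sequence of trials leaves it unchanged:
shuffled = sequence[:]
rng.shuffle(shuffled)
assert Counter(sequence) == Counter(shuffled)
```

The histogram depends only on which outcomes occurred, never on when; temporal order carries no information that the pattern uses. That is what it means to say the constraint does not operate through time.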

This confusion sets the stage for the most persistent misreading of the two-slit experiment: the idea that electrons must somehow coordinate their behaviour across time in order to produce the observed result. In the next post, we will argue that this coordination picture is not merely unnecessary, but conceptually incoherent. Patterns do not need to be built, coordinated, or produced. They need only to be instantiated.

Explanatory Strain and the Two-Slit Experiment: 1 Why Do Particles ‘Know’ Things?

“Once again, the electrons ‘knew’ how many slits were open … Each electron seemed to ‘know’ not only what the exact experimental set-up was at the time it made its flight through the apparatus, but also what happened to the electrons that went before it and the ones that would come after it.”
— John Gribbin, Six Impossible Things

It is worth lingering over this sentence, because an extraordinary amount of conceptual work is being done by a single word: knew.

Gribbin’s use of the term is clearly metaphorical. Electrons do not possess minds, memories, or beliefs. Yet the metaphor is not ornamental. It is explanatory. It is introduced at precisely the point where ordinary physical description runs out, and where something must bridge the gap between two claims that Gribbin wishes to hold simultaneously: that electrons are detected one at a time, and that their detections nevertheless conform to a stable interference pattern determined by the experimental configuration.

The word knew functions as that bridge. It allows Gribbin to gesture at a relation between a single detection event and a global experimental structure without specifying what kind of relation this is. In doing so, it quietly imports a familiar cognitive schema—access to information, awareness of conditions, sensitivity to alternatives—into a domain where none of those notions properly belong.

To see how much is being smuggled in, consider what it would actually mean for an electron to know how many slits were open.

Knowing, even in its weakest everyday sense, presupposes at least three things:

  1. A distinction between alternatives — there must be something it is possible to be wrong about.

  2. Access to a comparison class — the knower must be situated such that different possibilities can, in principle, be discriminated.

  3. A standpoint from which conditions are apprehended — knowledge is always knowledge for someone or something.

None of these conditions can be made sense of at the level of a single electron detection. There is no standpoint, no access, no discrimination of alternatives. There is only an event: a localised detection at a particular position on a screen.

Why, then, does the metaphor feel so natural?

The answer lies in the specific explanatory pressure created by the two-slit experiment. When electrons are fired one at a time, there is a strong temptation to treat each detection as an isolated occurrence whose properties must be explained solely in terms of what happens at that moment. But the observed distribution of detections is not arbitrary. It is constrained by the entire experimental arrangement: the number of slits, their geometry, and the measurement set-up as a whole.

Gribbin’s sentence attempts to register this constraint. The problem is that, lacking a clear account of how a single event can legitimately be described as constrained by a global structure, the explanation slips into anthropomorphic terms. The electron is said to know the configuration, because knowing is the everyday concept we use when an individual outcome reliably tracks a larger set of conditions.

This move has several immediate consequences.

First, it recasts structural constraint as epistemic access. Instead of asking how the experimental configuration defines the space of possible outcomes, we are invited to imagine an electron that somehow consults that configuration before deciding where to land.

Second, it introduces a temporal distortion. Gribbin does not merely say that the electron knows the current set-up; he says it knows about electrons that came before and electrons that will come after. Here the metaphor does even heavier lifting. What is really being gestured at is the stability of a distribution across many trials, but that stability is redescribed as though it were the result of memory and anticipation on the part of each individual event.

Finally, the metaphor reverses the direction of explanation. Instead of the interference pattern being understood as a property of the experimental construal that governs individual detections, the pattern begins to look like something produced by electrons somehow coordinating their behaviour across time.

At this point, the familiar rhetoric of mystery takes hold. How could a single electron possibly know all this? How could it be influenced by events that have not yet occurred? The sense of paradox is real—but it is a paradox generated by the metaphor itself.

In the posts that follow, we will argue that nothing in the two-slit experiment requires us to attribute knowledge, memory, or foresight to electrons. The appearance that it does arises from a deeper confusion about how patterns relate to events, and about how potential structures are instantiated in particular cases. Once those confusions are brought into view, the metaphor of knowing can be set aside—not because it is poetic, but because it is doing the wrong kind of explanatory work.

Explanatory Strain and the Two-Slit Experiment: Introduction

The two-slit experiment is often described as the “central mystery” of quantum physics: not because its experimental results are in doubt, but because explaining those results places extraordinary pressure on our ordinary ways of talking about systems, events, and causation. Where explanation strains, metaphor rushes in to compensate.

This short series takes as its point of departure a familiar passage from John Gribbin’s Six Impossible Things, in which single electrons passing through a two-slit apparatus are said to “know” how many slits are open, what electrons have done before them, and what electrons will do after them. Gribbin is not being careless. On the contrary, the passage is representative of a widespread and well-intentioned explanatory strategy in popular (and sometimes professional) accounts of quantum phenomena. Precisely for that reason, it is philosophically revealing.

The aim of the series is not to dispute the physics, nor to propose an alternative interpretation of quantum mechanics. Instead, it treats Gribbin’s explanation as a case study in explanatory strain: a moment where inherited metaphysical assumptions are no longer adequate to the phenomena being described, and where anthropomorphic and cognitive metaphors are pressed into service to conceal that inadequacy.

Across the posts that follow, we examine what these metaphors are doing, what conceptual work they are attempting to perform, and why they are so persistently tempting. We then show how the apparent mystery dissolves once we abandon the assumption that events must somehow build or coordinate patterns across time, and instead treat interference as the repeated instantiation of a structured potential defined by the experimental construal.

This analysis is continuous with the concerns of our recent work on relational ontology, but it is deliberately local and textually grounded. The two-slit experiment provides a compact site in which issues of potential, instantiation, temporality, and explanation converge. By staying close to a single, widely circulated explanation, the series aims to clarify not only what goes wrong in popular accounts of quantum phenomena, but why those accounts go wrong in the first place.

Meta‑Synthesis: Relational Cuts, Theoretical Pathologies, and the Fate of Explanation

Across the series — When Physicists Talk About Reality, Relational Cuts in Modern Physics, A Theory of Theoretical Pathology, The Ontology of Explanation, From Model to World: The Vanishing of the Cut, Why Interpretations of Quantum Mechanics Never Converge, and String Theory as a Limit Case — a single structural diagnosis comes into focus.

What appears, on the surface, as a collection of domain‑specific problems is revealed as a family of related failures in how theories relate to phenomena.


1. The Central Diagnostic: The Missing Cut

At the heart of every series lies the same structural absence: the failure to maintain a cut between theory and instance.

When this cut is maintained, theories function as structured spaces of possibility — intelligible precisely because they are not the world. When it collapses, models, formalisms, and symbolic systems begin to masquerade as reality itself.

This is not a local error. It is a generative condition for entire research programmes.


2. Surrogates for Instantiation

Across domains, the loss of instantiation produces compensatory mechanisms:

  • Aesthetic criteria (beauty, elegance, naturalness)

  • Mathematical coherence and internal consistency

  • Interpretative narratives layered atop underdetermined formalisms

  • Predictive rhetoric detached from event‑anticipation

These surrogates do not fail accidentally. They succeed structurally — sustaining legitimacy, coordination, and authority in the absence of phenomena.


3. Stable Disagreement and Interpretative Proliferation

Where formalism is underdetermined relative to phenomena, disagreement does not converge.

Interpretations proliferate not because participants are confused, but because each reparcels the same unconstrained object differently. Empirical equivalence stabilises disagreement. Critique fails to land because there is nothing, structurally, for it to dislodge.

This dynamic is clearest in quantum mechanics, but it generalises widely.


4. From Explanation to Derivation

As the cut vanishes, explanation quietly collapses into derivation.

What once aimed to make phenomena intelligible becomes an exercise in formal manipulation. Intelligibility is preserved symbolically, even as contact with the world thins. The appearance of explanation remains, while its relational grounding disappears.


5. Theoretical Pathology as a Systemic Condition

The resulting pathologies are not correctable from within the system that produces them:

  • More data does not restore instantiation.

  • More mathematics does not recover phenomena.

  • Better instruments do not repair a missing cut.

Pathological theories are not wrong in the ordinary sense. They are structurally self‑maintaining.


6. String Theory as the Limit Case

String theory makes these dynamics visible in extreme form:

  • Mathematics functions as a surrogate world.

  • Internal elegance sustains authority.

  • Phenomena are indefinitely deferred.

It is not an aberration, but a clarifying magnification of patterns present elsewhere.


7. What This Work Does — and Does Not — Offer

These series do not propose new interpretations, new theories, or methodological recipes.

They offer something quieter and more demanding:

  • A way of seeing when theory has lost its world.

  • A vocabulary for diagnosing structural failure without polemic.

  • An ontology that appears not as doctrine, but as a condition of possibility for non‑pathological theory.

The aim is not reform, but intelligibility.

And in making these structures visible, the work clears conceptual space — without pretending to fill it.

String Theory as a Limit Case: 5 Lessons for Structural Diagnosis

String theory, as a limit case, illustrates several structural dynamics relevant across theoretical domains:

  1. Surrogate authority: Internal elegance, symmetry, and coherence can sustain legitimacy without empirical grounding.

  2. Symbolic reification: Mathematics can function symbolically as the world itself, with community alignment reinforcing credibility.

  3. Persistence independent of phenomena: The theory maintains coherence and influence even in the absence of instantiated predictions.

  4. Structural patterns across domains: These dynamics are observable in other theoretical contexts — physics, climate modelling, AI, and economic models — where symbolic systems can dominate relational engagement with phenomena.

Understanding string theory in this way provides a diagnostic lens: it helps identify when symbolic systems are operating as self-contained, internally authoritative structures, rather than relational theories of possible instances. Recognising these patterns allows analysts to navigate theoretical landscapes more clearly, highlighting the boundary between formalism and the world it purports to describe.

String Theory as a Limit Case: 4 Symbolic Reification

In string theory, the mathematical formalism comes to function symbolically as the world itself. The symbols, equations, and internal structures are treated as if they fully capture reality, even when direct empirical instantiation is absent.

This symbolic reification is reinforced by community dynamics. Internal coherence, technical sophistication, and shared expertise create a social environment in which the theory maintains authority and legitimacy. Confidence in the mathematics substitutes for engagement with physical phenomena, and the symbolic object becomes the primary reference point for explanation, prediction, and discourse.

Recognising this reification is crucial for structural diagnosis: string theory illustrates the extreme case in which a symbolic system sustains itself independently of its relational connection to the world, highlighting patterns of authority, persistence, and internal elegance that recur across other theoretical domains.

String Theory as a Limit Case: 3 Phenomenon Deferred

String theory exemplifies a situation where the connection to physical phenomena is indirect or deferred. Many of its predictions lie beyond current experimental reach, and empirical confirmation of its structures remains elusive.

As a result, the theory functions primarily within its symbolic and formal framework. Predictions are conditional, often relying on complex assumptions or untested parameters, and their validation is postponed indefinitely. The phenomena that the theory aims to describe are not directly instantiated, yet the theory maintains coherence, explanatory power, and community authority within its symbolic domain.

This deferral illustrates a structural dynamic: theory remains meaningful and operational even without direct engagement with phenomena. The distinction between the symbolic object and the world it purports to describe is essential for understanding string theory as a limit case of surrogate authority.

String Theory as a Limit Case: 2 Internal Elegance as Authority

In string theory, internal elegance, symmetry, and unification serve as surrogates for empirical grounding. Where direct observation or instantiation is unavailable, the formal beauty of the theory provides a source of authority, legitimacy, and intellectual prestige.

The physics community reinforces this authority: peer recognition, institutional support, and the prominence of leading researchers amplify confidence in the theory. Elegance and coherence are not merely aesthetic qualities; they function structurally to maintain credibility and coherence within the symbolic system.

This mechanism demonstrates that authority need not depend on relational engagement with phenomena. The theory’s symbolic integrity and the community’s collective valuation provide an internally sustained structure, ensuring persistence, coherence, and influence even in the absence of experimental verification.

String Theory as a Limit Case: 1 The Mathematical Object

String theory presents a highly structured and internally coherent mathematical object. Its complexity, symmetry, and elegance produce a richly interconnected framework capable of unifying diverse physical phenomena within a single formalism.

Crucially, the theory is largely untested empirically. Many of its predictions lie beyond current experimental capability, and direct instantiation of its mathematical structures in observable phenomena is absent. Despite this, the internal coherence and formal beauty confer a form of authority within the physics community.

The mathematical object itself becomes the focal point: its patterns, symmetries, and formal consistency are treated as meaningful in a way that is largely independent of direct engagement with physical instances. Recognising this distinction — between the symbolic structure and the world it purports to describe — is the first step in diagnosing string theory as a limit case of symbolic reification.

Why "Interpretations" of Quantum Mechanics Never Converge: 6 Recognition Without Prescription

The series concludes with a diagnostic perspective: quantum interpretations proliferate and persist not because of conceptual failure, but because of the structural underdetermination of the formalism.

No new interpretation is offered, nor is a prescriptive resolution attempted. The purpose is visibility: to understand why disagreement is stable, why convergence is improbable, and how interpretations function as internally coherent reparcellations of the same unconstrained mathematical object.

Recognition entails appreciating the structural dynamics:

  • Empirical equivalence ensures no interpretation is privileged by observation alone.

  • Internal coherence allows each framework to maintain authority and legitimacy.

  • Structural divergence produces stable disagreement even among experts who share the same data.

By attending to these dynamics, educators, researchers, and philosophers can engage quantum mechanics relationally: understanding the difference between formalism and narrative, between model and world, without conflating mathematical representation with phenomena. Recognition without prescription respects both the complexity of the field and the structural patterns that make the multiplicity of interpretations intelligible.

Why "Interpretations" of Quantum Mechanics Never Converge: 5 Implications for Pedagogy and Practice

The structural proliferation of quantum interpretations has significant consequences for pedagogy, research culture, and scientific communication.

  • Teaching: Students encounter multiple frameworks, each internally coherent, leading to conceptual flexibility but also potential confusion. The multiplicity becomes part of the curriculum, shaping expectations about the nature of theory and reality.

  • Research Culture: Communities often coalesce around preferred interpretations, developing shared terminology, methods, and priorities. Collaboration, citation, and authority are influenced more by interpretive alignment than empirical distinction.

  • Communication: Presenting quantum mechanics to wider audiences requires careful navigation. The interpretive landscape is often simplified or selectively emphasised, reinforcing certain narratives while marginalising others.

The key insight is that interpretive proliferation is structurally inevitable, not a consequence of poor teaching or sloppy research. Understanding this allows educators, researchers, and communicators to engage with quantum mechanics relationally: recognising the formalism, its unconstrained flexibility, and the structural patterns that produce disagreement, without treating the multiplicity as a problem in need of resolution.

Why "Interpretations" of Quantum Mechanics Never Converge: 4 The Limits of Resolution

Convergence among quantum interpretations is structurally impossible. Each interpretation is internally coherent and reproduces all empirical predictions of quantum mechanics. Because the formalism is underdetermined relative to phenomena, evidence alone cannot definitively adjudicate between interpretations.

Attempts to resolve disagreement through experiment, philosophical critique, or technical argumentation routinely encounter structural barriers:

  • Empirical equivalence: All mainstream interpretations generate the same predictions for observable phenomena. No experiment can definitively favour one over the others.

  • Internal coherence: Each interpretation maintains consistency within its conceptual framework. Challenges from alternative interpretations highlight conceptual differences rather than errors.

  • Structural incommensurability: The narratives, conceptual partitions, and emphases differ, producing stable divergence even when participants share empirical understanding.

As a result, disagreement is predictably persistent. Resolution is impossible on the terms being used, not because of intellectual failure, but because the structural conditions allow multiple, equally valid reparcellations of the formalism. Understanding these limits reframes debates: the focus shifts from seeking a final interpretation to diagnosing the mechanisms that make convergence structurally unlikely.
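The empirical equivalence above can be made concrete with a minimal sketch (our illustration, not part of the post's argument; the function name and setup are hypothetical). For a balanced Mach-Zehnder interferometer, the Born rule fixes the detection statistics; Copenhagen reads the numbers as outcome probabilities, Many-Worlds as branch weights, QBism as agent credences, yet the numbers themselves never change:

```python
import cmath
import math

def output_probs(phase):
    """Born-rule probabilities at the two outputs of a balanced
    Mach-Zehnder interferometer with relative phase `phase`
    (textbook physics; the framing here is our own sketch)."""
    amp_d0 = (1 + cmath.exp(1j * phase)) / 2  # amplitude at detector 0
    amp_d1 = (1 - cmath.exp(1j * phase)) / 2  # amplitude at detector 1
    return abs(amp_d0) ** 2, abs(amp_d1) ** 2

# Every mainstream interpretation agrees on these values; they disagree
# only about what the values are values *of*.
p0, p1 = output_probs(math.pi / 3)
assert abs(p0 + p1 - 1) < 1e-12  # the two outcomes are exhaustive
```

Any experiment that could favour one interpretation would have to change these numbers, and by construction none does; that is the structural barrier, not a gap in experimental ingenuity.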

Why "Interpretations" of Quantum Mechanics Never Converge: 3 Reparcelling the Unconstrained Object

The core mechanism behind persistent interpretive disagreement lies in the structural underdetermination of quantum mechanics. The formalism is unconstrained relative to phenomena in multiple conceptual dimensions, allowing distinct interpretations to partition, organise, and narrativise the same mathematical object differently.

  • Copenhagen: emphasises classical measurement context and probabilistic outcomes.

  • Many-Worlds: partitions possibilities into branching universes.

  • de Broglie-Bohm: introduces hidden variables and deterministic trajectories.

  • QBism: reframes the formalism as an agent-centred epistemic tool.

Each interpretation is internally coherent, reproduces the predictions of quantum mechanics, and selectively emphasises certain aspects of the formalism while marginalising others. The result is multiple, equally valid frameworks, each representing a different reparcellation of the same unconstrained mathematical object.

Because these partitions do not conflict empirically, disagreement is stable. The interpretations are not competing descriptions of different phenomena, but alternative organisational structures applied to the same formalism. Recognising this reframes the debate: multiplicity is a predictable outcome of structural underdetermination, not a conceptual or empirical failure.

Why "Interpretations" of Quantum Mechanics Never Converge: 2 Structural Proliferation

The proliferation of quantum interpretations is not a random accident; it is a structural phenomenon. Each interpretation reorganises the same formalism differently, privileging certain conceptual distinctions, philosophical commitments, or narrative structures.

Because the underlying mathematics is underdetermined relative to phenomena, multiple internally coherent interpretations can coexist. Disagreement is stable rather than progressive: no single interpretation can definitively replace the others based solely on empirical evidence.

This structural proliferation is reinforced by social and institutional dynamics. Communities coalesce around particular interpretive frameworks, developing specialised language, pedagogical practices, and research emphases. Internal coherence, shared understanding, and rhetorical reinforcement maintain authority even in the absence of decisive empirical adjudication.

The result is a landscape in which multiplicity is the norm. New interpretations emerge, old interpretations persist, and the debate remains ongoing — not because of conceptual failure, but because the formalism permits multiple, equally valid partitions. Recognising this stability is essential for diagnosing the phenomenon and understanding why resolution is structurally improbable.

Why "Interpretations" of Quantum Mechanics Never Converge: 1 The Landscape of Quantum Interpretations

Quantum mechanics is notorious for the diversity of interpretations that accompany its formalism. These interpretations — Copenhagen, Many-Worlds, de Broglie-Bohm, Objective Collapse, QBism, and others — coexist despite sharing the same predictive and mathematical structure.

The landscape is remarkable not for the novelty of individual interpretations, but for the sheer persistence and proliferation of alternative accounts. Over nearly a century, interpretations have multiplied rather than converged. Each claims conceptual or philosophical superiority, yet none can definitively supplant the others within the existing empirical framework.

This multiplicity is not accidental or merely rhetorical. It reflects a structural feature of quantum mechanics: the formalism is underdetermined relative to phenomena, allowing distinct partitions and narrative constructions. The interpretations provide internally coherent ways to organise the same underlying mathematics, each privileging certain conceptual emphases.

Understanding the landscape is the first step toward diagnosing the phenomenon. The proliferation itself is the signal: the focus is not which interpretation is “correct,” but why structural dynamics make convergence improbable. Recognising the multiplicity as a stable structural pattern allows us to analyse interpretations relationally — as reparcellations of a shared formal object — without invoking any new physical postulate or solution.

From Model to World: 6 Structural Rescue or Illusion

Even when the relational cut has vanished, models can appear coherent, effective, and authoritative. This structural rescue is not a miracle of epistemic insight but a product of internal consistency, institutional alignment, and social reinforcement.

Models maintain functionality internally: simulations reproduce themselves, predictions match expectations within the model framework, and communities converge on interpretations. Authority is sustained through repetition, consensus, and technical complexity. To an observer within the system, the model appears robust, predictive, and deeply connected to reality.

Yet this coherence is an illusion from a relational perspective. The model’s outputs are not grounded in phenomena beyond the model itself. Internal consistency and structural elegance are mistaken for empirical fidelity. The relational cut remains absent; the model is still a theory of possible instances, not the world itself.

This structural pattern recurs across domains. Physics, climate science, economics, and AI all demonstrate instances where models operate effectively and authoritatively, even while the cut with the real world is missing. The consequence is subtle but profound: authority and credibility are maintained structurally rather than relationally.

Recognising this dynamic is the culmination of the series. It shows that vanishing cuts are not mere errors; they are systematic, diagnosable phenomena. Understanding them equips practitioners to interpret models more carefully, question assumptions, and remain aware of the distinction between possibility and reality, preserving the essential relational cut for meaningful engagement.

From Model to World: 5 Signs of the Vanishing Cut

Recognising when a model is being treated as the world itself is essential for both analysis and intervention. Several heuristics indicate that the relational cut has disappeared:

  1. Over-reliance on simulation outputs: Decisions or interpretations are driven primarily by model results, rather than relational engagement with the underlying system.

  2. Unquestioned assumptions: Model parameters, structural choices, or scenario boundaries are rarely challenged, and their contingent nature is overlooked.

  3. Predictive authority without validation: The model’s forecasts are treated as trustworthy, even in domains where empirical testing is limited or impossible.

  4. Selective attention to congruence: Instances where the model aligns with observed phenomena are amplified, while discrepancies are rationalised or ignored.

  5. Institutional and rhetorical reinforcement: The model’s outputs are cited as evidence of reality, with credibility maintained through community consensus, repetition, and prestige rather than relational accuracy.

These signs are not always immediately obvious. The model may function effectively within its domain, producing useful guidance or internal coherence. Yet the structural risk remains: the model’s authority is derived internally, not relationally, and the boundary between model and world is obscured.

Recognising these signs allows practitioners and analysts to diagnose vanishing cuts and maintain awareness of the model’s status as a theory of possibilities rather than literal reality. The next part will explore structural rescue or illusion, showing how model authority can be internally sustained even when the cut has disappeared.

From Model to World: 4 The Consequences of Vanishing Cuts

When the cut between model and world disappears, the consequences are both subtle and profound. Authority is conferred on the model itself, while relational engagement with phenomena is diminished.

Key consequences include:

  • Misinterpretation of uncertainty as determinacy: Model outputs are often read as precise predictions rather than conditional possibilities. The structural difference between projection and reality is obscured.

  • Policy and decision-making risks: Decisions are based on model authority rather than empirical evidence or relational validation. In climate, economic, and AI contexts, this can lead to misallocation of resources or flawed interventions.

  • Internal legitimacy substitutes for empirical grounding: Confidence in the model grows through internal coherence, replication of simulations, and community reinforcement, even when the model has not been validated against phenomena.

  • Resistance to critique: Challenges that appeal to empirical limitations are often ineffective. The model’s authority is structural rather than relational, insulated by institutional trust, consensus, or perceived technical sophistication.

Recognising these consequences highlights the diagnostic value of relational cuts. The vanishing cut is not a flaw in logic or methodology; it is a structural phenomenon that repeats across domains. Awareness of the cut, and vigilance against its disappearance, is essential for accurate interpretation, responsible decision-making, and maintaining alignment between theory and world.
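The first consequence, uncertainty read as determinacy, can be sketched in a few lines (a toy illustration of ours; the parameter and linear response are hypothetical, not drawn from any actual climate or economic model):

```python
import random
import statistics

random.seed(0)  # reproducible toy run

def model_run(sensitivity):
    """Toy projection whose output depends on an uncertain parameter
    (a hypothetical linear response, for illustration only)."""
    return 1.5 * sensitivity

# Reading one run as "the" prediction treats uncertainty as determinacy...
point_forecast = model_run(1.0)

# ...whereas sampling the uncertain parameter exposes the spread of
# conditional possibilities that the single number conceals.
ensemble = [model_run(random.gauss(1.0, 0.2)) for _ in range(1000)]
spread = statistics.stdev(ensemble)
```

The point forecast and the ensemble come from the same model; only the second presentation keeps the conditionality of the projection in view.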

The next part will identify signs of the vanishing cut, providing heuristics to recognise when models are being treated as the world itself.

From Model to World: 3 Structural Reification Across Domains

Once the relational cut between model and world vanishes, the structural pattern repeats across multiple domains. Models begin to be treated as the phenomena themselves, conferring authority and guiding decisions independently of empirical instantiation.

Physics: String theory, cosmology, and dark matter models illustrate the phenomenon. Theories generate complex mathematical structures that are treated as factual accounts, despite limited empirical access. Authority arises from internal coherence, elegance, and predictive plausibility rather than direct engagement with phenomena.

Climate Science: Earth system and general circulation models are often read as literal projections of the world’s future. Policy and planning decisions sometimes rely on these outputs as if they were the world itself, with uncertainty and conditionality underemphasised.

Economics: Macroeconomic models simulate national economies, yet forecasts are frequently treated as prescriptive. Parameters are tuned, scenarios projected, and policy recommendations derived as if the model were reality rather than a structured theory of possible outcomes.

AI and Cognitive Modelling: Alignment models, neural simulations, and behavioural predictions are sometimes interpreted as literal descriptions of agent reality. The relational cut is overlooked, leading to overconfidence in predictions and structural misalignment between model and environment.

Across domains, structural reification produces a common pattern: authority is conferred internally, the cut with actual phenomena is obscured, and critiques that appeal to empirical limitations often fail to land. Recognising this pattern allows us to diagnose the broader consequences and prefigure the structural hazards we will examine in the next part.

From Model to World: 2 When the Cut Disappears

The first structural mutation occurs when the relational cut between model and world vanishes. In this state, models are interpreted as if they were the phenomena themselves, and outputs are read as literal depictions rather than conditional projections.

This reification is subtle. Models often produce results that resemble observations, and practitioners may rely on them for planning, prediction, or explanation. The ease with which models map onto real-world data encourages the assumption that the model is reality, rather than a structured theory of possible instances.

Once the cut disappears, several consequences emerge:

  • Conflation of data and model: outputs are treated as measurements rather than modelled possibilities.

  • Misattribution of predictive authority: the model is judged by its internal coherence and apparent accuracy, not by its structural fidelity to actual phenomena.

  • Stability of belief without validation: confidence in the model grows independently of its relational grounding.

This mutation is a structural hazard across domains. By ignoring the cut, communities begin to operate as if the model were reality, leading to misinterpretation, overconfidence, and the subtle erosion of empirical accountability. Recognising when the cut vanishes is the first step toward diagnosing and understanding the structural reification that follows.

From Model to World: 1 What a Model Is, and What It Isn’t

Models are widely used across science, economics, climate research, and AI. Yet the first step in understanding their function is to recognise what they are and what they are not.

A model is a theory of possible instances. It is a structured abstraction that represents potential configurations, behaviours, or dynamics of a system. It does not exist as the system itself; it cannot instantiate phenomena independently. The relational cut — the distinction between model and world — is essential: the model is a tool for exploring possibilities, not a literal depiction of reality.

Classical examples make the distinction clear: a Newtonian planetary model represents the possible trajectories of celestial bodies; the ideal gas law models ensembles of particles under specific assumptions. In both cases, the model predicts or describes behaviour, but it does so as a theory of possibilities, not as the phenomena themselves.
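The ideal gas example can be written out as a minimal sketch (our illustration; the function name is hypothetical). The function ranges over a space of possible configurations, and no single evaluation is a measurement of any actual gas:

```python
R = 8.314  # gas constant, J/(mol*K)

def pressure(n_mol, temp_k, volume_m3):
    """Pressure predicted by PV = nRT for one possible configuration.

    The function answers "what would the pressure be if..." for any
    point in its parameter space: a theory of possible instances,
    not a record of actual ones.
    """
    return n_mol * R * temp_k / volume_m3

# One possible instance: 1 mol at 300 K in 25 L, roughly atmospheric.
p = pressure(n_mol=1.0, temp_k=300.0, volume_m3=0.025)
```

The difference between calling `pressure` and reading a gauge is exactly the relational cut between model and world.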

Confusion arises when this cut is ignored or vanishes. When a model is treated as the world itself, outputs are interpreted as literal reality, uncertainties are underappreciated, and the theory’s relational status is obscured. The model ceases to be a diagnostic tool and becomes a surrogate for the world — a structural mutation we will explore across multiple domains.

Establishing this foundational clarity allows us to see the consequences of vanishing cuts, and why recognising the distinction between model and world is critical for accurate interpretation, evaluation, and responsible application of models.

The Aesthetic Turn in Physics: 6 Implications for Theory Evaluation

The aesthetic turn has profound consequences for how theories are evaluated, selected, and sustained. By substituting beauty, elegance, and simplicity for direct empirical engagement, physics has developed a structural pathway for authority that operates independently of instantiation.

Research programs are shaped by aesthetic criteria: theories that are elegant, symmetric, or parsimonious are privileged, while empirically adequate but “ugly” alternatives are marginalized. Predictive and explanatory virtues may be supplemented or overshadowed by aesthetic approval. As a result, credibility, resources, and institutional support flow along lines defined by structural, rather than strictly epistemic, criteria.

Critique within this context often fails to land. Challenges that appeal to classical standards of prediction or empirical engagement are deflected by the rhetoric and institutional weight of aesthetic consensus. Authority is maintained not through anticipatory success but through repeated affirmation, community alignment, and formal or aesthetic coherence.

Recognizing the implications of the aesthetic turn reveals a broader pattern: structural surrogates — whether predictive, explanatory, or aesthetic — allow scientific authority to persist even when classical engagement with phenomena is partial, delayed, or absent. This insight prepares the way for subsequent analysis of other structural substitutions, including modeling practices, symbolic systems, and methodological norms.

The series concludes here, leaving the reader with a clear understanding of how aesthetic criteria function as a surrogate for epistemic engagement and the subtle yet powerful ways they shape theory evaluation and scientific practice.