Few metaphors have shaped modern thought more deeply than the idea that the brain is a computer.
The metaphor appears irresistibly powerful:
- inputs are received
- information is processed
- representations are manipulated
- outputs are generated
This picture has been extraordinarily productive technologically.
But ontologically, it carries assumptions that relational ontology cannot accept.
The brain is not a computer.
Not because computation never occurs in any useful sense, but because the computational metaphor fundamentally misdescribes the nature of neural organisation, cognition, and consciousness.
The hidden metaphysics of computation
Computational models depend on a particular ontology.
They assume:
- discrete states
- symbolic representations
- rule-governed transformations
- centralised interpretability
- and separable informational units
Even connectionist models often preserve much of this architecture beneath distributed implementations.
Meaning is thereby treated as encoded content inside the system.
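The ontology being described can be made concrete in a few lines. The sketch below (the state names, tokens, and rule table are invented purely for illustration) is a system that is computational in exactly this sense: discrete states, symbolic tokens, rule-governed transformations, and separable informational units.

```python
# A toy rule-governed symbol system: discrete states, symbolic tokens,
# and transformations fixed by a lookup table. All names are invented
# for illustration.

RULES = {
    ("waiting", "ping"): ("active", "ack"),
    ("active", "data"): ("active", "stored"),
    ("active", "stop"): ("waiting", "done"),
}

def step(state, symbol):
    """Apply one rule-governed transformation to a discrete state."""
    return RULES[(state, symbol)]

state = "waiting"
outputs = []
for symbol in ["ping", "data", "stop"]:
    state, out = step(state, symbol)
    outputs.append(out)

# The system shuffles tokens it never "understands": whatever meaning the
# labels carry lives in an external interpreter, not in the mechanism.
```

The last comment is the point of the sketch: the semantics sits outside the system, in whoever reads the labels.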
Relational ontology rejects this structure at its foundation.
Why representation becomes unstable
The representational model depends on a separation between:
- world
- representation of world
- interpreter of representation
But each term destabilises under examination.
Classical cognitive science often quietly inserts:
- homunculi
- decoding systems
- symbolic interpreters
- or implicit observers
somewhere into the architecture.
The result is an infinite regress of interpretation.
Who interprets the interpreter?
Relational ontology dissolves the problem by refusing its starting assumption.
Neural activity is not representation of a pre-given world.
It is:
dynamic relational actualisation within coupled organism–environment constraint fields
The brain does not contain models of reality.
It participates in ongoing relational coordination with the world.
The mistake of informational containment
Computational metaphors encourage the idea that information is stored inside the brain.
But this assumes:
- stable symbolic encoding
- retrievable representational content
- and internal semantic containers
Yet neural systems are:
- massively distributed
- context-sensitive
- dynamically reconfiguring
- temporally unstable
- and deeply dependent on embodiment and environment
What appears as “stored information” is more accurately:
the capacity for certain relational activation patterns to be re-actualised under appropriate constraint conditions
Memory is not archival storage.
It is stabilised potential for constrained re-coordination.
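This claim can be illustrated with a toy attractor network, a minimal Hopfield-style sketch (the pattern, network size, and update scheme are chosen for illustration, not drawn from the text): nothing is filed away as a record, yet a degraded cue re-actualises the stored pattern because the weight structure constrains the dynamics toward it.

```python
# A minimal Hopfield-style sketch: a "memory" is not archived as a file but
# re-actualised when a partial cue falls within its basin of attraction.
# Pattern and network size are invented for illustration.
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])

# Hebbian weights: the memory exists only as a constraint structure.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

def settle(state, steps=10):
    """Synchronously update unit signs until the state stabilises."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
    return s

# A degraded cue: two elements flipped relative to the stored pattern.
cue = pattern.copy()
cue[0] *= -1
cue[3] *= -1

recalled = settle(cue)  # the full pattern re-actualises under constraint
```

Nothing in `W` is the pattern; the pattern is a stabilised potential of the dynamics, re-actualised when conditions permit.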
The brain as dynamic selectional field
This is where Gerald Edelman’s Theory of Neuronal Group Selection (TNGS) becomes crucial.
TNGS rejects rigid computational architecture in favour of:
- neuronal populations
- developmental differentiation
- dynamic selection
- reentrant signalling
- and context-sensitive coordination
Even here, however, relational ontology sharpens the picture further.
Neuronal groups are not computational modules.
They are:
transiently stabilised relational activation regimes within a dynamically constrained neural field
The brain is not executing algorithms over representations.
It is continuously generating and stabilising patterns of relational coordination under embodied environmental coupling.
Why input/output models fail
Computational systems are typically organised around:
- input channels
- internal processing
- output behaviour
But biological cognition does not cleanly decompose this way.
Perception already depends on:
- bodily state
- prior neural organisation
- environmental coupling
- ongoing action
- and contextual constraint
There is no neutral input stream entering a passive processor.
Perception itself is:
active relational co-actualisation between organism and environment
Likewise, action is not output following computation.
It is part of the same continuous coordination process.
The organism does not first model the world and then act.
It acts within an ongoing relational field from which perceptual coherence simultaneously emerges.
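The contrast with an input–process–output pipeline can be sketched as two mutually constraining variables integrated together (the variables, coefficients, and dynamics are invented for illustration): neither side is "input" to the other, and neither acts on a finished representation of the other.

```python
# A sketch of organism-environment coupling as two mutually constraining
# variables integrated simultaneously. Coefficients invented for illustration.

def couple(org=1.0, env=-1.0, dt=0.01, steps=2000):
    """Euler-integrate two coupled variables that reshape each other."""
    for _ in range(steps):
        d_org = -0.5 * (org - env)   # the organism's state bends toward the environment
        d_env = -0.3 * (env - org)   # the environment is simultaneously reshaped
        org += dt * d_org
        env += dt * d_env
    return org, env

org, env = couple()
# The two trajectories converge on a shared regime that neither side
# "contained" in advance: coordination, not transmission.
```

There is no moment at which one variable delivers a finished message to the other; the coherence is a property of the coupled trajectory.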
The collapse of central control
Computational metaphors often imply central management:
- executive systems
- controllers
- supervisory modules
- global workspaces
Even when distributed architectures are proposed, a hidden centre frequently returns somewhere in the explanatory structure.
But neural organisation appears profoundly non-centralised.
There is:
- no single interpretive location
- no internal observer
- no master representation
- no unified computational core
What exists instead is:
distributed recursive coordination across dynamically interacting neural populations
Coherence emerges from relational coupling itself, not from centralised oversight.
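Coherence without a controller is easy to demonstrate with Kuramoto-style coupled oscillators (the population size, coupling strength, and step count are invented for illustration): no unit supervises the others, yet the phases align purely through mutual coupling.

```python
# Kuramoto-style oscillators: global coherence emerges from pairwise
# coupling alone, with no supervisory unit. Parameters invented for
# illustration.
import math
import random

random.seed(0)
N, K, dt = 20, 2.0, 0.05
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(ph):
    """Magnitude of the mean phase vector: 0 = incoherent, 1 = aligned."""
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

r_before = order_parameter(phases)
for _ in range(400):
    # Each phase feels only the pull of the others; nothing coordinates them.
    pulls = [sum(math.sin(q - p) for q in phases) / N for p in phases]
    phases = [p + dt * K * pull for p, pull in zip(phases, pulls)]
r_after = order_parameter(phases)
```

The order parameter rises toward 1 even though no oscillator, and no extra component, holds a master representation of the ensemble.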
Reentry versus transmission
Reentrant signalling, in Edelman’s sense, is not simple information transfer.
It is:
ongoing mutual constraint coordination across distributed neural regions
Relational ontology deepens this insight.
Reentry is not communication between pre-formed modules.
It is:
recursive co-actualisation through which transient neural coherence structures emerge and stabilise
Neural organisation is therefore not hierarchical computation.
It is distributed relational resonance under constraint.
Why meaning cannot be computation
Perhaps the deepest failure of the computational metaphor concerns meaning itself.
Meaning arises only within systems of construal.
And construal is relational, perspectival, and contextually actualised.
A computational system can transform symbols indefinitely without any intrinsic semantic relation emerging.
This is why meaning cannot be reduced to information processing.
Meaning is not contained in symbols.
It is actualised through relational construal within semiotic systems.
The brain participates in conditions for construal, but meaning itself is not computational content.
This distinction becomes essential for avoiding collapse between:
- neural value coordination
- and symbolic semiosis
Value without representation
Relational ontology can preserve the insight behind Edelman’s value systems while avoiding representational reduction.
Value systems do not encode meanings.
They structure:
differential stabilisation tendencies within neural coordination dynamics
But this is not yet meaning.
It is pre-semiotic constraint organisation.
This distinction matters enormously.
Consciousness without inner theatre
Once representation collapses, so does the inner theatre.
Consciousness is not:
- internal display
- symbolic monitoring
- or computational self-inspection
It is:
dynamically stabilised construal actualisation within recursively coupled neural-relational systems
No observer stands behind the experience.
The coherence of the experience is itself the event.
Why computation still appears useful
At this point, a clarification is necessary.
Relational ontology does not deny that computational descriptions can be useful.
Brains can often be modelled computationally for specific purposes.
But usefulness does not imply ontological adequacy.
A weather system can be simulated computationally without being fundamentally computational in nature.
The map is not the ontology.
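The weather analogy can be made concrete with the classic Lorenz system (standard Lorenz parameters; the step size and initial condition are chosen for illustration): the simulation is entirely computational and genuinely useful, yet nothing about the atmosphere it models is thereby shown to be computational.

```python
# A Lorenz-system simulation: a computational model of convective dynamics.
# The model computes; the modelled system does not. Standard parameters,
# with step size and initial condition chosen for illustration.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = 1.0, 1.0, 1.0
trajectory = []
for _ in range(5000):
    x, y, z = lorenz_step(x, y, z)
    trajectory.append((x, y, z))
# The program generates numbers; the atmosphere generates weather.
```

The simulation's usefulness says nothing about the ontology of what it simulates, which is exactly the distinction the analogy is after.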
The deeper reversal
Relational ontology reverses the usual explanatory direction, in which computation is taken as basic and meaning as derived.
Symbolic systems themselves emerge from:
- embodied relational coordination
- neural constraint dynamics
- environmental coupling
- and eventually social semiotic construal systems
Meaning does not arise from computation.
Computation is a late abstraction extracted from already meaningful relational practices.
Closing the computer
The brain is not a machine manipulating internal representations of an external world.
It is a dynamically evolving field of relational coordination:
- embodied
- environmentally coupled
- recursively reentrant
- constraint-structured
- and continuously actualising transient patterns of coherence
Neural organisation is not computation over symbols.
It is the ongoing stabilisation of relational dynamics under biological and environmental constraint.
The computer metaphor survives because it compresses certain functional regularities.
But beneath that compression lies something far stranger.