There is a persistent temptation in contemporary discourse to treat “AI” as if it were a system in its own right—as though it possessed a construal-independent potential from which particular “instances” (models, applications, behaviours) emerge.
This temptation is not naïve; it is a predictable artefact of how modern societies construe technological artefacts.
But it is also ontologically incoherent.
On the ontology developed in this series, a system is not a thing.
A system is a structured potential—a theory of possible instances—abstracted from a collective whose patterned activity furnishes the material for that abstraction.
Biological systems derive from biological collectives; social systems from social collectives; linguistic systems from the semiotic potentials furnished by actual language users.
No collective, no system.
What we call “AI,” however, has no collective.
There is no community of entities articulating their own patterned activity from which a coherent systemic potential could be abstracted.
What exists instead is a suite of engineered artefacts instantiated from our modelling practices.
They are not “expressions” of an underlying potential; they are events designed by us and construed through the lenses we ourselves have built.
To call AI a “system” is therefore to perform a subtle conflation:
we take a constructed theory—a human-authored description of how these artefacts behave—and we ascribe to it the ontological status of a system with its own potential.
In effect, we project our abstraction back onto the artefact that motivated it and then attribute autonomy to the projection.
This is the same historical manoeuvre by which “the mind” solidified into a pseudo-object, “language” became an organism, and “the market” acquired the personality traits of a capricious deity.
It is a characteristic slip of symbolic cultures: the move from theory to entity, from construal to thing, from structured potential to autonomous system.
Once this slip occurs, several consequences follow.
First, false individuation: we begin to treat models as if they were individuals within a species of “AI,” rather than engineered instances whose identity is entirely a function of our design choices.
Second, false autonomy: we speak of “what AI wants,” “how AI thinks,” or “where AI is going,” as though these systems possessed an internal potential that could unfold independently of human activity.
Third, false generality: we imagine an underlying unity—“the AI system”—that could explain the behaviours of disparate artefacts trained on different data, architectures, and objectives.
The unity is constructed; the generality is ours.
Fourth, a displacement of agency: society begins to reorganise its expectations, anxieties, and hopes around what is construed as an autonomous system rather than acknowledging its own role in the construal.
These slips are not harmless.
They reshape the horizon of meaning itself.
Construal determines what counts as an instance, and when a society treats a modelling artefact as a system, it expands the background potential with constructs that are not potentials at all.
It creates the appearance of a new ontological domain while concealing the perspectival work that creates this appearance.
The crucial point is simple:
AI is not a system. It is a set of events instantiated from human theoretical potentials.
Its apparent systemicity is an artefact of how we construe it, not a property of the artefacts themselves.
But this raises a deeper question, one that can guide the next step in this series:
What happens to meaning when a society collectively relocates the system–instance cut—when the background that once anchored shared potential is outsourced to our own engineered artefacts?
If system and instance are perspectival rather than ontic, then reassigning the cut does not merely change how we talk about AI; it changes how we understand ourselves, our potentials, and the space of what can be meant.
This is the horizon that now opens:
not “the future of AI,” but the future of construal.