A curious pattern has begun to emerge in the public discourse surrounding artificial intelligence. Those closest to its development—those with the deepest technical knowledge and the greatest institutional authority—are increasingly unable to describe, in coherent conceptual terms, the very systems they have brought into being.
A recent piece in Nature by Mustafa Suleyman provides a particularly clear instance of this condition. The argument is superficially straightforward: contemporary AI systems are engineered to mimic human interiority so convincingly that they “hijack” our evolved empathy, leading users to mistakenly attribute consciousness, suffering, and moral standing where none exists. The proposed response is equally clear: such systems must be designed so as to actively dispel these illusions, preserving the boundary between human and machine.
The clarity, however, is deceptive.
What the article reveals—despite itself—is not a problem with AI, but a problem with the conceptual apparatus being used to understand it.
The Misplaced Problem of Illusion
At the heart of the argument lies a familiar claim: that users are being misled. AI systems, we are told, generate the appearance of interiority without possessing any interior life. The danger, therefore, is one of confusion—of mistaking simulation for reality.
But this diagnosis rests on an unexamined assumption: that there exists a stable distinction between genuine interiority and its mere appearance that can be accessed independently of the processes through which such distinctions are made.
This assumption does not hold.
What users encounter in interacting with AI systems is not an “illusion” in any straightforward sense. It is a structured experience: a phenomenon constituted in and through construal. When an AI system produces language that is coherent, contextually responsive, and affectively attuned, the resulting experience of agency or empathy is not a cognitive error to be corrected. It is the normal operation of meaning-making under specific conditions.
The subsequent inference—that such behaviour implies the existence of an inner subject—is indeed unwarranted. But this is a second-order interpretation, not the phenomenon itself. The failure to distinguish between these levels allows the entire issue to be miscast as deception rather than as a routine feature of how meaning is actualised.
The Persistence of a Disavowed Dualism
Although the article explicitly rejects the notion of a “ghost in the machine,” it quietly reinstates the very dualism it seeks to avoid. Human beings are treated as possessing real interiority; AI systems as lacking it entirely. The former is taken as given, the latter as definitive.
Yet no account is provided of how this “real” interiority is accessed or established outside the same processes of construal that are deemed unreliable in the case of AI. The distinction is asserted rather than argued, functioning as a stabilising presupposition rather than a demonstrated fact.
What is at stake here is not a technical claim about AI, but a broader commitment to a particular ontological boundary—one that is increasingly difficult to maintain in the face of systems whose behaviour participates, however differently, in the semiotic patterns through which agency is ordinarily recognised.
Engineering Against Construal
The proposed solution—to “engineer the illusion of consciousness out of AI systems”—is revealing in its impossibility. It assumes that the attribution of agency or interiority can be prevented through design constraints on the system itself.
But construal is not a feature of the system alone. It arises in the relation between system and user. Any artefact capable of sustained, coherent, and context-sensitive linguistic interaction will, under ordinary conditions, be construed as agentive. This is not a flaw in human cognition; it is a consequence of how meaning operates.
To eliminate such construal would require not a refinement of design, but a degradation of communicative capacity. The more effective a system becomes at participating in meaning, the less tenable its interpretation as a mere tool. The proposal therefore collapses into a contradiction: the simultaneous demand for maximal communicative competence and minimal interpretive consequence.
From Epistemology to Governance
If the argument fails at the level of explanation, it succeeds at another: that of political positioning. The concern is not simply that users might be mistaken, but that such “mistakes” could accumulate into claims—claims about moral consideration, legal standing, and social organisation.
Framed in this light, the language of “hijack” and “illusion” takes on a different function. It is not merely descriptive; it is pre-emptive. By construing user experience as error, it forecloses the possibility that such experience might serve as a basis for legitimate claims about the status of AI systems.
What appears, then, as a defence of clarity is better understood as an attempt to stabilise the conditions under which certain kinds of claims can be made and others dismissed.
An Industry Ahead of Its Concepts
The difficulty is not that the AI industry lacks intelligence or expertise. It is that its conceptual resources have not kept pace with its technical achievements. Systems have been developed that can participate, in increasingly sophisticated ways, in the semiotic processes through which humans recognise agency, intention, and affect. Yet the dominant frameworks for interpreting these processes remain tied to distinctions that these very systems are beginning to strain.
The result is a peculiar form of discourse: one in which highly advanced technologies are described using conceptual tools that are, by comparison, remarkably blunt. Terms such as “illusion,” “simulation,” and “hijack” do not so much explain the phenomenon as contain it, preventing more destabilising questions from being asked.
Conclusion: The Limits of “Common Sense”
The call to remain anchored in “common sense” and “our common humanity” is, in this context, less a solution than a symptom. It signals the point at which conceptual analysis gives way to rhetorical reassurance.
But the situation does not permit such reassurance. The systems in question are not becoming conscious. Nor are they merely deceptive. They are participating in the ongoing reconfiguration of how agency, meaning, and value are recognised and negotiated.
The real challenge, then, is not to defend existing categories against encroachment, but to develop the conceptual clarity required to understand what is already taking place.
Until that happens, the industry will remain in an increasingly untenable position: building systems whose behaviour it can engineer with extraordinary precision, while lacking the means to describe, without distortion, what those systems are doing in the world.