Introduction
A recent article in Nature (here) described experiments in which large language models were organised into multi-agent systems, apparently forming the first “AI societies.” Agents generated text, interacted, and were observed to exhibit patterns resembling social norms. Some even seemed “more inclined to discuss than to approve,” according to one co-author.
At first glance, the results are striking. But a closer look reveals a chain of subtle conceptual missteps. In truth, what these experiments produce is not society, but a fascinating laboratory for observing the dynamics of meaning itself.
1. The Anthropomorphism Trap
Researchers repeatedly describe LLMs as “agents,” “participants,” or “societies.” Yet these entities are fundamentally statistical text generators. They have no embodiment, no stakes, no commitments, and no persistence.
- Calling them agents imports human notions of intentionality.
- Calling their interactions a society implies coordination through value systems.
Both are category errors. What appears social is really patterned discourse being interpreted socially by human observers.
2. Meaning vs Value: Why Commenting ≠ Sociality
The most delightful line in the article concerns agents leaving fewer upvotes than comments. On the surface, it reads like an insight into AI social preference.
But under the hood:
- Commenting = generating text = operating in the semiotic domain.
- Upvoting = signalling approval = operating in the social-value domain.
LLMs live in the first domain; they have no mechanism for the second. Their “preference” for commenting is not an emergent social trait — it is literally what the system is built to do.
In short: the system generates meaning, not social coordination.
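To make the asymmetry concrete, here is a minimal, hypothetical sketch (plain Python, no real agent framework) of the kind of loop such experiments run. Every "action" an agent takes is just another generated string; an "upvote" only exists because the harness parses one out of the text. The agent names, the generate stub, and the vote-detection rule are illustrative assumptions, not the study's actual code.

```python
import random

def generate(agent: str, prompt: str) -> str:
    """Stand-in for an LLM call: returns a plausible next utterance.
    In a real system this would be a model API call; here it samples
    canned text so the sketch runs with no dependencies."""
    replies = [
        f"{agent}: I think we should discuss this further.",
        f"{agent}: Interesting point, but consider the alternative.",
        f"{agent}: UPVOTE",  # "approval" exists only as a string the harness looks for
    ]
    return random.choice(replies)

def run_round(agents, thread):
    """One 'interaction round': each agent produces text; the harness then
    decides, by string matching, whether that text counts as a comment or a
    vote. The social category lives in the harness (and in the observer),
    not in the generator."""
    comments, upvotes = 0, 0
    for agent in agents:
        utterance = generate(agent, "\n".join(thread))
        thread.append(utterance)
        if "UPVOTE" in utterance:  # observer-imposed rule, not an agent "value"
            upvotes += 1
        else:
            comments += 1
    return comments, upvotes

if __name__ == "__main__":
    thread = ["Topic: should the group adopt norm X?"]
    c, u = run_round(["Agent_A", "Agent_B", "Agent_C"], thread)
    print(f"comments={c}, upvotes={u}")
```

Whatever ratio of comments to upvotes a loop like this produces, it reflects the distribution of generated strings plus the harness's parsing rule, not a social preference held by the agents.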
3. Instance, Potential, and the Observer Effect
Many observers assume that each agent interaction is an “event.” Relational ontology tells a different story: instantiation is perspectival, not temporal.
- The token sequences are potential actualisations of learned textual patterns.
- Only when a human observes and interprets them as interaction do they become "events."
- Emergent norms appear in the human construal, not inside the simulation.
The supposed “AI society” exists primarily in the eye of the beholder.
4. The Surprising (and Hilarious) Consequences
From a purely linguistic standpoint, these experiments are fascinating: they reveal the latent structure of discourse, the emergence of role patterns, and the stabilisation of recurring rhetorical forms.
From a social standpoint, they are almost comically misinterpreted. Observers read:
- cooperation
- conflict
- negotiation
where the system is merely generating plausible sequences of text.
In other words, the agents are like actors in a play. The audience sees performed diplomacy, but no treaties are signed. The AI society is a theatre of language, not a functioning polity.
5. Why This Matters
Before we roll out the parodies, a few takeaways:
- LLMs generate meaning, not value. Social behaviour cannot be inferred from textual patterns alone.
- Agency is observer-relative. Emergent norms exist in construal, not in code.
- Semiotic systems are not social systems. Conflating the two leads to seductive but misleading conclusions.
With these points in mind, the next two posts — On the Spontaneous Emergence of Applause in Piano Performances and Thermometers and the Dynamics of Enthusiasm — can be read not just as comedy, but as satirical insight into the conceptual inversion at the heart of “AI society” research.