Saturday, 31 January 2026

What Becomes Possible 1: Intelligence Without Generality

What becomes possible when we stop asking for minds that work everywhere.


1. The demand for generality

Few phrases have done more quiet damage to our thinking about intelligence than general intelligence. The phrase carries an apparently modest ambition—competence across domains—but smuggles in a far stronger claim: that intelligence must be something that transcends situations rather than something that emerges within them.

The demand is so familiar that it rarely announces itself. We hear it whenever intelligence is assessed by its ability to transfer, abstract, or operate “out of context.” We see it in anxieties about artificial intelligence (“it only works in narrow domains”), in educational theory (“will this skill generalise?”), and in cognitive science (“what is the underlying capacity?”).

The shared assumption is simple and rarely examined:

If intelligence is real, it must be portable.

This post argues that the assumption is mistaken—not because intelligence is weak, but because it has been framed in the wrong ontological register.


2. Generality as a representational hangover

The insistence on generality belongs to a representational picture of mind.

On that picture:

  • Intelligence is a thing or capacity inside a system

  • Situations are external inputs

  • Successful action is the application of internal representations to external cases

From here, generality looks like a natural benchmark. If intelligence consists in representations of the world, then better intelligence should involve representations that are more abstract, more context-free, and more widely applicable.

But once the representational picture loosens its grip, the benchmark collapses with it.

If intelligence is not the manipulation of representations but the stabilisation of successful action within a construal, then there is no privileged standpoint from which “generality” could even be measured.

What would it mean to be intelligent everywhere when there is no everywhere—only differently structured situations?


3. Intelligence as situated adequacy

On a relational ontology, intelligence is not a substance, capacity, or faculty. It is a pattern of adequacy that emerges when a system and a situation co‑individuate successfully.

Three consequences follow immediately.

First, intelligence is inseparable from the situation that elicits it. There is no residue left over once the situation is removed.

Second, competence does not transfer; it reforms. What looks like transfer is the successful re‑actualisation of a pattern under a new construal, not the deployment of a context‑free asset.

Third, failure to generalise is not a defect. It is the expected outcome when the conditions that supported adequacy are no longer present.

From this perspective, the question “Is this intelligence general?” becomes not merely difficult, but ill‑posed. It asks for a property that no longer plays any explanatory role.


4. Artificial intelligence and the myth of the missing core

Large language models have made the problem visible in an unusually sharp form.

These systems demonstrate extraordinary competence across a vast range of linguistic situations while simultaneously failing in ways that appear trivial or absurd. The standard diagnosis is familiar: they lack true understanding, world models, grounding, or general intelligence.

But notice what this diagnosis presupposes.

It presupposes that there ought to be a single, unifying capacity underwriting all performances—a core that explains success and whose absence explains failure.

From a relational perspective, no such core is missing. None was ever required.

What LLMs exhibit is not partial intelligence striving toward generality, but highly localised adequacy at scale. Their competence is real, not simulated; situated, not universal; robust within particular construal ecologies and brittle outside them.

This is not a shortcoming to be repaired by adding more layers of abstraction. It is a feature of intelligence understood correctly.


5. Why “narrow” intelligence is not narrow

The term narrow intelligence suggests confinement. But what it actually names is specificity—the alignment between a system’s capacities and the structure of the situations it encounters.

Human expertise displays the same profile.

A physicist who cannot balance a company budget is not thereby less intelligent. A chess grandmaster who fails at small talk has not revealed a cognitive deficit. These are not exceptions to intelligence; they are its normal expression.

The myth lies in thinking that there must be something behind these competences that would unify them if only it were sufficiently developed.

What unifies them instead is the observer’s abstraction, not an internal mechanism.


6. What becomes possible when we let generality go

Once the demand for generality is released, several things become newly visible.

We can evaluate systems—human or artificial—by the stability and adaptability of their situated performance, rather than by imaginary benchmarks of universality.

We can design collaborations that respect complementary intelligences instead of chasing mythical replacements.

We can stop treating brittleness as failure and begin treating it as diagnostic information about the limits of a construal.

Most importantly, we can ask better questions.

Not: Is this intelligence general?

But:

  • Under what construals does this system become adequate?

  • What breaks when those construals shift?

  • How can new alignments be engineered rather than demanded?

These are not weaker questions. They are sharper ones.


7. Intelligence, properly deflated

To abandon generality is not to diminish intelligence. It is to locate it.

Intelligence does not hover above situations, waiting to be applied. It takes form with them. It does not generalise; it re‑aligns. It does not fail when it breaks; it tells us something precise about the world it was built to meet.

Seen this way, the recent successes of artificial intelligence are not the first steps toward minds without limits.

They are demonstrations—clearer than any philosophical argument—that intelligence has never needed generality to begin with.


Next: Coordination Without Control
