Sunday, 12 April 2026

Artificial Legibility — 3: Where Does the System End?

A response is produced.

It is attributed to “the model.”

This attribution appears straightforward.

There is a system, and it generates output.


But this simplicity does not survive closer inspection.

Because once generation is understood as constraint-based continuation, the question of where the system begins and ends becomes unstable.
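
To fix intuitions before going further, here is a deliberately toy sketch of what "constraint-based continuation" means: each new token is drawn from a distribution that the sequence so far narrows down. The bigram table and every name in it are invented for illustration; no real model operates at this scale.

    import random

    # Toy "learned" statistics: given the previous token, which
    # continuations are admissible and how strongly each is weighted.
    # (Invented numbers; real models condition on far more context.)
    BIGRAMS = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.8, "sat": 0.2},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }

    def continue_sequence(prompt_tokens, steps, rng=random.Random(0)):
        # Extend the prompt token by token; each step is constrained
        # by the sequence so far, never chosen freely.
        tokens = list(prompt_tokens)
        for _ in range(steps):
            options = BIGRAMS.get(tokens[-1])
            if not options:  # no admissible continuation: generation halts
                break
            words, weights = zip(*options.items())
            tokens.append(rng.choices(words, weights=weights)[0])
        return tokens

    print(continue_sequence(["the"], 3))  # e.g. ['the', 'cat', 'sat', 'down']

Nothing in this sketch generates on its own: the table is inert until a prompt arrives, and the prompt does nothing without the table. That mutual dependence is what makes the boundary question unstable.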


What is usually referred to as “the model” is only one component in a larger configuration.

That larger configuration includes:

  • a trained parameter space

  • a history of data that shaped that space

  • an input sequence that constrains the current continuation

  • an interface that mediates interaction

  • a user who provides and updates constraints

None of these are external in a simple sense.

All of them participate in shaping what can be generated.


This makes the notion of a bounded system difficult to maintain.

Because no single element fully determines the output.


The model parameters encode statistical regularities from training data.

But those regularities are not self-activating.

They require input to become operative.


The input does not function independently either.

It constrains the continuation space only in relation to the model’s learned structure.
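
A hedged illustration of this mutual dependence: the same input yields a different continuation space under a different set of "learned" weights, and neither the weights nor the input produce anything alone. MODEL_A and MODEL_B are invented tables, not real checkpoints.

    # Two invented weight tables standing in for two trained models.
    MODEL_A = {"the": {"cat": 0.9, "dog": 0.1}}
    MODEL_B = {"the": {"dog": 0.9, "cat": 0.1}}

    def continuation_space(params, last_token):
        # Parameters alone return nothing; an input alone means nothing.
        # Only the pair yields a distribution over continuations.
        return params.get(last_token, {})

    print(continuation_space(MODEL_A, "the"))  # {'cat': 0.9, 'dog': 0.1}
    print(continuation_space(MODEL_B, "the"))  # {'dog': 0.9, 'cat': 0.1}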


The interface further shapes the form of interaction:

  • how prompts are entered

  • how outputs are segmented

  • how continuation is initiated or terminated

These are not neutral.

They affect how constraints are introduced and sustained.
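
One way to see that these interface choices carry constraint is a sketch of a generic chat-style frontend. The field names here (max_output_tokens, stop_sequences, prompt_template) are hypothetical and do not belong to any particular product's API.

    from dataclasses import dataclass, field

    @dataclass
    class InterfaceConstraints:
        # Where continuation is terminated.
        max_output_tokens: int = 512
        # How outputs are segmented.
        stop_sequences: list = field(default_factory=lambda: ["\n\n"])
        # How prompts are entered: the same user text reaches the
        # model differently depending on the wrapping template.
        prompt_template: str = "User: {prompt}\nAssistant:"

        def wrap(self, prompt: str) -> str:
            return self.prompt_template.format(prompt=prompt)

    print(InterfaceConstraints().wrap("Where does the system end?"))

Changing any of these fields changes what the model receives and where its output stops, without touching the parameters at all.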


And the user is not external to this process.

The user supplies inputs, revises them, interprets outputs, and feeds those interpretations back into subsequent prompts.
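
The loop can be sketched directly. Here generate is a placeholder standing in for any model call; the point is only that each turn's input carries the history of prior outputs and the user's revisions, so the constraint history accumulates.

    def generate(prompt: str) -> str:
        # Placeholder standing in for a real model call.
        return f"[continuation of: {prompt.splitlines()[-1]!r}]"

    prompt = "Explain constraint propagation."
    for turn in range(3):
        output = generate(prompt)
        # The user's reading of the output becomes part of the next
        # input: interpretation is fed back as a new constraint.
        prompt = prompt + "\n" + output + "\nRefine the previous answer."
    print(prompt)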


What appears, then, as a single system generating output is in fact a distributed configuration of constraint contributions.


This distribution has a specific structure.

It is not a collection of independent parts.

It is a coupled system of constraint propagation.


Each component contributes to the shaping of continuation (condensed into a sketch after this list):

  • training data defines the statistical landscape

  • model architecture defines how that landscape is navigated

  • prompts define local constraint conditions

  • interface defines interaction boundaries

  • user behaviour defines iterative adjustment of constraints
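
Read together, the list above can be condensed into a single hedged sketch in which the "system" is nothing over and above the coordination of these five regimes. Every class name and callable here is illustrative, not a real implementation.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ConstraintRegimes:
        landscape: dict        # training data: the statistical landscape
        navigate: Callable     # architecture: how the landscape is traversed
        prompt: str            # local constraint conditions
        interface: Callable    # interaction boundaries (templating, stops)
        revise: Callable       # user behaviour: iterative adjustment

        def step(self) -> str:
            shaped = self.interface(self.prompt)
            output = self.navigate(self.landscape, shaped)
            self.prompt = self.revise(self.prompt, output)  # constraints update
            return output

    regimes = ConstraintRegimes(
        landscape={"hello": "world"},
        navigate=lambda table, p: table.get(p.strip().lower(), "?"),
        prompt="Hello",
        interface=lambda p: p,
        revise=lambda p, o: p + " " + o,
    )
    print(regimes.step())  # 'world': no single field produced this alone

Removing any one field leaves step unable to run, which is one way of seeing the claim that follows.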


No single component contains the system.

The system is the ongoing coordination of these constraint regimes.


This has a direct consequence for how system boundaries are understood.

Boundaries are not given in advance.

They are inferred from where constraint coherence appears to stabilise.


If the output is attributed solely to “the model,” the boundary is drawn narrowly.

If training data is included, the boundary expands.

If user interaction is included, the boundary expands further.


None of these boundaries are incorrect.

But none are primary.


Each is a way of stabilising a distributed process into a manageable unit.


This returns us to a more general point.

Systemhood is not a property of an object.

It is a way of treating a region of coordinated constraint propagation as if it were bounded.


In artificial systems, this coordination spans multiple layers that do not share a single location.


The model does not contain its training data in any direct sense.

The user does not control the model’s internal structure.

The interface does not determine the statistical landscape.


And yet, all of these contribute to what is produced.


This makes it difficult to say where the system ends.

Not because the system is infinite.

But because its coherence does not align with a single boundary.


Instead, coherence appears where constraint contributions align sufficiently to produce stable continuation.

Where this alignment weakens, coherence breaks down.


The “system,” then, is not a container.

It is a region of sustained alignment across distributed constraints.


This has implications for how outputs are attributed.

When a response is treated as the product of “the model,” a boundary is being drawn.

That boundary excludes:

  • the role of training data

  • the role of prompts

  • the role of interaction dynamics


This exclusion simplifies attribution.

But it obscures how coherence is actually produced.


A more precise account would treat the output as arising from a distributed system in which no single component is sufficient.


This does not mean that all components contribute equally.

It means that contribution is relational, not contained.


At this point, the earlier distinction between generation and interpretation reappears in a new form.

Generation is distributed across multiple constraint regimes.

Interpretation stabilises that distribution into a bounded system for the purpose of attribution.


The system, as it is usually named, is the result of this stabilisation.


Which leads to a final adjustment.

To ask “where does the system end?” is already to assume that there is a place where it does.


A more accurate question is:

under what conditions does distributed constraint propagation stabilise sufficiently to be treated as a system at all?


In artificial systems, this stabilisation is continuous but never absolute.

Boundaries are drawn, not found.

And what they enclose is not a thing, but a temporary coherence across interacting constraints.


The system does not end in a single place.

It appears where continuation holds together long enough for it to be named.
