Saturday, 14 March 2026

Ethics in the Age of Relational Machines: 6 — Artificial Agency

The growing role of artificial systems in decision-making, symbolic production, and collective cognition inevitably raises a provocative question:

Do artificial systems possess agency?

The question is often framed in dramatic terms. Popular discussions ask whether machines will eventually “become agents,” as though agency were a property that might suddenly appear once systems become sufficiently sophisticated.

But this framing may already be misleading.

From a relational perspective, agency is not a mysterious substance that resides inside an entity. It is a property of systems capable of organising action within relational environments.

Understanding artificial agency therefore requires examining how action is structured within the systems that now surround us.


1. What Is Agency?

In ordinary language, an agent is something that can act.

Philosophically, the concept is usually tied to several capacities:

  • the ability to initiate actions,

  • the capacity to pursue goals,

  • and the ability to respond to changing circumstances.

Humans clearly possess these capacities.

But human agency itself is not purely individual. It depends on:

  • knowledge,

  • tools,

  • institutions,

  • and symbolic systems.

Even human action is already embedded within relational structures.

Artificial systems complicate this picture further.


2. Artificial Systems and Goal-Oriented Behaviour

Many artificial systems display forms of goal-directed behaviour.

For example, they can:

  • optimise for particular outcomes,

  • adapt to changing inputs,

  • and produce outputs that appear purposive.

But these goals are not self-generated.

They are specified through:

  • training objectives,

  • reward structures,

  • and system architecture.

Artificial systems therefore operate within goal frameworks established by human designers and institutions.

Their behaviour may appear agent-like, but its structure originates elsewhere.
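The point that goals are specified rather than self-generated can be made concrete with a toy sketch (the code and objective below are invented for illustration, not drawn from any real system): the optimisation loop adapts its behaviour step by step, yet the objective it pursues is supplied entirely from outside, by the designer.

```python
# Toy illustration: a system that "pursues a goal" by gradient descent,
# where the goal itself is an external parameter chosen by the designer.

def optimise(objective, x, learning_rate=0.1, steps=100):
    """Minimise `objective` from a starting point x via finite-difference gradient descent."""
    eps = 1e-6
    for _ in range(steps):
        grad = (objective(x + eps) - objective(x - eps)) / (2 * eps)
        x -= learning_rate * grad  # adaptive, purposive-looking behaviour
    return x

# The designer, not the system, decides what counts as success:
designer_goal = lambda x: (x - 3.0) ** 2  # target chosen externally
result = optimise(designer_goal, x=0.0)
print(round(result, 2))  # converges towards the designer's target, 3.0
```

Swap in a different `designer_goal` and the same loop "pursues" an entirely different end, which is precisely the sense in which its goal framework originates elsewhere.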


3. The Architecture of Action

From a relational standpoint, it is helpful to shift the focus away from the internal properties of individual components.

Instead, we can examine the architecture through which action occurs.

Consider a large socio-technical system such as an automated logistics network.

Decisions within such a system emerge from the interaction of:

  • algorithms,

  • human operators,

  • organisational procedures,

  • and physical infrastructures.

Action does not originate from any single element.

It emerges from the relational configuration of the system.

Artificial systems are participants in these configurations.

But they are not the sole source of action.
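As a toy illustration (every name, rule, and threshold here is hypothetical, invented for this sketch rather than taken from any actual logistics system), a single "decision" in such a network can be modelled as the joint product of several interacting elements, none of which acts alone:

```python
# Toy model: a dispatch "decision" emerging from an algorithm, a human
# operator, an organisational procedure, and physical infrastructure.
# All names and rules are hypothetical.

def algorithm_score(order):
    # Algorithmic component: ranks the order by urgency and value.
    return order["urgency"] * 0.7 + order["value"] * 0.3

def procedure_allows(order):
    # Organisational rule: hazardous goods require human sign-off.
    return not order["hazardous"] or order["operator_approved"]

def infrastructure_ready(depot):
    # Physical constraint: a truck must actually be available.
    return depot["trucks_available"] > 0

def dispatch(order, depot):
    # No single component "decides"; the outcome emerges from the
    # relational configuration of all of them together.
    return (algorithm_score(order) > 0.5
            and procedure_allows(order)
            and infrastructure_ready(depot))

order = {"urgency": 0.9, "value": 0.4, "hazardous": True, "operator_approved": True}
depot = {"trucks_available": 2}
print(dispatch(order, depot))  # True only because every element cooperates
```

Remove the operator's approval, empty the depot, or lower the order's urgency, and the "action" disappears, even though no individual component changed its own logic.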


4. Agency as Systemic

This suggests a different way of thinking about agency.

Instead of asking whether machines possess agency in isolation, we might ask:

Which systems organise action?

Within large socio-technical environments, agency may be better understood as systemic.

It emerges from the coordination of multiple elements:

  • human intention,

  • technological architecture,

  • institutional frameworks,

  • and symbolic systems.

Artificial systems contribute to these configurations by shaping how decisions are generated and implemented.

But the resulting agency belongs to the system as a whole.


5. Artificial Systems as Amplifiers

Seen in this light, artificial systems function less as independent agents and more as amplifiers of human and institutional agency.

They extend the reach of decision processes.

They accelerate symbolic processing.

They allow organisations to operate at scales that would otherwise be impossible.

But amplification changes the nature of the system.

When agency is amplified through technological architectures, its consequences can become:

  • faster,

  • more widespread,

  • and more difficult to reverse.

Amplification therefore introduces ethical stakes even when the underlying agency remains relationally distributed.


6. The Temptation of Anthropomorphism

Public discussions often attribute agency to artificial systems because their outputs resemble human behaviour.

Language models generate sentences.

Decision systems produce recommendations.

Autonomous vehicles navigate environments.

These behaviours invite anthropomorphic interpretations.

But resemblance should not be confused with equivalence.

Artificial systems participate in action without necessarily possessing the forms of understanding, intention, or accountability that characterise human agency.

Recognising this distinction is essential for clear ethical analysis.


7. Ethics Without Illusions

The danger in attributing full agency to machines is that responsibility may be displaced.

If a system is treated as an autonomous actor, human participants may distance themselves from the outcomes it produces.

But artificial systems operate within relational architectures designed, deployed, and governed by human institutions.

Ethical responsibility therefore remains anchored within those relational structures.

Understanding the architecture of agency allows us to avoid two opposite mistakes:

  • imagining that machines are mere passive tools, or

  • imagining that they are fully autonomous agents.

The reality lies between these extremes.

Artificial systems are participants within relational systems of action.


Transition

If artificial systems participate in relational systems of action, the final question of this series becomes unavoidable.

What, then, would count as artificial consciousness?

Must a system possess consciousness to participate ethically in social environments?

Or does ethical responsibility arise long before consciousness enters the picture?

In the final post of this series, we will address this question directly.
