The problem with determining agency is that it looks as if we are determining a quality in an actor in the moment. In fact, that is not what is happening. When Wittgenstein examines psychological states he notes that some of them have what he refers to as “Echte Dauer” – genuine duration – and some do not. Hence, it makes sense to ask when pain starts and stops, or to speak of an instant of extreme pain, but if we were to do the same with sorrow the result would be almost comical. “Do you feel sorrow now? When did it start, when will it stop?” – these are questions that make no sense.
Sorrow, Wittgenstein suggests, is something we discern over time in the tapestry of life, in a sequence of events. The attitude towards a soul is probably similar. We decide that something has agency not on the basis of a single moment; rather, the attitude towards a soul grows over time as we adopt and accept a pattern of behaviour as one that exhibits soul.
Could we ever do this for a machine? If we leave aside the rather simplistic answer that if we did, it would not be a machine anymore – a valid if somewhat insipid answer – we seem to end up in a very difficult discussion about the corporeality of soul. Can we say of a disembodied system that it has a soul? Would we ever allow the judgment that a set of algorithms should be viewed as a soul? Or do we think that only something that also feels pain, can feel sorrow, fall in love – all of those things – has agency?
We could imagine a world in which agency is a package deal. We only accept the notion of agency in systems that we also believe feel pain, sorrow, joy or sadness. Such a view – let us call it the bundle hypothesis of agency – in its most expansive form says that only human beings, or what we perceive to be human beings, can have agency. If we ever wanted to build a system that exhibited agency, then, we would have to build a system that was, essentially, human.
But we could also imagine the contrary view: someone who accepts that a system can have agency without being able to feel pain or sorrow or any other human feelings. Such a view could be constructed in different ways, but a minimalistic one would state that we should treat as having agency those systems that are most easily understood as having agency. This is an echo of Dennett’s test for intentionality, where the economy of description of a system determines whether we call it conscious or not. Such a delineation at first feels like a cop-out, since what we are looking for seems to be the answer to the metaphysical question of whether a system has agency or is conscious, but it proves to be an interesting answer from a legal viewpoint.
The reason is this. If we assume that economy of description can be used to delineate systems that we would like to say exhibit agency, then we have a good way of determining which systems we should hold legally liable, and which systems ultimately need to be reduced to their creators and designers in order to allocate responsibility.