Let’s assume that we have designed a good way of determining agency. How would we then determine the consequences of liability where we have established that there is agency? Here we encounter an interesting observation. It feels wholly unsatisfactory to assign agency, and thus liability, to a system that cannot recognize that it is being held responsible. Think about it: assume that a system killed a man by, say, scheduling a machine’s operation in the wrong way, and that we have determined that it did so in order to kill him – because it found him inefficient, say. Would we then say that the system should be held responsible for murder? If we have established that it has agency, that it acted autonomously, then the answer seems to be yes.
But does shutting down the system really feel like an adequate response to a murder? Is it really the equivalent of the death penalty? This question reveals something interesting about agency. One way of thinking about agency would be to say that we assign it only to those systems we believe can feel guilt, remorse or a sense of self-preservation – or, more broadly, regret.
The relationship between regret and agency is interesting, and has to do with our innate will to punish. We know that people are neurologically hard-wired to feel pleasure when punishing social wrongdoers. This seems to suggest that we view punishment as a recognition of agency. Is it, then, only when we feel satisfied in punishing an actor that we can safely say that our attitude towards that actor is an attitude towards a soul? That seems an awful thought, but it is probably not far from the truth.
When we believe a punishment is felt, we also believe there is agency, someone might say. And conversely, it may be only when we have an attitude towards a soul that we punish someone at all.