Future of work – second take

When we speak about the future of work we often do this: we assume that there will be a labor market much like today's, and that there will be jobs like the ones we have today – just different ones. It is as if we think we are moving from wanting bakers to wanting more doctors, and, well, what should the bakers do? It is really hard to become a doctor!

There are other possible perspectives, however. One is to ask how both the market and the jobs will change under a new technological paradigm.
First, the markets should become much faster at detecting new tasks and the skills needed to perform them. Pattern scans across labor market data make it possible to construct a kind of “skills radar” that would allow us to tailor and offer new skills much like you are recommended new movies when you use Netflix. Not just “Others with your title are studying this” but also “Others on a dynamic career trajectory are looking into this”. We should be able to build skill forecasts that are a lot like weather forecasts, and less like climate forecasts.
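Here is a toy version of that radar – a minimal sketch assuming nothing more than co-occurrence counting over career trajectories; the skills, data and function names are all invented for illustration:

```python
# A toy "skills radar": recommend skills by how often they co-occur with
# your current skills in other people's career trajectories, in the
# spirit of item-based recommendation. All data here is invented.
from collections import Counter
from itertools import combinations

trajectories = [
    ["sql", "python", "ml-basics"],
    ["sql", "python", "data-viz"],
    ["python", "ml-basics", "mlops"],
    ["excel", "sql", "python"],
]

# Count how often two skills appear in the same trajectory.
cooccur = Counter()
for t in trajectories:
    for a, b in combinations(sorted(set(t)), 2):
        cooccur[(a, b)] += 1
        cooccur[(b, a)] += 1

def recommend(known: set[str], k: int = 3) -> list[str]:
    """Score unknown skills by co-occurrence with the skills you have."""
    scores = Counter()
    for (a, b), n in cooccur.items():
        if a in known and b not in known:
            scores[b] += n
    return [skill for skill, _ in scores.most_common(k)]

print(recommend({"sql", "python"}))  # "ml-basics" scores highest here
```

A real radar would presumably weight trajectories by how dynamic they are – the “others on a dynamic career trajectory” signal above – rather than treating all trajectories equally.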

Second, we should be able to distinguish surface skills from deep skills – by mining labor market data we should be able to understand what general cognitive skills underpin the surface skills, which change much faster. Work has layers – using Excel is a surface skill, being able to abstract a problem into a lattice of mental models is a deep skill. Today we assume a lot about these deep skills – that they have to do with problem solving and mental models, for example – but we do not yet know.

Now, if we turn to look at the jobs themselves, a few things suggest themselves.

First, jobs today are bundles of tasks – and of social status and insurance and so on. These bundles are wholly put together by a single employer who guesstimates what kinds of skills they need and then hires for those assumed skills. This is not the only possible way to bundle tasks. You could imagine using ML to ask what skills are missing across the organisation and generate new jobs on the basis of those skills; there may well be hidden jobs – unexpected bundles of skills – that would improve your organisation immeasurably!

Second, the cost of assessing and bundling tasks is polarised. It is either put wholly on the employer or – in the gig economy – on the individual worker. This seems arbitrary. Why shouldn't we allow for new kinds of jobs that bundle tasks from Uber, Lyft and others and add a set of insurances to create a job? A platform solution for jobs would essentially allow you to generate jobs out of available tasks – and perhaps even do so dynamically, so that you achieve greater stability in the flow of tasks, and hence more economic value out of the bundle than out of the individual tasks. This latter point is key to building social benefits and insurance solutions into the new “job”.
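As a thought experiment, here is a minimal sketch of such a platform's core loop – greedy bundling of gig tasks into a week-sized “job”; platforms, pay and hours are all invented:

```python
# A minimal sketch of dynamic task bundling: greedily assemble tasks
# from several platforms into a "job" that fills a target number of
# hours. All platforms and numbers are invented.
from dataclasses import dataclass

@dataclass
class Task:
    platform: str
    pay: float
    hours: float

def bundle(tasks: list[Task], target_hours: float) -> list[Task]:
    """Pick the best-paying tasks per hour until the week is full."""
    chosen, hours = [], 0.0
    for t in sorted(tasks, key=lambda t: t.pay / t.hours, reverse=True):
        if hours + t.hours <= target_hours:
            chosen.append(t)
            hours += t.hours
    return chosen

week = [
    Task("uber", pay=120.0, hours=6),
    Task("lyft", pay=90.0, hours=4),
    Task("taskrabbit", pay=200.0, hours=8),
    Task("uber", pay=50.0, hours=3),
]
job = bundle(week, target_hours=15)
print(sum(t.pay for t in job), sum(t.hours for t in job))  # 340.0 15
```

The insurance point then becomes concrete: a platform that sees the whole flow of tasks could price cover against the variance of the resulting income stream, which is exactly the extra economic value the bundle has over the individual tasks.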

Third, it will be important to follow the evolution of centaur jobs. These may just be jobs where you look for someone who is really good at working with one set of neural networks or machine learning systems of a certain kind. These will, over time, become so complex as to almost exhibit “personalities” of different kinds – and you may temperamentally or otherwise be a better fit for some of these systems than others. It is also not impossible that AI/ML systems follow the individual to a certain degree – that what you offer the labor market is your joint, centaur labor.

Fourth, jobs may be collective and collaborative, and you could hire for collective skills that today you need to combine yourself. As coordination costs fall you can suddenly build new kinds of “macro jobs” that need to be performed by several individuals AND systems. The 1:1 relationship between an individual and a job may well dissolve.

The short-term future of work lies in the new jobs we need on an existing market; long term, we should look more into the changing nature of both those jobs and those markets to understand where we might want to move things. The way things work now was also, once, an entirely new way to think about things.

Innovation and evolution I: Speciation rates and innovation rates

As we explore analogies between innovation and evolution, there are some concepts that present intriguing questions. The idea of a speciation rate is one of these concepts and it allows us to ask questions about the pace of innovation in new ways.

Are speciation rates constant or rugged? That is: should we expect bursts of innovation at certain points? Cambrian explosions seem different from purely vertical evolution, from single cell to multi-cell etcetera.

Are speciation rates related to extinction rates? Will increases in extinction rates trigger increases in speciation? If these are entirely decoupled in a system it will have states with high extinction / low speciation that can be existentially threatening if they persist for too long. And what is extinction in innovation?

Are there measures of technical diversity alongside biological diversity, and if so, what is it that they measure?
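One way to make the coupling question concrete is a toy birth-death model, where each “species” (read: a technology or product line) independently speciates with probability s and goes extinct with probability e per step – a sketch with invented numbers, not a claim about real innovation data:

```python
# A toy birth-death model of diversity under constant, decoupled
# speciation and extinction rates. All numbers are illustrative.
import random

def simulate(n: int, s: float, e: float, steps: int) -> list[int]:
    """Track the species count under per-species rates s and e."""
    history = [n]
    for _ in range(steps):
        born = sum(random.random() < s for _ in range(n))
        dead = sum(random.random() < e for _ in range(n))
        n = max(n + born - dead, 0)
        history.append(n)
    return history

random.seed(1)
print(simulate(100, s=0.02, e=0.05, steps=50))  # high extinction, low speciation
print(simulate(100, s=0.05, e=0.02, steps=50))  # the reverse
```

Even this crude sketch exhibits the threatening state in the question above: with e > s and no coupling between the rates, expected diversity decays geometrically toward zero.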

Food for thought.

There are no singular facts (Questions II)

There is more to explore here, and more thoughts to test. Let's talk more about knowledge, and take two really simple examples. We believe we know the following.

(i) The earth is round.
(ii) Gravity is 9.8 m/s².

Our model here is one of knowledge as a set of propositions that can be justified and defended as knowledge – they can be deemed true or false, and the sum total of that body of propositions is all we know. We can add to it by adding new propositions and we can change our mind by throwing old propositions out and replacing them with new ones.

This model is incredibly strong, in the sense that it is often confused with reality (at least this is one way in which we can speak of the strength of a model – the probability p that it is mistaken for reality and not seen as a model at all), but it is just a model. A different model would say that everything you know is based on a question and the answer you provide for it – just as Plato has Socrates suggest. We can then reconstruct the example above in an interesting way.

(i) What is the best approximate geometrical form for representing the Earth in a simple model? The Earth is round.
(ii) What is the average gravity on planet Earth? 9.8 m/s².

Once you explicate the question that the proposition is an answer to, you suddenly also realize the limits of the answer. If we are looking for the gravity at a specific place on earth, such as the top of Mount Everest, the answer may be different. If we are looking for a more exact representation of the earth, with all the topographical and geological data exact, the round model will not suffice. Articulating the question that the proposition you say you know is an answer to opens up the proposition and your knowledge, and helps you see something potentially fundamental, if it holds up to closer scrutiny.

There are no isolated facts.

Facts, in this new model, are always answers to questions, and if you do not know the question you do not really understand the limits and value of a fact. This is one alternative way of addressing the notion of “a half-life of facts” as laid out by Sam Arbesman in his brilliant book on how facts cease being facts over time. The reality is that they do not cease being facts, but the questions we are asking change subtly over time with new knowledge.

Note that this model is in no way a defense of relativism. It is the opposite: questions and answers provide a strong bedrock on which we can build our world, and we can definitely say that not every answer suffices to answer a question. There are good and bad answers to questions (although more rarely bad questions).

So, then, when Obama says that we need to operate our political discussions and debates from a common baseline of facts, or when Senator Moynihan argued that you are entitled to your own opinions but not your own facts, we can read them under the new model as saying something different.

Obama’s statement turns into a statement about agreeing on questions and what the answers to those questions are – and frankly that may be the real challenge we face with populism: a mismatch between the questions we ask and those the populists ask.

Senator Moynihan’s point is that if we agree on the questions you don’t get to invent answers – but your opinions matter in choosing what questions we ask.

So, what does the new model suggest? It suggests the following: you don’t have knowledge. There are no facts. You have and share with society a set of questions and answers and that is where we need to begin all political dialogue. These provide a solid foundation – an even more solid foundation – for our common polis than propositions do, and a return to them may be the long term cure for things like fact resistance, fake news, propaganda, polarization and populism. But it is no quick fix.

Strong claims, but interesting ones – and ones worthy of more exploration as we start digging deeper.

Socratic epistemology, Hintikka, questions and the end of propositional logic (Questions I)

The question of what knowledge is can be understood in different ways. One way to understand it is to focus on what it means to know something. The majority view here is that knowledge is about propositions that we can examine from different perspectives. Examples would include things like:

  • The earth is round.
  • Gravity is a force.
  • Under simple conditions demand and supply meet in a market.

These propositions can then be true or false and the value we assign to them decides if they are included in our knowledge. The way we assign truth or falsity can vary. In some theories truth is about correspondence with reality, and in some it is about coherence in the set of propositions we hold to be true.

Now, admittedly this is a quick sketch of our theory of knowledge, but it suffices to ask a very basic question. Why do we believe that propositions are fundamental to knowledge? Why do we believe that they are the atoms of which knowledge is constituted?

Philosopher and historian of ideas RG Collingwood thought the explanation for this was simple: logic and grammar grew up together, as sciences, so we ended up confusing one with the other. There are, Collingwood asserts, no reasons for assuming that knowledge breaks down into propositions. There are no grounds for asserting that propositions are more basic than other alternatives. The reason we have propositional logic is just because logic is so entwined with grammar.

That leaves us with an interesting problem: what, then, is knowledge made of?

*

Socrates was clear. In Plato’s Theaetetus we find the following discussion in passing:

I mean the conversation which the soul holds with herself in considering of anything. I speak of what I scarcely understand; but the soul when thinking appears to me to be just talking—asking questions of herself and answering them, affirming and denying. And when she has arrived at a decision, either gradually or by a sudden impulse, and has at last agreed, and does not doubt, this is called her opinion. I say, then, that to form an opinion is to speak, and opinion is a word spoken,—I mean, to oneself and in silence, not aloud or to another: What think you?

This idea, that knowledge may be dialogical, that it may consist in a set of questions and answers to those questions is key to open another perspective on knowledge. It also, potentially, explains the attraction of the dialogue form for the Greeks: what better way to structure philosophical debate than in the same way knowledge is structured and produced? Why state propositions, when dialogue mimics the way we ourselves arrive at knowledge?

It is worthwhile taking a moment here. In one way this all seems so evident: of course we ask ourselves questions to know! That is how we arrive at the propositions we hold true! But this is exactly where we need to pause. The reality is that the leap from questions and answers to propositions is uncalled for – a leap that fools us into believing that questions are merely tools with which we uncover our propositions. Shovels that shovel aside the falsity from the truth. But knowledge is not like nuggets of gold buried in the earth – knowledge is the tension between answer and question in equilibrium. If you change the question, the balance of the whole thing changes as well – and your knowledge is changed.

As an aside: that is why, in belief revision, we often are interested in generating surprise in the person whose views we want to change. One way to describe surprise is as an unexpected answer to a question, one that forces a new question to be asked; the network of questions and answers is then updated to reflect a new belief – a new pair of questions and answers.

This minority view is found again in people like RG Collingwood, who writes extensively about the fundamental nature of questions, and it has been explicated at length by Jaakko Hintikka, who in his later philosophy developed what he called Socratic epistemology. In the next couple of posts we will examine what this could mean for our view of the conscious mind, and perhaps also for our view of artificial intelligence.

I think it will allow us to say that the Turing test was the wrong way around: the questions should have been asked of the test leader by the human subject and the computer. It will also allow us to understand why human questioning is so surprisingly efficient, and why randomly generating queries is a horrible way to learn any subject. Human questions shape the field of knowledge in an interesting way, and we see this in the peculiar shape of human go games within the overall game space of go, but equally in the shape of human knowledge in chess.

*

When new models for learning are devised they are able to explore completely different parts of the problem space, parts you don’t easily reach with the kinds of questions that we have been asking. Questions have a penumbra of possible knowledge, and I suspect – although this will be good to explore further – that our ability to question is intrinsically human, and perhaps in some sense even biological. Here I would point to the excellent work of professor Joseph Jordania on questions and evolutionary theory, in his work Who Asked The First Question?.

This is an area of exploration that I have been mining for some time now with a close collaborator in professor Fredrik Stjernberg, and we are getting ready to sum up the first part of our work soon, I hope. It is not just theoretical, but suggests interesting possibilities like dialogical networks (rather than adversarial ones) and a science of possible categories of questions and ways to ask new questions, or better questions.

Weil’s paradox: intention and speech (Fake News Notes #8)

Simone Weil, in her curious book Need for Roots, notes the following on the necessity for freedom of opinion:

[…] it would be desirable to create an absolutely free reserve in the field of publication, but in such a way as for it to be understood that the works found therein did not pledge their authors in any way and contained no direct advice for readers. There it would be possible to find, set out in their full force, all the arguments in favour of bad causes. It would be an excellent and salutary thing for them to be so displayed. Anybody could there sing the praises of what he most condemns. It would be publicly recognized that the object of such works was not to define their authors’ attitudes vis-à-vis the problems of life, but to contribute, by preliminary researches, towards a complete and correct tabulation of data concerning each problem. The law would see to it that their publication did not involve any risk of whatever kind for the author.

Simone Weil, Need for Roots, p. 22

She is imagining here a sphere where anything can be said, any view expressed and explored, all data examined – and it is interesting that she mentions data, because she is aware that part of the challenge is not just what is said, but what data is collected and shared on social problems. But she also recognizes that such a completely free space needs to be distinguished from the public sphere of persuasion and debate:

On the other hand, publications destined to influence what is called opinion, that is to say, in effect, the conduct of life, constitute acts and ought to be subjected to the same restrictions as are all acts. In other words, they should not cause unlawful harm of any kind to any human being, and above all, should never contain any denial, explicit or implicit, of the eternal obligations towards the human being, once these obligations have been solemnly recognized by law.

Simone Weil, Need for Roots, ibid.

This category – “publications destined to influence what is called opinion” – she wants to treat differently. Here she wants the full machinery of not just law, but also morals, to apply. Then she notes, wryly one thinks, that this will present some legal challenges:

The distinction between the two fields, the one which is outside action and the one which forms part of action, is impossible to express on paper in juridical terminology. But that doesn’t prevent it from being a perfectly clear one.

Simone Weil, Need for Roots, ibid.

This captures, in a way, the challenge that faces platforms today. The inability to express this legally is acutely felt by most who study the area, and Weil's articulation of the two competing interests – free thought and human responsibility – is clean and clear.

Now, the question is: can we find any other way to express this than in law? Are there technologies that could help us here? We could imagine several models.

One would be to develop a domain for the public sphere, for speech that intends to influence – an “on the record” mode for the flat information surfaces of the web. You could do this trivially by signing your statement in different ways, and statements could be signed by several different people as well – the ability to support a statement in a personal way is implicit in the oft-cited disclaimers on Twitter, where we are always told that RT does not equal endorsement. But the really interesting question is how we do endorse something, and whether we can endorse statements and beliefs with different force.

Imagine a web where we could choose not just to publish, but to publish irrevocably (this is surely connected with discussions around blockchain) and to publish with the strength of not just one individual, but several. Imagine that we could replicate editorial accountability not just in law, but by availing those who seek it of a mode of publishing, a technological way of asserting their accountability. That would allow us to take Weil's clear distinction and turn it into a real one.
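A minimal sketch of such an “on the record” mode, assuming an off-the-shelf signature scheme (Ed25519 via the Python cryptography package); the Endorsement structure and its strength labels are hypothetical illustrations, not an existing protocol:

```python
# Multi-party, graded endorsement of a statement: the statement is
# content-addressed by its hash, and anyone can sign that hash together
# with a declared endorsement strength.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def statement_id(text: str) -> bytes:
    """Content-address the statement so endorsements bind to exact wording."""
    return hashlib.sha256(text.encode("utf-8")).digest()

class Endorsement:
    """One person's signature over a statement, with a declared strength."""

    def __init__(self, signer: Ed25519PrivateKey, text: str, strength: str):
        self.public_key = signer.public_key()
        self.strength = strength  # e.g. "authored", "endorsed", "shared-only"
        self.signature = signer.sign(statement_id(text) + strength.encode())

    def verify(self, text: str) -> bool:
        try:
            self.public_key.verify(
                self.signature, statement_id(text) + self.strength.encode()
            )
            return True
        except InvalidSignature:
            return False

# Two people put their names behind the same statement, with different force.
alice, bob = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
text = "This statement is made on the record."
record = [Endorsement(alice, text, "authored"), Endorsement(bob, text, "endorsed")]
assert all(e.verify(text) for e in record)
```

Irrevocability – the blockchain-flavored part – is not sketched here; it would amount to anchoring the statement hash in some append-only public log.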

It would require, of course, that we accept that there is a lot of “speech” we disagree with – if we use that as the generic term for the first category of opinion that Weil explores. But we would be able to hold those who utter “opinions” – the second category, speech intended to influence and change minds – accountable.

One solution to the issue of misinformation or disagreeable information or speech is to add dimensionality to the flat information surfaces we are interacting with today.

Lessons from Lucy Kellaway

I have been following, with increasing interest, Lucy Kellaway's second career as a teacher, and the movement she has started around a second career aimed at giving back. It makes a lot of sense. In her latest column she muses on what happens with status as you change from a high-powered job to become a teacher, and she notes that it depends on whether you derive your sense of self-worth from external or internal sources. Perhaps, she argues, older people can drop the need for external validation and instead build their sense of self-worth on their own evaluation of themselves.

As I tread closer to my 50s, I find I think more and more about what it is that I want to spend the next 10-20 years doing and how I want to approach them. It is not a simple question, and I like my current work – but there is something intriguing in the notion of a second career. If health and circumstance allow I think it could be worthwhile exploring options and ideas around at least a project or some kind of work that would be different from what I have done so far.

We are all after all just experiments in living, so maybe we should embrace that more. Now, this is a metacomment, but I wanted to make a note of these thoughts to make sure that I come back to them and perhaps even hold myself accountable for thinking this through properly. Sometimes we need to write things down to seed a change. In due time, without any hurry, but rigorously and with a certain slowness.

What is your cathedral?

Time is a funny thing, and the perspectives that you can get if you shift time around are extraordinarily valuable. Take a simple example: not long ago it was common to engage in building things that would take more than one generation to finish – giant houses, cathedrals, organizations. Today we barely engage in projects that take longer than a year – in fact, that seems long to some people. A three-month project, or a three-week sprint, is preferable.

And there is some truth to this. Slicing time finely is a way to ensure that progress is made – even in very long projects. But the curious effect we are witnessing today, where the slicing of time into finer and finer moments also shortens the horizons of our projects, seems unfortunate.

Sir Martin Rees recently gave a talk at the Long Now Foundation where one of the themes he mused on was this. He offered a theory for why we find ourselves in this state: the pace of change is such that it makes no sense to undertake very long projects. We can build cathedrals in a year if we want to, and the more powerful our technology becomes, the faster we will be able to do so. The extreme case? Starting to build a cathedral in an age where you know that within a short time frame – years – you will be able to 3D-print one quickly and at low cost makes no sense; better, then, to wait for the technology to reach a stage where it can solve the problem for you.

If we dig here we find a fundamental observation:

(i) In a society where technology develops fast it always makes sense to examine whether the time t1 it takes to create something now is greater than the sum of the time t2 you have to wait for the technology to improve and the much shorter time t3 it will then take.

If you want to construct something that would take five years to build, but you think you would be able to build it in two years if you wait one year – well, the rational thing to do is simply to wait and then do it, right?
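Spelled out as a decision rule (the notation is mine, not Rees's): with $t_1$ the time to build now, $t_2$ the time you wait, and $t_3$ the build time after waiting,

$$\text{wait} \iff t_2 + t_3 < t_1.$$

In the example above, $t_2 + t_3 = 1 + 2 = 3 < 5 = t_1$, so waiting wins.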

That sentiment or feeling may be a driving factor, as Sir Martin argues, behind the collapse of our horizons to short-term windows. But it also seems to be something that potentially excludes us from the experience of being part of something greater that will be finished not by you, but by generations to come.

The horizon of your work matters. It is fine to be “productive” in the sense that you finalize a lot of things, but maybe it would also be meaningful and interesting to have a cathedral project. Something you engage in that will live on beyond you, that will take 100 or 1,000 years to complete, if it is completed at all.

We have far too few such projects today. Arguably science is such a practice – but it is not a project. Think about it: if you were to start such a project or find one — what would it be? The Long Now Foundation has certainly found such a project in its clock, but that remains one of the few examples of “cathedral”-projects today (Sagrada Familia is also a good example – it is under way and is a proper cathedral, but we cannot all build cathedrals proper).

Books: Semiosis by Sue Burke

Just finished this excellent and surprising science fiction book. It explores several different themes – our ability to start anew on a new planet, our inherent nature, our relationship to nature and plants (!) and the growing suspicion that we are always doing someone else’s bidding. It is also beautifully written, with living characters and original ideas.

One of the themes that will stay with me is how nature always plays a dominance game, and how the Darwinian struggle is in some way a ground truth that we have to understand and relate to. I have always felt somewhat uneasy with that conclusion, but I think that is ultimately because there is a mono-semiosis assumption there: all things must be interpreted in light of this fact. They need not be, and Burke highlights how dominance strategies may evolve into altruistic strategies, almost in an emergent fashion. I found that striking, and important.

Overall, we should resist the notion that there are ground truths that are more true than other things; truth is a coherence space of beliefs and interpretations. Not in a postmodern way, but in a much more complicated way – this is why I often return to the Wittgensteinian notion of a “form of life”. Only within that can sense be made of anything.

(Is this not also then a “ground truth”? You could make that argument I suppose, but at some point you just reach not truths but the event horizon of axiomatic necessity. We are not infinite and cannot extend reason infinitely).

So – a recommended read, and an interesting set of issues and questions.

Computational vs Biological Thinking (Man / Machine XII)

Our study of thinking has so far been characterised by a need to formalize thinking. Ever since Boole's “Laws of Thought” the underlying assumption and metaphor for thinking has been mathematical or physical – even mechanical, and always binary. Logic has been elevated to the position of pure thought, and we have even succumbed to thinking that if we deviate from logic or mathematics in our thinking, then that is a sign that our thinking is flawed and biased.

There is great value to this line of study and investigation. It allows us to test our own thinking in a model and evaluate it from the perspective of a formal model for thinking. But there is also a risk associated with this project, a risk that may become more troubling as our surrounding world becomes more complex, and it is this: that we neglect the study of biological thinking.

One way of framing this problem is to say that we have two different models of thinking: computational and biological. The computational is mathematical and follows the rules of logic; the biological is different, in that it forces us to ask things about how we think that are simply assumed in computational thinking.

Let’s take a very simple example – the so-called conjunction fallacy. The simplest rendition of this fallacy is a case often called “Linda the bank teller”.

This is the standard case:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

Linda is a bank teller.

Linda is a bank teller and is active in the feminist movement.

https://en.wikipedia.org/wiki/Conjunction_fallacy

What computational thinking tells us is that the first proposition is always at least as probable as the second. It follows from the fact that a probability p is always greater than or equal to the product p × q, and strictly greater whenever p > 0 and q < 1.
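In probability notation, with $A$ standing for “Linda is a bank teller” and $B$ for “Linda is active in the feminist movement”:

$$P(A \land B) = P(A)\,P(B \mid A) \le P(A),$$

with equality only when $P(B \mid A) = 1$: the conjunction can never be strictly more probable than either of its conjuncts.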

Yet a surprising number of people seem to think that it is more likely that Linda is a bank teller and active in the feminist movement. Are they wrong? Or are they just thinking in a different mode?

We could argue that they are simply chunking the world differently. The assumption underlying computational thinking is that it is possible to formalize the world into single statement propositions and that these formalizations are obvious. We thus take the second statement to be a compound statement – p AND q – and so we end up saying that it is necessarily less probable than just p. But we could challenge that and simply say that the second proposition is as elementary as the first.

What is at stake here is the idea of atomistic propositions or elementary statements. Underlying the idea of formalized propositions is the idea that there is a hierarchy of statements or propositions starting from “single fact”-propositions like “Linda is a bank teller” and moving on to more complex compound propositions like “Linda is a bank teller AND active in the feminist movement”.

Computational thinking chunks the world this way, but biological thinking does not. One way to think about it is to say that for computational thinking a proposition is a statement about the state of affairs in the world for a single variable, whereas for biological thinking it is a statement about the state of affairs for multiple related variables that are neither separable nor possible to chunk individually.

What sets up the state space we are asked to predict is the premises, and they define that space as one that contains facts about someone's activism. The premises determine the chunking of the state space, and the proposition “Linda is a bank teller and active in the feminist movement” is a singular, elementary proposition in the state space set up by the premises – not a compound statement.

What we must challenge here is the idea that chunking state spaces into elementary propositions is the same as chunking them into the smallest possible propositions. For computational thinking this holds true – but not for biological thinking.

The result of this line of argument is intriguing: it suggests that what is commonly identified as a bias here is in fact only a bias if you assume that computational thinking is the ideal to which we are all to be held – but that in itself is a value judgment. Why is one way of chunking the state space better than another?

Another version of this argument is to say that the premises set up a proposition chunk that contains a statement about activism, so that the suppressed second part of “Linda is a bank teller” is “and NOT active in the feminist movement” and cannot be excluded. That you do not write it out does not mean that the chunk lacks that second part; the premises set it up as the natural chunking of the state space we are asked to predict.
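To make the two chunkings concrete, here is a toy rendering with invented numbers – the structure of the argument, not a model of the actual experiments:

```python
# The premise-induced state space: (occupation, activism) pairs are the
# elementary cells. All probabilities are invented for illustration.
joint = {
    ("bank teller", "feminist activist"): 0.10,
    ("bank teller", "not active"): 0.02,
    ("social worker", "feminist activist"): 0.70,
    ("social worker", "not active"): 0.18,
}
assert abs(sum(joint.values()) - 1.0) < 1e-9

# Computational chunking: "Linda is a bank teller" is a marginal, a
# union of elementary cells, so the conjunction can never beat it.
p_teller = sum(p for (occ, act), p in joint.items() if occ == "bank teller")
p_teller_and_feminist = joint[("bank teller", "feminist activist")]
assert p_teller >= p_teller_and_feminist  # the conjunction rule

# Biological chunking: the bare option is read as the elementary cell
# "bank teller AND not active", and then the comparison can flip.
p_teller_not_active = joint[("bank teller", "not active")]
print(p_teller_and_feminist > p_teller_not_active)  # True: 0.10 > 0.02
```

On the first reading the subjects are wrong by definition; on the second they are comparing two elementary cells, and with these (invented) numbers their answer is the correct one.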

The real failure, then, is to assume that “Linda is a bank teller” is the most probable statement – and that is not a failure of bias as such, but an interesting kind of thinking-frame failure: the inability to move away from computational thinking instilled through study and application.

It is well known that economists become more “rational” than others – that they are infected with mathematical rationality through study. Maybe there is a larger distortion in psychology, where tests are infected with computational thinking? Are there other biases that are just examples of being unable to move from the biological frame of thinking?

Digital legal persons? Fragments (Man / Machine XI and Identity / Privacy III)

The following are notes ahead of a panel discussion this afternoon, where we will discuss the need for a legal structure for digital persons in the wake of the general discussion of artificial intelligence. 

The idea of a digital assistant seems to suggest a world in which we will see new legal actors. These actors will buy, order, negotiate and represent us in different ways, and so will have a massive impact on the emerging legal landscape. How do we approach this in the best possible way?

One strawman suggestion would be to propose a new legal construct in addition to natural and legal persons, people and companies, and introduce a new legal category for digital persons. The construct could be used to answer questions like:

  • What actions can a digital person perform on behalf of another person and how is this defined in a structured way?
  • How is the responsibility of the digital person divided among the four Aristotelian causes? Hardware error, software error, coder error and objective error all seem to suggest different responsible actors behind the digital person. Hardware manufacturers would be responsible for malfunctions there, software producers for errors in software, and coders for errors that could not be seen as falling within the scope of the software companies – finally, the one asking the assistant to perform a task would have a responsibility for a clearly defined task and objective. (A structured sketch of this follows the list.)
  • In n-person interactions between digital persons with complex failures, who is then responsible?
  • Is there a preference for human / digital person responsibility?
  • What legal rights and legal capacities does a digital person have? This one may still seem to be in the realm of science fiction – but remember that by legal rights we can also mean the right to incur a debt on behalf of a non-identified actor, and we may well see digital persons that perform institutional tasks rather than just representative tasks.
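As a thought experiment, the first two bullets could be given a structured form along these lines – a hypothetical sketch, with all names and the fallback rule invented for illustration:

```python
# A sketch of a digital person's permitted actions and of the mapping
# from failure modes to responsible parties. Everything here is a
# hypothetical illustration, not an existing legal or technical schema.
from dataclasses import dataclass, field
from enum import Enum

class FailureMode(Enum):
    # Loosely following the four causes named above.
    HARDWARE = "hardware error"    # the manufacturer
    SOFTWARE = "software error"    # the software producer
    CODER = "coder error"          # the individual coder
    OBJECTIVE = "objective error"  # whoever set the task

@dataclass
class DigitalPerson:
    principal: str                      # whom the agent represents
    permitted_actions: set[str]         # structured scope of agency
    responsibility: dict[FailureMode, str] = field(default_factory=dict)

    def may(self, action: str) -> bool:
        return action in self.permitted_actions

    def responsible_party(self, failure: FailureMode) -> str:
        # Fall back to the principal if no party is registered.
        return self.responsibility.get(failure, self.principal)

assistant = DigitalPerson(
    principal="Alice",
    permitted_actions={"order groceries", "negotiate price"},
    responsibility={
        FailureMode.HARDWARE: "Acme Devices Inc.",
        FailureMode.SOFTWARE: "AssistantSoft AB",
        FailureMode.OBJECTIVE: "Alice",
    },
)
assert assistant.may("order groceries")
assert assistant.responsible_party(FailureMode.HARDWARE) == "Acme Devices Inc."
```

The n-person and preference questions in the list are exactly the ones such a schema does not answer, which is part of the point: the structure is easy, the allocation is not.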

There are multiple other questions here as well, that would need to be examined more closely. Now, there are also questions that can be raised about this idea, and that seem to complicate things somewhat. Here are a few of the questions that occur to me.

Dan Dennett has pointed out that one challenge with artificial intelligence is that we are building systems that have amazing competence without the corresponding comprehension. Is comprehension not a prerequisite for legal capacity and legal rights? Perhaps not, but we would do well to examine the nature of legal persons – of companies – when we dig deeper into the need for digital persons in law.

What is a company? It is a legal entity defined by a founding document of some kind, with a set of responsible natural persons identified clearly under the charter and operations of that company. In a sense that makes it a piece of software. A legal person, as identified today, is at least an information-processing system with human elements. It has no comprehension as such (in fact legal persons are reminiscent of Searle's Chinese room in a sense: they can act intelligently without us being able to locate the intelligence in the organization in any specific place). So – maybe we could say that the law already recognizes algorithmic persons, because that is exactly what a legal entity like a company is.

So, you can have legal rights and legal capacity based on a system of significant competence but without individual comprehension. The comprehension in the company is located in the specific institutions where the responsibility is located, e.g. the board. The company is held responsible for its actions through holding the board responsible, and the board is made up of natural persons – so maybe we could say that legal persons have derived legal rights, responsibilities and capacity?

Perhaps, but it is not crystal clear. In the US there is an evolving notion of corporate personhood that actually situates the rights and responsibilities within the corporation as such, and affords it constitutional protection. At the center of this debate over the last few years has been the issue of campaign finance, and Citizens United.

At this point it seems we could suggest that the easiest way to deal with the issue of digital persons would be to simply incorporate digital assistants and AIs as they take on more and more complex tasks. Doing this would also allow for existing insurance schemes to adapt and develop around digital persons, and would resolve many issues by “borrowing” from the received case law.

Questions around free expression for digital assistants would be resolved by reference to Citizens United, for example, in the US. Now, let's be clear: this would be tricky. In fact, it would arguably mean that incorporated bot networks had free speech rights, something that flies in the face of how we have viewed election integrity and fake news. But incorporation would also place duties on these digital persons in the shape of economic reporting, transparency and the possibility of legal dissolution if there were illegal behavior by the digital persons in question. Turning digital persons into property would also allow for a market in experienced neural networks, in a way that could be intriguing to examine more closely.

An interesting task here would also be to examine how rights such as privacy would apply to these new corporations. Privacy, purely from an instrumental perspective here, would be important for a digital person to be able to conceal certain facts and patterns about itself, to retain the ability to act freely and negotiate. Is there, then, such a thing as digital privacy that is distinct from natural privacy?

This is, perhaps, a track worth exploring more – knowing full well the complexities it seems to imply (not least the proliferation of legal persons and what that would do to existing institutional frameworks).

Another, separate track of investigation would be to look at a different concept: digital agency. Here we would not focus on the assistants as “persons”; instead we would admit that that framing flows only from the analogy, not from any closer analysis. When we speak of artificial intelligence as a separate thing, as some entity, we are lazily following along with a series of unchallenged assumptions. The more realistic scenarios are all about augmented intelligence, and so about an extended penumbra of digital agency on top of our own human agency – and the real question then becomes one about how we integrate that extended agency into our analysis of contract law, tort law and criminal law.

There is – we would say – no such thing as a separate digital person, but just a person with augmented agency, and the better analysis would be to examine how that can be represented well in legal analysis. This is no small task, however, since a more and more networked agency dissolves the idea of legal personhood to a large degree, in a way that is philosophically interesting.

Much of the legal system has required the identification of a responsible individual. Where that identification fails, no one has been held responsible, even if it is quite possible to say that there is a class of people, or a network, that carries distributed responsibility. We have, for classical liberal reasons, been hesitant to accept any criminal judgment that is based on joint responsibility in cases where the defendants identify each other as the real criminal. There are many different philosophical questions that need to be examined here – starting with the difference between augmented agency, digital agency, individual agency, networked agency, collective agency and similar concepts. Other issues would revolve around whether we believe that we can pulverize legal rights and responsibility and say that someone is 0.5451 responsible for a bad economic decision. A distribution of responsibility that equates to the probability that you should have caught the failure multiplied by the cost for you to do so would introduce an ultra-rational approach to legal responsibility – perhaps more fair from an economic standpoint, but more questionable in criminal cases.
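Read literally, that formula could be normalized into shares – a possible formalization, mine rather than anything established in law: with $p_i$ the probability that actor $i$ should have caught the failure and $c_i$ the cost for $i$ of doing so,

$$r_i = \frac{p_i\,c_i}{\sum_j p_j\,c_j}, \qquad \sum_i r_i = 1,$$

so that each actor carries a fractional share $r_i$ of the total responsibility.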

And where an entire network has failed a young person subsequently caught for a crime – could one sentence the entire network? Are there cases where we are all somewhat responsible because of our actions or inactions? The dissolution of agency raises questions an order of magnitude more complex than those raised by simply introducing a new kind of person, but it is still an intriguing avenue to explore.

As the law of artificial intelligence evolves, it is also interesting to take its endpoint into account. If we assume that we will one day reach artificial general intelligence, then what we will have done is most likely to have created something towards which we have what Wittgenstein called an attitude towards a soul. At that point, any such new entities are likely, in a legal sense, human, if we interact with them as human. And then no legal change at all is needed. So what do we say about the intermediate stages and steps, and the need for a legal evolution that ultimately – we all recognize – will just bring us back to where we are today?