Gossiping about AI (Man / Machine XII)

There are plenty of studies of gossip as a social phenomenon, and there are computer science models of gossiping that allow for information distribution in systems. There are even gossip learning systems that compete with or constitute alternatives to federated learning models. But here is a question I have not found any serious discussion of in the literature: what would it mean to gossip about an artificial intelligence? I tend to think that this would constitute a really interesting social Turing test – and we could state it thus:

(i) A system is only socially intelligent and relevant if it is the object of gossip or can become the object of gossip.

This would mean that an AI only gains some kind of social existence once we confide in each other what we have heard about it. Intelligence, by the way, is probably the wrong word here — but the point remains. To be gossiped about is to be social in a very human way. We do not gossip about dogs or birds, we do not gossip about buildings or machines. We gossip about other subjects.

*

This connects with a wider discussion about the social nature of intelligence, and how the model of intelligence we have is somewhat simplified. We tend to talk about intelligence as individual, but in reality it is a network concept: your intelligence is a function of the networks you exist in and are a part of. Not only, but partly.

I feel strongly, for example, that I am more intelligent in some sense because I have the privilege to work with outstanding individuals, but I also know that they in turn get to shine even more because they work with other outstanding individuals. The group augments the individual’s talents and shapes them.

That would be another factor to take into account if we are designing social intelligence Turing tests: does the subject of the test become more or less intelligent with others? Kasparov has suggested that man and machine together always beat machine alone – but that is largely because of man's ability to adapt to and integrate into a system. Would machine and machine beat machine? Probably not — in fact, you could even imagine the overall result being negative! This quality – additive intelligence – is interesting.

*

I have written elsewhere that we get stuck in language when we speak of artificial intelligence. That it would be better to speak of sophisticity or something like that – a new word that describes certain cognitive skills bundled in different ways. I do believe that would allow us a debate that is not so hopelessly anthropocentric. We are collectively sometimes egomaniacs, occupied only with the question of how something relates to us.

Thinking about what bundles of cognitive skills I would include, then, I think the social additive quality is important, and maybe it is a cognitive skill to be able to be gossiped about, in some sense. Not a skill, perhaps, but a quality. There is something there, I think. More to explore, later.

Future of work – second take

When we speak about the future of work we often do this: we assume that there will be a labor market much like today, and that there will be jobs like the ones we have today, but that they will just be different jobs. It is as if we think we are moving from wanting bakers to wanting more doctors, and well, what should the bakers do? It is really hard to become a doctor!

There are other possible perspectives, however. One is to ask how both the market and the jobs will change under a new technological paradigm.

First, the markets should become much faster at detecting new tasks and the skills needed to perform them. Pattern scans across labor market data make it possible to construct a kind of “skills radar” that would allow us to tailor and offer new skills much as you are recommended new movies on Netflix. Not just “Others with your title are studying this” but also “Others on a dynamic career trajectory are looking into this”. We should be able to build skill forecasts that are a lot like weather forecasts, and less like climate forecasts.
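As a purely illustrative sketch of what such a “skills radar” could look like in code, assuming (hypothetically) that we already have anonymized records of which skills people on dynamic career trajectories have recently added (all names and data below are invented):

```python
from collections import Counter

# Invented, anonymized example data: skill sets recently added by people
# judged (somehow) to be on a dynamic career trajectory.
trajectory_skill_additions = [
    {"sql", "data_visualisation"},
    {"sql", "causal_inference"},
    {"prompt_engineering", "sql"},
    {"causal_inference", "data_visualisation"},
]

def skills_radar(my_skills, peer_additions, top_n=3):
    """Recommend the skills dynamic peers add most often that I do not yet have."""
    counts = Counter()
    for added in peer_additions:
        counts.update(added - my_skills)
    return [skill for skill, _ in counts.most_common(top_n)]

print(skills_radar({"sql"}, trajectory_skill_additions))
# e.g. ['data_visualisation', 'causal_inference', 'prompt_engineering']
```

A real radar would obviously need far richer signals than simple co-occurrence counts, but the recommendation logic is of the same family as the movie example.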

Second, we should be able to distinguish surface skills from deep skills — by mining labor market data we should be able to understand which general cognitive skills underpin the surface skills, and so deal better with what changes fastest. Work has layers – using Excel is a surface skill, being able to abstract a problem into a lattice of mental models is a deep skill. Today we assume a lot about these deep skills – that they have to do with problem solving and mental models, for example – but we do not really know yet.

Now, if we turn to look at the jobs themselves, a few things suggest themselves.

First, jobs today are bundles of tasks – and of social status, insurance and so on. These bundles are wholly put together by a single employer who will guesstimate what kinds of skills they need and then hire for those assumed skills. This is not the only possible way to bundle tasks. You could imagine using ML to ask what skills are missing across the organisation and generate new jobs on the basis of those skills; there may well be hidden jobs – unexpected bundles of skills – that would improve your organisation immeasurably!
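A minimal, hypothetical sketch of that kind of gap analysis, assuming (as a large simplification) that both the organisation's desired tasks and its employees can be represented as sets of skills:

```python
# Invented, illustrative data: the skills each organisational task calls for,
# and the skills current employees actually have.
required_by_task = {
    "forecast_demand": {"statistics", "domain_knowledge", "python"},
    "automate_reporting": {"python", "data_engineering"},
    "negotiate_partnerships": {"negotiation", "domain_knowledge"},
}
employee_skills = {
    "ada": {"python", "statistics"},
    "grace": {"domain_knowledge", "negotiation"},
}

def hidden_job(required_by_task, employee_skills):
    """Bundle the required skills no current employee covers into a candidate new job."""
    covered = set().union(*employee_skills.values())
    needed = set().union(*required_by_task.values())
    return needed - covered

print(hidden_job(required_by_task, employee_skills))
# -> {'data_engineering'}: a skill bundle nobody thought to hire for
```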

Second, the cost of assessing and bundling tasks is polarised. It is either wholly put on the employer or – in the gig economy – on the individual worker. This seems arbitrary. Why shouldn't we allow for new kinds of jobs that bundle tasks from Uber, Lyft and others and add on a set of insurances to create a job? A platform solution for jobs would essentially allow you to generate jobs out of available tasks – and perhaps even do so dynamically, so that you achieve greater stability in the flow of tasks, and hence more economic value out of the bundle than out of the individual tasks. This latter point is key to building social benefits and insurance solutions into the new “job”.
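A toy version of the bundling idea, in which the platforms, the task data and the target of a steady twenty-hour bundle are all invented; a real platform would optimise over time and risk rather than making a single greedy pass:

```python
# Invented example tasks from different platforms: (platform, hours, pay).
tasks = [
    ("rides", 6, 120), ("deliveries", 4, 70), ("rides", 8, 150),
    ("translation", 5, 200), ("deliveries", 3, 45), ("surveys", 2, 30),
]

def bundle_into_job(tasks, target_hours=20):
    """Greedily bundle the best-paying tasks until they resemble a steady part-time job."""
    bundle, hours, pay = [], 0, 0
    for platform, h, p in sorted(tasks, key=lambda t: t[2] / t[1], reverse=True):
        if hours + h > target_hours:
            continue  # skip tasks that would overshoot the stable workload
        bundle.append(platform)
        hours += h
        pay += p
    return bundle, hours, pay  # a stable bundle is what benefits could be attached to

print(bundle_into_job(tasks))
# e.g. (['translation', 'rides', 'rides'], 19, 470)
```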

Third, it will be important to follow the evolution of centaur jobs. These may just be jobs where you look for someone who is really good at working with one set of neural networks or machine learning systems of a certain kind. These will, over time, become so complex as to almost exhibit “personalities” of different kinds – and you may temperamentally or otherwise be a better fit for some of these systems than for others. It is also not impossible that AI/ML systems will follow the individual to a certain degree – that you offer the labor market your joint centaur labor.

Fourth, jobs may be collective and collaborative and you could hire for collective skills that today you need to combine yourselves. As coordination costs sink you can suddenly build new kinds of “macro jobs” that need to be performed by several individuals AND systems. The 1:1 relationship between an individual and a job may well dissolve.

In the short term the future of work lies in the new jobs we need on an existing market; in the long term we should look more closely at the changing nature of both those jobs and those markets to understand where we might want to move things. The way things work now was also, once, an entirely new and novel way of thinking about things.

Digital legal persons? Fragments (Man / Machine XI and Identity / Privacy III)

The following are notes ahead of a panel discussion this afternoon, where we will discuss the need for a legal structure for digital persons in the wake of the general discussion of artificial intelligence. 

The idea of a digital assistant seems to suggest a world in which we will see new legal actors. These actors will buy, order, negotiate and represent us in different ways, and so will have a massive impact on the emerging legal landscape. How do we approach this in the best possible way?

One strawman suggestion would be to propose a new legal construct in addition to natural and legal persons, people and companies, and introduce a new legal category for digital persons. The construct could be used to answer questions like:

  • What actions can a digital person perform on behalf of another person and how is this defined in a structured way?
  • How is the responsibility of the digital person divided across the four Aristotelian causes? Hardware error, software error, coder error and objective error all seem to suggest different responsible actors behind the digital person. Hardware manufacturers would be responsible for malfunction, software producers for errors in the software, and coders for errors that could not be seen as falling within the scope of the software companies — finally, the one asking the assistant to perform a task would be responsible for clearly defining the task and objective (a toy sketch of such a mapping follows after this list).
  • In n-person interactions between digital persons with complex failures, who is then responsible?
  • Is there a preference for human / digital person responsibility?
  • What legal rights and legal capacities does a digital person have? This one may still seem to be in the realm of science fiction – but remember that by legal rights we can also mean the right to incur a debt on behalf of a non-identified actor, and we may well see digital persons that perform institutional tasks rather than just representative tasks.
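As a purely illustrative sketch of the mapping referenced in the list above (the actor names are placeholders, not a proposal):

```python
from enum import Enum

class FailureCause(Enum):
    HARDWARE = "hardware error"
    SOFTWARE = "software error"
    CODER = "coder error"
    OBJECTIVE = "objective error"

# Purely illustrative: who answers for the digital person's failure,
# depending on where the cause is located.
RESPONSIBLE_ACTOR = {
    FailureCause.HARDWARE: "hardware manufacturer",
    FailureCause.SOFTWARE: "software producer",
    FailureCause.CODER: "coder / integrator",
    FailureCause.OBJECTIVE: "the principal who defined the task and objective",
}

def responsible_for(cause: FailureCause) -> str:
    return RESPONSIBLE_ACTOR[cause]

print(responsible_for(FailureCause.OBJECTIVE))
# -> the principal who defined the task and objective
```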

There are multiple other questions here as well, that would need to be examined more closely. Now, there are also questions that can be raised about this idea, and that seem to complicate things somewhat. Here are a few of the questions that occur to me.

Dan Dennett has pointed out that one challenge with artificial intelligence is that we are building systems that have amazing competence without the corresponding comprehension. Is comprehension not a prerequisite for legal capacity and legal rights? Perhaps not, but we would do well to examine the nature of legal persons – of companies – when we dig deeper into the need for digital persons in law.

What is a company? It is a legal entity defined by a founding document of some kind, with a set of responsible natural persons clearly identified under the charter and operations of that company. In a sense that makes it a piece of software. A legal person, as identified today, is at least an information processing system with human elements. It has no comprehension as such (in fact legal persons are reminiscent of Searle's Chinese room in a sense: they can act intelligently without us being able to locate the intelligence anywhere specific in the organization). So – maybe we could say that the law already recognizes algorithmic persons, because that is exactly what a legal entity like a company is.

So, you can have legal rights and legal capacity based on a system of significant competence but without individual comprehension. The comprehension in the company is located in the specific institutions where the responsibility is located, e.g. the board. The company is held responsible for its actions through holding the board responsible, and the board is made up of natural persons – so maybe we could say that legal persons have derived legal rights, responsibilities and capacity?

Perhaps, but it is not crystal clear. In the US there is an evolving notion of corporate personhood that actually situates the rights and responsibilities within the corporation as such, and affords it constitutional protection. At the center of this debate in the last few years has been the issue of campaign finance, and Citizens United.

At this point it seems we could suggest that the easiest way to deal with the issue of digital persons would be to simply incorporate digital assistants and AIs as they take on more and more complex tasks. Doing this would also allow for existing insurance schemes to adapt and develop around digital persons, and would resolve many issues by “borrowing” from the received case law.

Questions around free expression for digital assistants would be resolved by reference to Citizens United, for example, in the US. Now, let's be clear: this would be tricky. In fact, it would arguably mean that incorporated bot networks had free speech rights, something that flies in the face of how we have viewed election integrity and fake news. But incorporation would also place duties on these digital persons in the shape of economic reporting, transparency and the possibility of legal dissolution if there were illegal behavior on the part of the digital persons in question. Turning digital persons into property would also allow for a market in experienced neural networks, in a way that could be intriguing to examine more closely.

An interesting task, here, would also be to examine how rights – such as privacy – would apply to these new corporations. Privacy, purely from an instrumental perspective, would be important for a digital person in order to conceal certain facts and patterns about itself and so retain the ability to act freely and negotiate. Is there, then, such a thing as digital privacy that is distinct from natural privacy?

This is, perhaps, a track worth exploring further – knowing full well the complexities it seems to imply (not least the proliferation of legal persons and what that would do to existing institutional frameworks).

Another, separate, track of investigation would be to look at a different concept – digital agency. Here we would not focus on the assistants as “persons”, but would instead admit that the person-analysis flows only from analogy and not from any closer examination. When we speak of artificial intelligence as a separate thing, as some entity, we are lazily following along with a series of unchallenged assumptions. The more realistic scenarios are all about augmented intelligence, and so about an extended penumbra of digital agency on top of our own human agency, and the real question then becomes how we integrate that extended agency into our analysis of contract law, tort law and criminal law.

There is – we would say – no such thing as a separate digital person, but just a person with augmented agency, and the better analysis would be to examine how that can be represented well in legal analysis. This is no small task, however, since a more and more networked agency dissolves the idea of legal personhood to a large degree, in a way that is philosophically interesting.

Much of the legal system has required the identification of a responsible individual. Where no such individual can be identified, no one has been held responsible, even if it is quite possible to say that there is a class of people, or a network, that carries distributed responsibility. We have, for classical liberal reasons, been hesitant to accept any criminal judgment based on joint responsibility in cases where the defendants identify each other as the real criminal. There are many different philosophical questions that need to be examined here – starting with the differences between augmented agency, digital agency, individual agency, networked agency, collective agency and similar concepts. Other issues revolve around whether we believe that we can pulverize legal rights and responsibility and say that someone is 0.5451 responsible for a bad economic decision. A distribution of responsibility that equates to the probability that you should have caught the error multiplied by the cost for you to do so would introduce an ultra-rational approach to legal responsibility that would, perhaps, be fairer from an economic standpoint, but more questionable in criminal cases.
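Taking that formula at face value (responsibility proportional to the probability that you should have caught the error, multiplied by your cost of doing so), a toy calculation with invented numbers shows what such pulverized responsibility shares would look like:

```python
# Invented numbers: for each actor, the probability that they should have
# caught the failure, and their cost of doing so. This simply takes the
# formula in the text at face value and normalizes the products into shares.
actors = {
    "board":     {"p_catch": 0.6, "cost": 500},
    "auditor":   {"p_catch": 0.8, "cost": 200},
    "assistant": {"p_catch": 0.9, "cost": 50},
}

raw = {name: a["p_catch"] * a["cost"] for name, a in actors.items()}
total = sum(raw.values())
shares = {name: round(value / total, 4) for name, value in raw.items()}

print(shares)
# -> {'board': 0.5941, 'auditor': 0.3168, 'assistant': 0.0891}
```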

And where an entire network has failed a young person subsequently caught for a crime – could one sentence the whole network? Are there cases where we are all somewhat responsible because of our actions or inactions? The dissolution of agency raises questions an order of magnitude more complex than simply introducing a new kind of person, but it is still an intriguing avenue to explore.

As the law of artificial intelligence evolves, it is also interesting to take its endpoint into account. If we assume that we will one day reach artificial general intelligence, then what we will most likely have done is to create something towards which we have what Wittgenstein called an attitude towards a soul. At that point, any such new entities likely are, in a legal sense, human if we interact with them as human. And then no legal change at all is needed. So what do we say about the intermediate stages and steps, and the need for a legal evolution that ultimately – we all recognize – will just bring us back to where we are today?

 

The free will to make slightly worse choices (Man / Machine XI)

In his chapter on intellectronics, his word for what most closely resembles artificial intelligence, Stanislaw Lem suggests an insidious way in which the machine could take over. It would not be, he says, because it wants to terrorize us, but more likely because it will try to be helpful. Lem develops the idea of the control problem, and the optimization problem, decades before they were rediscovered by Nick Bostrom and others, and he runs through the many different ways in which a benevolent machine might manipulate us in order to get better results for us.

This, however, is not the worst scenario. At the very end of the chapter, Lem suggests something much more interesting, and – frankly – hilarious. He says that another, more credible, version of the machines taking over would look like this: we develop machines that are simply better at making decisions for us than we would be at making those very same decisions ourselves.

A simple example: your personal assistant can help you book travel, and knowing your preferences, and being able to weigh them against those of the rest of the family, the assistant has always booked top-notch vacations for you. Now, you crave your personal freedom, so you book the trip yourself, and naturally – since you lack the combinatorial intelligence of an AI – the result is worse. You did not enjoy it as much, and the restaurants were not as spot on as they usually are. The bookstores you found were closed or not very interesting, and of the three museums you went to, only one really captured the whole family's interest.

But you made your own decision. You exercised your free will. But what happens, says Lem, when that free will is nothing but the free will to make decisions that are always slightly worse than the ones the machine would have made for you? When your autonomy always comes at the cost of less pleasure? That – surmises Lem – would be a tyranny as insidious as any control environment or Orwellian surveillance state.

A truly intriguing thought, is it not?

*

As we examine it more closely we may want to raise objections: we could say that making our own decisions, exercising our autonomy, in fact always means that we enjoy ourselves a little bit more, and that there is utility in the choice itself – so we will never end up with a benevolent dictator machine. But does that ring true? Is it not rather the case that a lot of people feel there is real utility in not having to choose at all, as long as they feel that they could have made a choice? Have we not seen sociological studies arguing that we live in a society that imposes so many choices on us that we all feel stressed by the plethora of alternatives?

What if the machine could let you know which breakfast cereal, out of the many hundreds on the supermarket shelf, will taste best to you and at the same time be healthy? Would it not be great not to have to choose?

Or is there value in self-sabotage that we are neglecting to take into account here? That thought – that there is value in making worse choices, not because we exercise our will, but because we do not like ourselves, and are happy to be unhappy – well, it seems a little stretched. For sure, there are people like this – but as a general rule I don’t find that argument credible.

Well, we could say, our preferences change so much that it is impossible for a machine to know what I will want tomorrow – so the risk is purely fictional. I am not so sure that is true. I would suggest we are much more patterned than we like to believe. We live, as Dr Ford in Westworld notes, in our little loops – just like his hosts. We are probably much more predictable than we would like to admit, in a large set of cases – although not all. It is unlikely, admittedly, that a machine would be better at making life choices around love, work and career – these are choices in which it is hard to establish a pattern (in fact, we arguably only establish those patterns in retrospect, when we tell ourselves autobiographical stories about our lives).

There is also the possibility that the misses would be so unpleasant that the hits would not matter. This is an interesting argument, and I think there is something to it. If you knew that your favorite candy tasted fantastic 9 times out of 10 and tasted like garbage every tenth time, without any chance of predicting when that would be, would you still eat it? Where would you draw the line? Every second piece of candy? 99 out of 100? There is such a thing as disappointment cost, and if the machine is right on the money in 999 out of 1,000 cases — is the miss such that we would stop using it, or prefer our own slightly worse choices? In the end – probably not.
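A back-of-the-envelope version of that argument, with invented utilities (a hit worth 1, a miss costing 3 in disappointment), shows why a machine that is right 999 times out of 1,000 is hard to beat even when misses are weighted heavily:

```python
def expected_value(hit_rate, hit_utility=1.0, disappointment_cost=3.0):
    """Expected utility per choice when misses hurt more than hits please."""
    return hit_rate * hit_utility - (1 - hit_rate) * disappointment_cost

# Invented utilities: a hit is worth 1, a miss costs 3 in disappointment.
for hit_rate in (0.9, 0.99, 0.999):
    print(hit_rate, round(expected_value(hit_rate), 3))
# 0.9   0.6
# 0.99  0.96
# 0.999 0.996
```

Even with misses weighted three times as heavily as hits, the expected value stays comfortably positive as the hit rate climbs, which is roughly the intuition above.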

*

The free will to make slightly worse choices. That is one way in which our definition of humanity could change fundamentally in a society with thinking machines.

Stanislaw Lem, Herbert Simon and artificial intelligence as broad social technology project (Man / Machine X)

Why do we develop artificial intelligence? Is it merely because of an almost Faustian curiosity? Is it because of an innate megalomania that suggests that we could, if we wanted to, become gods? The debate today is rife with examples of risks and dangers, but the argument for the development of this technology is curiously weak.

Some argue that it will help us with medicine and improve diagnostics, others dutifully remind us of the productivity gains that could be unleashed by deploying these technologies in the right way, and some even suggest that there is a defensive aspect to the development of AI — if we do not develop it, we will end up with an international imbalance in which the nations that have AI are akin to the nations that have nuclear capabilities: technologically superior and capable of dictating the fates of the countries that lag behind (some of this language is emerging in the ongoing geo-politicization of artificial intelligence between the US, Europe and China).

Things were different in the early days of AI, back in the 1960s, when the idea of artificial intelligence was more closely connected with the idea of a social and technical project, a project that was a distinct response to a set of challenges that seemed increasingly serious to writers of that age. Two very different examples support this observation: Stanislaw Lem and Herbert Simon.

Simon, in attacking the challenge of information overload – or information wealth, as he prefers to call it – suggests that the only way we will be able to deal with the complexity and rich information produced in the information age is to invest in artificial intelligence. The purpose, to him, is to help us learn faster – and if we take into account Simon's definition of learning, which is very close to classical Darwinian adaptation, we realize that for him the development of artificial intelligence was a way to ensure that we can continue to adapt to an information-rich environment.

Simon does not call this out, but it is easy to read between the lines and see what the alternative is: a growing inability to learn and adapt that generates increasing costs and vulnerabilities, the emergence of a truly brittle society that collapses under its own complexity.

Stanislaw Lem, the Polish science fiction author, suggests a very similar scenario (in his famously unread Summa Technologiae), but his is more general. We are, he argues, running out of scientists, and we need to ensure that we can continue to drive scientific progress, since the alternative is not stability but stagnation. He views the machine of progress as a homeostat that needs to be kept in constant operation in order to produce, in 30-year increments, a doubling of scientific insights and discoveries. Even if we force people to train as scientists, he argues, we will not be able to grow fast enough to meet the need for continued scientific progress.

Both Lem and Simon suggest the same thing: we are facing a shortage of cognition, and we need to develop artificial cognition or stagnate as a society.

*

The idea of a scarcity or shortage of cognition as a driver of artificial intelligence is much more fundamental than any of the ideas we quickly reviewed at the beginning. What we find here is an existential threat to mankind, and a need to build a technological response. The lines of thought, the structure of the argument, almost remind us of the environmental debate: we are exhausting a natural resource and we need innovation to help us continue to develop.

One could imagine an alternative: if we say that we are running out of cognition, we could argue that we need to ensure the analogue of energy efficiency. We need cognition efficiency. That view is not completely insane, and in a certain way it is what we are developing through stories, theories and methods in education. The connection with energy is also quite direct, since artificial intelligence will consume energy as it develops. A lot of research is currently being directed at the question of the energy consumption of computation. There is a boundary condition here: a society that builds out its cognition through technology does so at the cost of energy at some level, and the cognition/energy yield will become absolutely essential. There is also a more philosophical point in all of this: the question of renewable cognition, sustainable cognition.

Cognition cost is a central element in understanding Simon’s and Lem’s challenge.

*

But is it true? Are we running out of cognition? How would you measure that? And is the answer really a technological one? What about educating and discovering the talent of the billions of people who today live in poverty, or without any chance of an education to grow their cognitive abilities? If you have 100 dollars – what buys you the most cognition (all other moral issues aside): investing in development aid or in artificial intelligence?

*

Broad social technological projects are usually motivated by competition, not by environmental challenges. One reason – probably not the dominating one, but perhaps a contributing factor nonetheless – that climate change seems to inspire so little action in spite of the threat is this: there is no competition at all. The world is at stake, and so nothing is at stake relative to one another. The conclusion usually drawn from that observation is that we should all come together. What ends up happening is that we get weak engagement from all.

Strong social engagement in technological development – what are the examples? The race for nuclear weapons, the race for the moon. In one sense the early conception of the project to build artificial intelligence was as a global, non-competitive project. Has it slowly changed to become an analogue of the space race? The way China is now approaching the issue reminds some observers of the Manhattan Project. [1]

*

If we follow that analogy a bit further — what comes next? What is the equivalent of the moon landing for artificial intelligence? Surely not the Turing test – it has been passed multiple times in multiple versions, and as such has lost a lot of its salience as a test of progress. What, then, would be the alternative? Is there a new test?

One quickly realizes that it probably is not the emergence of an artificial general intelligence, since that seems to be decades away, and a questionable project at best. So what would be a moon landing moment? Curing cancer (too broad, there are many kinds of cancer)? Eliminating crime (a scary target for many reasons)? Sustained economic growth powered by both capital investment strategies and the deployment of AI in industry?

An aside: far too often we talk about moonshots without talking about what the equivalent of the moon landing would be. It is one thing to shoot for the moon, another to walk on it. Defined outcomes matter.

*

Summing up: we could argue that artificial intelligence was conceived of, early on, as a broad social project responding to a shortage of cognition. It then lost that narrative, and today it is becoming more and more enmeshed in a geopolitical, competitive narrative. That will likely increase the speed with which a narrow set of applications develops, but there is still no single moon landing moment associated with the field that stands out as the object of competition between the US, the EU and China. But maybe we should expect the construction of such a moment in medicine, military affairs or economics? So far, admittedly, it has been games that have provided the defining moments – tic-tac-toe, chess, go – but what is next? And if there is no single such moment, what does that mean for the social narrative, the speed of development and the evolution of the field?

 

[1] https://www.technologyreview.com/s/609038/chinas-ai-awakening/

On not knowing (Man / Machine III)

Humans are not great at answering questions with “I don't know”. They often seek to provide answers even when they know that they do not know. Still, one of the hallmarks of careful thinking is to acknowledge when we do not know something – and when we cannot say anything meaningful about an issue. This Socratic wisdom – knowing that we do not know – becomes a key challenge as we design systems with artificial intelligence components in them.

One way to deal with this is to say that it is actually easier with machines. They can give a numeric statement of their confidence in a clustering of data, for example, so why is this an issue at all? I think this argument misses something important about what it is that we are doing when we say that we do not know. We are not simply stating that a certain question has no answer above a confidence level; we can actually be saying several different things at once.

We can be saying…
…that we believe that the question is wrong, or that the concepts in the question are ill-thought through.
…that we have no data or too little data to form a conclusion, but that we believe more data will solve the problem.
…that there is no reliable data or methods of ascertaining if something is true or not.
…that we have not thought it worthwhile to find out or that we have not been able to find out within the allotted time.
…that we believe this is intrinsically unknowable.
…that this is knowledge we should not seek.

And these are just some examples of what we may be saying when we say “I don't know”. Stating this simple proposition is essentially a way to force a re-examination of the entire issue to find the roots of our ignorance. Saying that we do not know something is a profound epistemological statement, and hence a complex judgment – not a statement of confidence or probability.
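If a system were to say “I don't know” in this richer sense, it would have to carry something like a reason alongside the answer. A minimal, invented sketch of the taxonomy in the list above:

```python
from enum import Enum, auto

class Ignorance(Enum):
    ILL_POSED_QUESTION = auto()    # the question or its concepts are confused
    INSUFFICIENT_DATA = auto()     # more data could settle it
    NO_RELIABLE_METHOD = auto()    # no way of ascertaining whether it is true
    NOT_YET_INVESTIGATED = auto()  # not worth finding out, or no time to do so
    UNKNOWABLE = auto()            # intrinsically beyond knowledge
    FORBIDDEN = auto()             # knowledge we should not seek

def dont_know(kind: Ignorance) -> str:
    """An 'I don't know' that also says something about why."""
    return f"I don't know ({kind.name.lower().replace('_', ' ')})"

print(dont_know(Ignorance.INSUFFICIENT_DATA))
# -> I don't know (insufficient data)
```

The hard part, of course, is not the labels but the judgment involved in choosing between them, which is the point of the paragraph above.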

A friend and colleague suggested, in discussing this, that it actually makes for a nice version of the Turing test. When a computer answers a question by saying “I don't know” and does so embedded in the rich and complex language game of knowledge (as evidenced by its reasoning about it, I assume), it can be seen as intelligent in a human sense.
This socratic variation of the Turing test also shows the importance of the pattern of reasoning, since “I don’t know” is the easiest canned answer to code into a conversation engine.

*

There is a special category of problems related to saying “I don't know” that has to do with search satisfaction, and it raises interesting issues. When do you stop looking? In Jerome Groopman's excellent book How Doctors Think there is an interesting example involving radiologists. The key challenge for this group of professionals, Groopman notes, is when to stop looking. You scan an x-ray, find pneumonia and … done? What if there is something else? Other anomalies that you need to look for? When do you stop looking?

For a human being, stopping is a question of time limits imposed by biology, organization, workload and cost. The complex nature of that calculation allows for different stopping criteria over time, and you can go on to really think things through when the parameters change. Groopman's interview with a radiologist is especially interesting given that this is one field we believe can be automated to great benefit. The radiologist notes this looming risk of search satisfaction and essentially suggests that you use a check schema – trace out the same examination irrespective of what it is that you are looking for, and then summarize the results.
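The check schema translates almost directly into a procedure: scan every region in a fixed order, regardless of what the original question was, and only then summarize. A rough sketch, with invented region names and a stubbed detector standing in for the actual pattern recognition:

```python
# Invented region list; the detector is a stand-in for the radiologist's (or a
# model's) pattern recognition.
CHECK_SCHEMA = ["lungs", "heart", "bones", "soft_tissue", "upper_abdomen"]

def read_image(image, detect_anomaly):
    """Trace the whole schema, even if an early finding already 'answers' the question."""
    findings = {}
    for region in CHECK_SCHEMA:
        findings[region] = detect_anomaly(image, region)  # never stop at the first hit
    return findings

# Example with a stubbed detector that only flags the lungs:
print(read_image("chest_xray_001", lambda img, region: region == "lungs"))
# every region is reported, not just the pneumonia
```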

The radiologist, in this scenario, becomes a general searcher for anomalies that are then classified, rather than a specialized pattern recognition expert who seeks out examples of cancers – and in some cases the radiologist may only be able to identify the anomaly without understanding it. In one of the cases in the book the radiologist finds traces of something he does not understand – weak traces – that then prompt him to do a biopsy, based not on the picture itself, but on the absence of anything on a previous x-ray.

Context, generality, search satisfaction and gestalt analysis are all complex parts of when we know and do not know something. And our reactions to a lack of knowledge are interesting. The next step in not knowing is of course questioning.

A machine that answers “I don’t know” and then follows it up with a question is an interesting scenario — but how does it generate and choose between questions? There seems to be a lot to look at here – and question generation born out of a sense of ignorance is not a small part of intelligence either.

Intelligence, life, consciousness, soul (Man / Machine II)

There is another perspective here that we may want to discuss, and that is whether the dichotomy we are examining is a false, or at least less interesting, one. What if we find that both man and machine can belong to a broader class of things that we may want to call “alive”? Rather than ask if something is nature or technology we may want to just ask if it lives.

The question of what life is and when it began is of course not an easy one, but if we work with simple definitions we may want to agree that something lives if it has a metabolism and the ability to reproduce. That, then, could cover both machines and humans. Humans – obviously – machines less obviously, but still solidly.

When we discuss artificial intelligence, our focus is on the question of whether something can be said to have human-level intelligence. But what if we were to argue that nothing can be intelligent in a human way without also being alive? Without suffering under the same limitations and evolutionary pressures as we do?

Does this seem an arbitrary limitation? Perhaps, but it is no less arbitrary than the idea that intelligence is exhibited only through problem solving methods such as playing chess or go.

Can something be, I would ask, intelligent and not alive? In this simple question something fundamental is captured. And if we say yes – then would it not seem awkward to imagine a robot that is intelligent but essentially dead?

This conceptual scheme – life / intelligence – is afforded far too little attention. Max Tegmark's brilliant book Life 3.0 is of course an exception, but even here it is just assumed that life is life, even if it transcends the limitations (material and psychological) of life as we know it. Life is thought to be immanent in intelligence, and the rise of artificial intelligence is equated with the emergence of a new form of life.

But that is not a necessary relationship at all. One does not imply the other. And to make it more difficult, we could also examine the notoriously unclear concept of “consciousness” as a part of the exploration.

Can something be intelligent, dead and conscious? Can something be conscious and not alive? Intelligent, but not conscious? The challenge we face when we analyze the distinction between man and machine in this framework is that we are forced to think about the connection between life and intelligence in a new way, I think.

Man is alive, conscious and intelligent. Can a machine be all three and still be a machine?

We are scratching the surface here of a problem that Wittgenstein formulated much more clearly; in the second part of the Philosophical Investigations he asks if we can see a man as a machine, an automaton. It is a question with some pedigree in philosophy, since Descartes asked the same when he tried out his systematic doubt — looking out through his window he asked if he could doubt that the shapes he saw were fellow humans, and his answer was that indeed, they could be automatons wearing clothes, mechanical men and nothing else.

Wittgenstein notes that this is a strange concept, and that we must agree that we would not call a machine thinking unless we adopted an attitude towards this machine that is essentially an attitude as if towards a soul. Thinking is not a disembodied concept. It is something we say of human beings, and a machine that could think would need to be very much like a man, so much so that we would perhaps have an attitude towards it like an attitude towards a soul. Here is his observation (Philosophical Investigations, part II: iv):

“Suppose I say of a friend: ‘He is not an automaton’. — What information is conveyed by this, and to whom would it be information? To a human being who meets him in ordinary circumstances? What information could it give him? (At the very most that this man always behaves like a human being and not occasionally like a machine.)

‘I believe that he is not an automaton’,  just like that, so far makes no sense.

My attitude towards him is an attitude towards a soul. I am not of the opinion that he has a soul.” (My bold).

The German makes the point even clearer, I think: “Meine Einstellung zu ihm ist eine Einstellung zur Seele. Ich habe nicht die Meinung, dass er eine Seele hat.” So for completeness we add this to our conceptual scheme – intelligence / life / consciousness / soul – and ask: when does a machine become a man?

As we widen our conceptual net, the questions around artificial intelligence become more interesting. And what Wittgenstein also adds is that for the more complex language game, there are no proper tests. At some point our attitudes change.

Now, the risk here, as Dennett points out, is that this shift comes too fast.