Jottings II: Style of play, style of thought – human knowledge as a collection of local maxima

Pursuant to the last note, it is interesting to ask the following question: if human discovery of a game space like the one in go centers around what could be a local maximum, and computers can help us find other maxima and so play in an “alien” way — i.e. a way that is not anchored in human cognition and ultimately perhaps in our embodied, biological cognition — should we then not expect the same to be true for other bodies of thought?

Let’s say that a “body of thought” is the accumulated games in any specific game space, and that we agree that such human-anchored bodies of thought seem to be quietly governed by our human nature — is the same then true for philosophy? Anyone reading a history of philosophy is struck by the way concepts, ideas, arguments and methods of thinking remind them of different games in a vast game space. We don’t even need to deploy Wittgenstein’s notion of language games to see the fruitful application of that analogy across different domains of knowledge.

Can, then, machine learning help us discover “alien” bodies of thought in philosophy? Or is there a requirement that a game space can be reduced to a set of formalized rules? If so – imagine a machine programmed to play Hermann Hesse’s glass bead game: how would that work out?

In sum: have we underestimated the limiting effect our nature has on thinking across domains? Is there a real risk that what we hail as human knowledge and achievement is a set of local maxima?


Jottings I: What does style of play tell us?

If we examine the space of all possible chess games we should be able to map out all games actually played and look at how they are distributed in the game space (what are the dimensions of a game space, though?). It is possible that these games cluster in different ways, and we could then term these clusters “styles” of play. We at least have a naive understanding of what this would mean.

But what about the distribution of these clusters overall in a game space – are they equally distributed? Are they parts of mega clusters that describe “human play”, clusters that orient around some local optimum? And if so, do we now have tools to examine other mega clusters around other optima?
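How might one look for such clusters concretely? A minimal sketch, assuming entirely made-up feature vectors per game (aggression, material traded, game length) – real work would need a principled encoding of games and far better clustering than this toy k-means:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means over game feature vectors: each resulting cluster
    is a candidate 'style' of play."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each game to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Hypothetical games: (aggression, material traded, length in moves).
games = [(0.9, 0.8, 40), (0.85, 0.75, 35), (0.1, 0.2, 90), (0.15, 0.25, 95)]
clusters = kmeans(games, k=2)
```

On this toy data the two sharp, short games fall into one cluster and the two quiet, long games into the other; the open question in the note is whether, at scale, all such human clusters would themselves sit inside one mega cluster.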

Is there a connection to non-ergodicity here? A flawed image: game style as collections of non-ergodic paths (how could paths be non-ergodic?) in a broader space? No. But there is something here – a question about why we traverse probabilities in certain ways, why we cluster, the role of human nature and cognition. The science fiction theme of cognitive clusters so far apart that they cannot connect. Styles that are truly, and necessarily, alien.

How would we answer a question about how games are distributed in a game space? Surely this has been done. Strategies?

Authority currencies and rugged landscapes of truth (Fake News Notes #9)

One model for thinking about the issue of misinformation is to say that we are navigating a flat information desert, where there is no topology of truth available: no hills of fact, no valleys of misinformation. Our challenge is to figure out a good way to add a third dimension – or at least more than a single dimension – to the universe of news and information.

How would one do this? There are obvious ways like importing trust from an off-line brand or other off-line institution. When we read the New York Times on the web we do so under the reflected light of that venerable institution off-line and we expect government websites to carry some of the authority government agencies do – something that might even be signalled through the use of a specific top-level domain, like .gov.

But are there new ways? New strategies that we could adopt?

One tempting, if simplistic, model is to cryptographically sign pieces of information. Just as we can build a web of trust by signing each other’s keys, we may be able to “vouch” for a piece of information or a source of information. Such a model would be open to abuse, however: it is easy to imagine sources soliciting signatures based on political loyalty rather than factual content – so that is a challenge that would have to be dealt with.
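The vouching idea can be sketched in a few lines – everything here is hypothetical (toy identifiers, a bare hash standing in for a real public-key signature), and the sketch does nothing to solve the loyalty-abuse problem:

```python
import hashlib

def vouch(voucher_id, claim):
    """Toy stand-in for a signature over a claim; a real system would
    use public-key cryptography, not a bare hash."""
    return hashlib.sha256(f"{voucher_id}:{claim}".encode()).hexdigest()

def trusted_sources(vouch_graph, roots):
    """Transitive web of trust: starting from sources we trust directly,
    follow 'A vouches for B' edges to find everyone we trust indirectly."""
    seen, stack = set(roots), list(roots)
    while stack:
        for nxt in vouch_graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

graph = {"alice": ["bob"], "bob": ["carol"], "mallory": ["eve"]}
trusted = trusted_sources(graph, ["alice"])
```

Starting from alice, trust reaches bob and carol but never eve; whether loyalty cliques would simply vouch for each other en masse is exactly the abuse the note flags.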

Another version of this is to sign with a liability — meaning that a newspaper might sign a piece of news with a signature that essentially commits them to full liability for the piece should it be wrong or flawed from a publicist standpoint. This notion of declared accountability would be purely economic and might work to generate layers within our information space. If we wanted to, we could ask to see only pieces that were backed by a liability acceptance of, say, 10 million USD. The willingness to be sued or attacked over the content would then create a kind of topology of truth entirely derived from the levels of liability the publishing entity declared themselves willing to absorb.
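As a sketch, the liability filter itself is almost trivial – the field names are invented, and the hard parts (verifying the signature, enforcing the commitment) are assumed away:

```python
def visible_pieces(feed, min_liability_usd):
    """Show only pieces whose publisher has signed a declared liability
    at or above the reader's chosen threshold."""
    return [p for p in feed if p["liability_usd"] >= min_liability_usd]

feed = [
    {"title": "Budget passes committee", "liability_usd": 10_000_000},
    {"title": "Shocking miracle cure", "liability_usd": 0},
]
serious = visible_pieces(feed, 10_000_000)
```

A reader dialing the threshold up and down would be moving through the layers of the information space.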

A landscape entirely determined by liability has some obvious weaknesses – would it not be the same as saying that truth equals wealth? Well, not necessarily – it is quite possible to take a bet that will ruin you, which allows smaller publishers who are really sure of their information to take on liability beyond their actual financial means. In fact, the entire model looks a little like placing bets on pieces of news or information – declaring that we are betting that they are true and are happy to take on anyone who bets that we have published something fake. But still, the connection with money will make people uneasy – even though, frankly, classical publicist authority is underpinned by a financial element as well. In this new model that element could shift from legal entities to individuals.

That leads us on to another idea – the idea of an “authority currency”. We could imagine a world in which journalists accrued authority over time, by publishing pieces that were found to be accurate and fair reporting. The challenge, however, is the adjudication of the content. Who gets to say that a piece should generate authority currency for someone? If we say “everyone” we end up with the populist problem of political loyalty trumping factual accuracy, so we need another mechanism (although it is tempting to use Patreon payments as a strong signal in such a currency – if people are freely willing to pay for the content, it presumably has some qualities). If we say “platforms” we end up with the traditional question of why we should trust platforms. If we say “publishers” they end up marking their own homework. If we say “the state” we are slightly delusional. Can we, then, imagine a new kind of institution or mechanism that could do this?
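A toy ledger for such a currency, with the Patreon signal folded in – the weights and field names are pure invention, and the adjudication behind the `accurate` flag is precisely the unsolved part:

```python
def authority(history):
    """Accrue authority per piece: accurate reporting earns one unit plus
    a diminishing bonus for voluntary reader payments; a piece adjudicated
    inaccurate costs more than an accurate one earns."""
    score = 0.0
    for piece in history:
        if piece["accurate"]:
            score += 1.0 + 0.1 * piece["patron_usd"] ** 0.5
        else:
            score -= 3.0
    return score

history = [
    {"accurate": True, "patron_usd": 0},
    {"accurate": True, "patron_usd": 100},
    {"accurate": False, "patron_usd": 400},
]
```

The square root keeps wealthy audiences from simply buying authority, and the asymmetric penalty makes accuracy the dominant term – but who fills in the `accurate` field remains the open question above.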

I am not sure. What I do feel is that this challenge – of moving from the flat information deserts to the rugged landscapes of truth – highlights some key difficulties in the work on misinformation.

Innovation III: What is the price of a kilo of ocean plastic?

A thought experiment. What would happen if we crowdsourced a price – not just a sum – per kilo of ocean plastic retrieved? This would require solving a few interesting problems along the way but would not be impossible.

First, we would need to develop a means to crowdsource prices rather than sums. We would then require each contributor to pay a part of some price – per kilo, per hour, etc. – and to define an upper limit for their engagement. This would of course equate to a sum, but the point would be to highlight that the crowd is setting a price, not collecting a sum.
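The arithmetic of a crowdsourced price with per-contributor caps might look like this (the pledge format is invented for illustration):

```python
def effective_price(pledges, kilos):
    """Aggregate price per kilo at a given retrieved quantity.
    Each pledge is (price_per_kilo_usd, cap_usd): a contributor pays
    their price for every kilo until their cap is exhausted."""
    total = sum(min(price * kilos, cap) for price, cap in pledges)
    return total / kilos

pledges = [(2.0, 1000.0), (1.0, 50.0)]
assert effective_price(pledges, 10) == 3.0   # both caps still slack
assert effective_price(pledges, 100) == 2.5  # the small pledge is capped
```

The crowd sets a price, and the effective price per kilo gracefully declines as caps are hit – which is exactly the difference between pledging a price and collecting a sum.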

Second, we would need to be able to verify the goods or services bought. How would we, for example, determine if a kilo of ocean plastic really is from the ocean? This may require a few process innovations but surely is not impossible.

With these problems solved we can start asking interesting questions. At what price do we begin seeing progress? At what price might we solve the problem in its entirety?

What if we committed to paying 150, 1,500, or 15,000 USD per kilo of ocean plastic? At what point does this turn into a natural resource to be mined like any other? At what point do oil companies start filtering the ocean for plastic?

This suggests that we should also examine moving from innovation prizes to innovation prices.

Future of work – second take

When we speak about the future of work we often do this: we assume that there will be a labor market much like today, and that there will be jobs like the ones we have today, but that they will just be different jobs. It is as if we think we are moving from wanting bakers to wanting more doctors, and well, what should the bakers do? It is really hard to become a doctor!

There are other possible perspectives, however. One is to ask how both the market and the jobs will change under a new technological paradigm.

First, the markets should become much faster at detecting new tasks and the skills needed to perform them. Pattern scans across labor information markets make it possible to construct a kind of “skills radar” that will allow us to tailor and offer new skills much like you are recommended new movies when you use Netflix. Not just “others with your title are studying this” but also “others on a dynamic career trajectory are looking into this”. We should be able to build skill forecasts that are a lot like weather forecasts, and less like climate forecasts.
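A skills radar could start from something as crude as co-occurrence counting – the profiles below are invented, and a real system would weight by career trajectory rather than treat all similar workers alike:

```python
from collections import Counter

def skills_radar(profiles, my_skills, top=3):
    """Recommend skills held by workers who share at least one skill
    with me, excluding the skills I already have."""
    mine = set(my_skills)
    counts = Counter()
    for prof in profiles:
        if mine & prof:
            counts.update(prof - mine)
    return [skill for skill, _ in counts.most_common(top)]

# Hypothetical mined profiles: each set is one worker's skills.
profiles = [{"sql", "python"}, {"sql", "python"}, {"sql", "spark"}, {"excel", "vba"}]
recommendations = skills_radar(profiles, {"sql"}, top=2)
```

Here a worker who knows SQL is pointed at python before spark, because more similar workers hold it – the “weather forecast” is just frequency among one’s neighbours in skill space.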

Second, we should be able to distinguish surface skills from deep skills — by mining data about labor markets we should be able to understand which general cognitive skills underpin the faster-changing surface skills we need. Work has layers – using Excel is a surface skill, being able to abstract a problem into a lattice of mental models is a deep skill. Today we assume a lot about these deep skills – that they have to do with problem solving and mental models, for example – but we do not yet know.

Now, if we turn to the jobs themselves, a few things suggest themselves.

First, jobs today are bundles of tasks – and social status and insurance and so on. These bundles are wholly put together by a single employer who will guesstimate what kinds of skills they need and then hire for those assumed skills. This is not the only possible way to bundle tasks. You could imagine using ML to ask what skills are missing across the organisation and generate new jobs on the basis of those skills; there may well be hidden jobs – unexpected bundles of skills – that would improve your organisation immeasurably!

Second, the cost of assessing and bundling tasks is polarised. It is either wholly put on the employer, or – in the gig economy – on the individual worker. This seems arbitrary. Why shouldn’t we allow for new kinds of jobs that bundle tasks from Uber, Lyft and others and add a set of insurances to create a job? A platform solution for jobs would essentially allow you to generate jobs out of available tasks – and perhaps even do so dynamically, so that you can achieve greater stability in the flow of tasks, and hence more economic value out of the bundle than the individual tasks. This latter point is key to building social benefits and insurance solutions into the new “job”.
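Such a platform could be sketched as a greedy packer – the streams and numbers are invented, and real bundling would optimise for stability of the task flow, not just pay:

```python
def bundle_job(task_streams, target_hours):
    """Greedily bundle task streams (platform, weekly_hours, hourly_rate)
    until the target weekly hours are covered; the resulting bundle can
    then carry insurance and benefits as overhead on its total pay."""
    bundle, hours = [], 0
    for stream in sorted(task_streams, key=lambda s: -s[2]):  # best paid first
        if hours >= target_hours:
            break
        bundle.append(stream)
        hours += stream[1]
    pay = sum(h * rate for _, h, rate in bundle)
    return bundle, hours, pay

streams = [("uber", 20, 25.0), ("lyft", 15, 22.0), ("courier", 30, 15.0)]
bundle, hours, pay = bundle_job(streams, target_hours=35)
```

Here the hypothetical platform assembles a 35-hour “job” from the two best-paid streams; the surplus of the stable bundle over the loose individual tasks is what could fund the insurance layer.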

Third, it will be important to follow the evolution of centaur jobs. These may just be jobs where you look for someone who is really good at working with one set of neural networks or machine learning systems of a certain kind. These will, over time, become so complex as to almost exhibit “personalities” of different kinds – and you may temperamentally or otherwise be a better fit for some of these systems than others. It is also not impossible that AI/ML systems follow the individual to a certain degree – that you offer the labor market your joint centaur labor.

Fourth, jobs may be collective and collaborative and you could hire for collective skills that today you need to combine yourselves. As coordination costs sink you can suddenly build new kinds of “macro jobs” that need to be performed by several individuals AND systems. The 1:1 relationship between an individual and a job may well dissolve.

In the short term, the future of work lies in the new jobs we need on an existing market; in the long term, we should look more closely into the changing nature of both those jobs and those markets to understand where we might want to move things. The way things work now is itself part of what was once an entirely new and novel way to think about work.

Innovation and evolution I: Speciation rates and innovation rates

As we explore analogies between innovation and evolution, there are some concepts that present intriguing questions. The idea of a speciation rate is one of these concepts and it allows us to ask questions about the pace of innovation in new ways.

Are speciation rates constant or rugged? That is: should we expect bursts of innovation at certain points? Cambrian explosions seem different from purely vertical evolution, from single cell to multi-cell etcetera.

Are speciation rates related to extinction rates? Will increases in extinction rates trigger increases in speciation? If these are entirely decoupled in a system it will have states with high extinction / low speciation that can be existentially threatening if they persist for too long. And what is extinction in innovation?

Are there measures of technical diversity alongside biological diversity, and if so, what is it that they measure?

Food for thought.