Models of speech (Fake News Notes XI)

One thing that has been occupying me recently is the question of what speech is for. In some senses this is a heretical question – many would probably argue that speech is an inalienable right, and so it really does not have to be for anything at all. I find that unconvincing, especially in a reality where we need to balance speech against a number of other rights. I also find it helpful to think through different mental models of speech in order to really figure out how they come into conflict with each other.

Let me offer two examples of such models and the function each assigns to speech – they are, admittedly, simplified, but they tell an interesting story that can be used to understand and explore part of the pressure that free expression and speech is under right now.

The first model is one in which the primary purpose of speech is discovery. It is through speech we find and develop different ideas in everything from art to science and politics. The mental model I have in mind here is a model of “the marketplace of ideas”. Here the discovery and competition between ideas is the key function of speech.

The second model is one in which speech is the means through which we deliberate in a democracy. It is how we solve problems, rather than how we discover new ideas. The mental model I have in mind here is Habermas’ public sphere. Here speech is collaborative and seeks solutions from commonly agreed facts.

So we end up with, in a broad-strokes, coarse-grained kind of way, these two different functions: discovery and deliberation.

Now, as we turn to the Internet and ask how it changes things, we can see that it really increases discovery by an order of magnitude – but that it so far seems to have done little (outside of the IETF) to increase our ability to deliberate. If we now generalise a little bit and argue that Europeans think of speech as deliberative and Americans think of speech as discovery, we see a major fault line open up between those different perspectives.

This is not a new insight. One of the most interesting renditions of this is something we have touched on before – Simone Weil’s notion of two spheres of speech. In the first sphere anything would be allowed and absolutely no limitations imposed. In the second sphere you would be held accountable for the opinions you really intended to advance as your own. Weil argued that there was a clear, and meaningful, difference between what one says and what one means.

The challenge we have is that while technology has augmented our ability to say things, it has not augmented our ability to mean them. The information landscape is still surprisingly flat, and no particular rugged landscapes seem to be available for those who would welcome a difference between the two modes of speech. But that should not be impossible to overcome – in fact, one surprising option that this line of argument seems to suggest is that we should look to technical innovation to see how we can create much more rugged information landscapes, with clear distinctions between what you say and what you mean.

*

The other mental model that is interesting to examine more closely is the atomic model of speech, in which speech is considered mostly as a set of individual propositions or statements. The question of how to delineate the rights of speech then becomes a question of adjudicating different statements and determining which ones should be deemed legal and which ones should be deemed illegal – or, with a more fine-grained resolution, which ones should be legal, which ones should be removed out of moral concern and which ones can remain.

The atom of speech in this model is the statement or the individual piece of speech. This propositional model of speech has, historically, been the logical way to approach speech, but with the Internet there seems to be an alternative and complementary model of speech that is based on patterns of speech rather than individual pieces. We have seen this emerge as a core individual concern in a few cases, and then mostly to identify speakers who through a pattern of speech have ended up being undesirable on a platform or in a medium. But patterns of speech should concern us even more than they do today.

Historically we have only been concerned with patterns of speech when we have studied propaganda. Propaganda is a broad-based pattern of speech where all speech is controlled by a single actor, and the resulting pattern is deeply corrosive, even if individual pieces of speech may still be fine and legitimate. In propaganda we care about that which is being suppressed as well as what is being fabricated. And, in addition to that, we care about the dominating narratives that are being told, because they create the background against which all other statements are interpreted. Propaganda, Jacques Ellul teaches us, always comes from a single center.

But the net provides a challenge here. The Internet makes possible a weird kind of poly-centric propaganda that originates in many different places, and this in itself lends the pattern credibility and power. The most obvious example of this is the pattern of doubt that is increasingly eroding our common baseline of facts. This pattern is problematic because it contains no single statement that is violative, but it opens up our common shared baseline of facts to completely costless doubt. That doubt has become both cheap to produce and distribute is a key problem that precedes that of misinformation.

The models we find standing against each other here can be called the propositional model of speech and the pattern model of speech. Both ask hard questions, but in the second model the question is less about which statements should be judged to be legal or moral, and more about what effects we need to look out for in order to be able to understand the sum total effect of the way speech affects us.

Maybe one reason we focus on the first model is that it is simpler; it is easier to debate and discuss whether something should be taken down based on qualities inherent in that piece of content, than to debate whether there are patterns of speech that we need to worry about and counteract.

Now, again, coming back to the price of doubt I think we can say that the price of doubt is cheap, because we operate in an entirely flat information landscape where doubt is equally cheap for all statements. There is no one imposing a cost on you for doubting that we have been to the moon, that vaccines work or any other thing that used to be fairly well established.

You are not even censured by your peers for this behaviour anymore, because we have, oddly, come to think of doubt as a virtue in the guise of “openness”. Now, what I am saying is not that doubt is dangerous or wrong (cue the accusations about a medieval view of knowledge), but that when the pendulum swings the other way and everything is open to costless doubt, we lose something important that binds us together.

Patterns of speech – perhaps even weaker signals, such as tone of voice – remain interesting and open areas to look at more closely as we try to assess the functions of speech in society.

*

One last model is worth looking at more closely, and that is the model of speech as a monologic activity. When we speak about speech we rarely speak about listeners. There are several reasons to think carefully about the dialogic nature of speech, as doing so turns speech into an n-person game, rather than a monologic act of speaking.

As we do that we find that different pieces of speech may impact and benefit different groups differently. If we conceive of speech as an n-person game we can, for example, see that anti-terrorist researchers benefit from pieces of speech that let them study terrorist groups more closely, that vulnerable people who have been radicalised in different ways may suffer from exposure to that same piece of speech, and that politicians may gain in stature and importance from opposing that same piece of speech.

The pieces of speech we study become more like moves on a chess board with several different players. A certain speech act may threaten one player, weaken another and benefit a third. If we include counter speech in our model, we find that we are sketching out the early stages of speech as a game that can be played.
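The n-person framing above can be sketched as a toy payoff table. This is purely illustrative – the players, the payoff numbers and the idea of simply summing them are all invented assumptions, not a worked-out model:

```python
# Toy sketch of speech as an n-person game: one speech act assigns a
# different payoff to each affected player. All names and numbers here
# are invented placeholders.
from dataclasses import dataclass

@dataclass
class SpeechAct:
    description: str
    payoffs: dict  # player -> payoff if the piece of speech stays up

def net_effect(act):
    """Crude 'sum total effect': add up the payoffs across players."""
    return sum(act.payoffs.values())

act = SpeechAct(
    description="a piece of extremist speech",
    payoffs={
        "researcher": 1.0,    # gains material to study the group
        "vulnerable": -2.0,   # risks further radicalisation
        "politician": 0.5,    # gains stature by opposing it
    },
)

print(net_effect(act))  # -0.5
```

Counter speech would enter as further moves that shift these payoffs, which is what makes the chess-board analogy apt.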

This opens up interesting ideas: could we find an optimisation criterion for speech, perhaps build a joint game with recommendation algorithms, moderator functions and different pieces of consumer speech software, and play that game a million times to find strategies for moderating and recommending content that fulfil that criterion?

Now, then, what would that criterion be? If we wanted to let an AI play the Game of Speech – what would we ask that it optimise? How would we keep score? That is an intriguing question, and it is easy to see that there are different options: we could optimise for variance in the resulting speech, or for agreement, or for solving any specific class of problems, or for learning (as measured by accruing new topics and discussing new things?).

Speech as Game is an intriguing model that would take some fleshing out to be more than an interesting speculative thought experiment – but it could be worth a try.

The noble and necessary lie (Fake News Notes X)

Plato’s use of the idea of a noble lie was oppressive. He wanted to tell the people a tale of their origin that would encourage them to bend and bow to the idea of a stratified society, and he suggested that this would make everyone better off — and we clearly see that today for what it was: a defense for a class society that kept a small elite at the top, not through meritocracy or election, but through narrative.

But there is another way to read this notion of a foundational myth, and that is to read it as that “common baseline of facts” that everyone is now calling for. This “common baseline” is often left unexplained and taken for granted, but the reality is that with the amount of information and criticism and skepticism that we have today, such a baseline will need to be based on a “suspension of disbelief”, as William Davies suggests:

Public life has become like a play whose audience is unwilling to suspend disbelief. Any utterance by a public figure can be unpicked in search of its ulterior motive. As cynicism grows, even judges, the supposedly neutral upholders of the law, are publicly accused of personal bias. Once doubt descends on public life, people become increasingly dependent on their own experiences and their own beliefs about how the world really works. One effect of this is that facts no longer seem to matter (the phenomenon misleadingly dubbed “post-truth”). But the crisis of democracy and of truth are one and the same: individuals are increasingly suspicious of the “official” stories they are being told, and expect to witness things for themselves.

[…] But our relationship to information and news is now entirely different: it has become an active and critical one, that is deeply suspicious of the official line. Nowadays, everyone is engaged in spotting and rebutting propaganda of one kind or another, curating our news feeds, attacking the framing of the other side and consciously resisting manipulation. In some ways, we have become too concerned with truth, to the point where we can no longer agree on it. The very institutions that might once have brought controversies to an end are under constant fire for their compromises and biases.

The challenge here is this: if we are to arrive at a common baseline of facts, we have to accept that there will be things treated as facts that we will come to doubt and then to disregard as they turn out to be false. The value we get for that is that we will be able to start thinking together again, we will be able to resurrect the idea of a common sense.

So, maybe the problem underlying misinformation and disinformation is not that we face intentionally false information, but that we have indulged too much in a skepticism fueled by a wealth of information and a poverty of attention? We lack a mechanism for agreeing on what we will treat as true, rather than how we will agree on what is – in any more ontological sense – true.

The distinction between a common baseline of facts and a noble lie is less clear in that perspective. A worrying idea, well expressed in Mr Davies’ essay. But the conclusion is ultimately provocative, and perhaps disappointing:

The financial obstacles confronting critical, independent, investigative media are significant. If the Johnson administration takes a more sharply populist turn, the political obstacles could increase, too – Channel 4 is frequently held up as an enemy of Brexit, for example. But let us be clear that an independent, professional media is what we need to defend at the present moment, and abandon the misleading and destructive idea that – thanks to a combination of ubiquitous data capture and personal passions – the truth can be grasped directly, without anyone needing to report it.

But why would the people cede the mechanism of producing truth back to professional media? What is the incentive? Where the common baseline of facts or the noble lie will sit in the future is far from clear, but it seems unlikely that it will return to an institution that has once lost grasp of it so fully. If the truth cannot be grasped directly – if that indeed is socially dangerous and destructive – we need to think carefully about who we allow the power to curate that new noble lie (and no, it should probably not be corporations). If we do not believe that the common baseline is needed anymore, we need new ways to approach collective decision making — an intriguingly difficult task.

 

Authority currencies and rugged landscapes of truth (Fake News Notes #9)

One model for thinking about the issue of misinformation is to say that we are navigating a flat information desert, where there is no topology of truth available. No hills of fact, no valleys of misinformation. Our challenge is to figure out a good way to add a third dimension, or at least more than a single dimension, to the universe of news, or information.

How would one do this? There are obvious ways like importing trust from an off-line brand or other off-line institution. When we read the New York Times on the web we do so under the reflected light of that venerable institution off-line and we expect government websites to carry some of the authority government agencies do – something that might even be signalled through the use of a specific top-level domain, like .gov.

But are there new ways? New strategies that we could adopt?

One tempting, if simplistic, model is to cryptographically sign pieces of information. Just as we can build a web of trust by signing each other’s keys, we may be able to “vouch” for a piece of information or a source of information. Such a model would be open to abuse, however: it is easy to imagine sources soliciting signatures based on political loyalty rather than factual content – so that seems to be a challenge that would have to be dealt with.
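A minimal sketch of the vouching idea, with a content hash standing in for a real public-key signature (a production system would use something like Ed25519; the names and the API here are invented for illustration):

```python
# Sketch: "vouching" for a piece of information, web-of-trust style.
# A SHA-256 content hash stands in for a real cryptographic signature.
import hashlib

vouches = {}  # content id -> set of vouchers

def content_id(text):
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def vouch(voucher, text):
    """Record that `voucher` stands behind this exact piece of text."""
    vouches.setdefault(content_id(text), set()).add(voucher)

def vouch_count(text):
    return len(vouches.get(content_id(text), set()))

claim = "The report was published in March."
vouch("alice", claim)
vouch("bob", claim)
print(vouch_count(claim))  # 2
```

The abuse problem is visible even in the sketch: nothing here records *why* a voucher signed, which is exactly the loyalty-versus-accuracy weakness noted above.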

Another version of this is to sign with a liability — meaning that a newspaper might sign a piece of news with a signature that essentially commits them to full liability for the piece should it be wrong or flawed from a publicist standpoint. This notion of declared accountability would be purely economic and might work to generate layers within our information space. If we wanted to, we could ask to see only pieces that were backed up by a liability acceptance of, say, 10 million USD. The willingness to be sued or attacked over the content would then create a kind of topology of truth entirely derived from the levels of liability the publishing entity declared themselves willing to absorb.
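The liability idea can be sketched as a simple filter: each piece declares the liability its publisher accepts, and a reader sets a threshold. Everything here – the names, the amounts, the filtering rule – is an illustrative assumption:

```python
# Sketch: filtering a feed by the liability a publisher has declared
# itself willing to absorb for each piece.
from dataclasses import dataclass

@dataclass
class Piece:
    headline: str
    liability_usd: int  # declared liability backing this piece

def visible(pieces, threshold_usd):
    """Show only pieces backed by at least `threshold_usd` of liability."""
    return [p for p in pieces if p.liability_usd >= threshold_usd]

feed = [
    Piece("Anonymous rumour", 0),
    Piece("Local paper scoop", 1_000_000),
    Piece("Wire service report", 10_000_000),
]

for p in visible(feed, 10_000_000):
    print(p.headline)  # Wire service report
```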

A landscape entirely determined by liability has some obvious weaknesses – would it not be the same as saying that truth equals wealth? Well, not necessarily – it is quite possible to take a bet that will ruin you, hence allowing smaller publishers who are really sure of their information to take on liability beyond their actual financial means. In fact, the entire model looks a little like placing bets on pieces of news or information – declaring that we are betting that it is true and are happy to take on anyone who bets that we have published something that is fake. But still, the connection with money will make people uneasy – even though, frankly, classical publicist authority is underpinned by a financial element as well. In this new model that could switch from legal entities to individuals.

That leads us on to another idea – the idea of an “authority currency”. We could imagine a world in which journalists accrued authority over time, by publishing pieces that were found to be accurate and fair reporting. The challenge, however, is the adjudication of the content. Who gets to say that a piece should generate authority currency for someone? If we say “everyone” we end up with the populist problem of political loyalty trumping factual accuracy, so we need another mechanism (although it is tempting to use Patreon payments as a strong signal in such a currency – if people are willing to pay for the content freely it has to have had some qualities). If we say “platforms” we end up with the traditional question of why we should trust platforms. If we say “publishers” they end up marking their own homework. If we say “the state” we are slightly delusional. Can we, then, imagine a new kind of institution or mechanism that could do this?
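As a toy sketch of such a currency, a score could accrue per adjudicated piece. The asymmetric weights below are arbitrary assumptions, and the hard problem identified above – who adjudicates – is exactly what the sketch leaves out:

```python
# Sketch: an "authority currency" that accrues as adjudicated pieces
# are judged accurate, and drains faster when they are not.
from collections import defaultdict

authority = defaultdict(float)  # journalist -> accrued authority

def adjudicate(journalist, accurate):
    # Asymmetric by design: authority should be slow to earn and
    # quick to lose. The weights are placeholders.
    authority[journalist] += 1.0 if accurate else -3.0

for verdict in (True, True, True, False):
    adjudicate("reporter_a", verdict)

print(authority["reporter_a"])  # 0.0
```

Three accurate pieces are wiped out by a single inaccurate one – one possible way to make authority expensive, though the right ratio is an open question.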

I am not sure. What I do feel is that this challenge – of moving from the flat information deserts to the rugged landscapes of truth – highlights some key difficulties in the work on misinformation.

Weil’s paradox: intention and speech (Fake News Notes #8)

Simone Weil, in her curious book Need for Roots, notes the following on the necessity for freedom of opinion:

[…] it would be desirable to create an absolutely free reserve in the field of publication, but in such a way as for it to be understood that the works found therein did not pledge their authors in any way and contained no direct advice for readers. There it would be possible to find, set out in their full force, all the arguments in favour of bad causes. It would be an excellent and salutary thing for them to be so displayed. Anybody could there sing the praises of what he most condemns. It would be publicly recognized that the object of such works was not to define their authors’ attitudes vis-à-vis the problems of life, but to contribute, by preliminary researches, towards a complete and correct tabulation of data concerning each problem. The law would see to it that their publication did not involve any risk of whatever kind for the author.

Simone Weil, Need for Roots, p. 22

She is imagining here a sphere where anything can be said, any view expressed and explored, all data examined — and it is interesting that she mentions data, because she is aware that a part of the challenge is not just what is said, but what data is collected and shared on social problems. But she also recognizes that such a complete free space needs to be distinguished from the public sphere of persuasion and debate:

On the other hand, publications destined to influence what is called opinion, that is to say, in effect, the conduct of life, constitute acts and ought to be subjected to the same restrictions as are all acts. In other words, they should not cause unlawful harm of any kind to any human being, and above all, should never contain any denial, explicit or implicit, of the eternal obligations towards the human being, once these obligations have been solemnly recognized by law.

Simone Weil, Need for Roots, ibid.

This category – “publications destined to influence what is called opinion”, she wants to treat differently. Here she wants the full machinery of not just law, but also morals, to apply. Then she notes, wryly one thinks, that this will present some legal challenges:

The distinction between the two fields, the one which is outside action and the one which forms part of action, is impossible to express on paper in juridical terminology. But that doesn’t prevent it from being a perfectly clear one.

Simone Weil, Need For Roots, ibid.

This captures, in a way, the challenge that faces platforms today. The inability to express this legally is acutely felt by most who study the area, and Weil’s articulation of the two competing interests – free thought and human responsibility – is clean and clear.

Now, the question is: can we find any other way to express this than in law? Are there technologies that could help us here? We could imagine several models.

One would be to develop a domain for the public sphere, for speech that intends to influence: to develop an “on the record” mode for the flat information surfaces of the web. You could do this trivially by signing your statements in different ways, and statements could be signed by several different people as well – the ability to support a statement in a personal way is implicit in the often-cited disclaimers on Twitter, where we are always told that RT does not equal endorsement. But the really interesting question is how we do endorse something, and whether we can endorse statements and beliefs with different force.

Imagine a web where we could choose not just to publish, but publish irrevocably (this is for sure connected with discussions around blockchain) and publish with the strength of not just one individual, but several. Imagine the idea that we could replicate editorial accountability not just in law, but by availing those that seek it of a mode of publishing, a technological way of asserting their accountability. That would allow us to take Weil’s clear distinction and turn it into a real one.
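One way to make “endorse with different force” concrete is to let each signer attach a weight to a statement, from 0 (mere relay, the “RT is not endorsement” case) to 1 (advancing it as one’s own, in Weil’s second sphere). The names and the additive aggregation rule below are invented for illustration:

```python
# Sketch: endorsing statements with different force.
endorsements = {}  # statement -> {signer: force}

def endorse(statement, signer, force):
    """Attach a signer's endorsement, with force in [0, 1]."""
    if not 0.0 <= force <= 1.0:
        raise ValueError("force must be between 0 and 1")
    endorsements.setdefault(statement, {})[signer] = force

def total_force(statement):
    """Aggregate backing: the sum of all signers' declared force."""
    return sum(endorsements.get(statement, {}).values())

endorse("The vote is on Thursday.", "alice", 1.0)  # advances it as her own
endorse("The vote is on Thursday.", "bob", 0.2)    # merely passing it on
print(total_force("The vote is on Thursday."))
```

Accountability could then attach in proportion to declared force – which is one technological reading of Weil’s distinction between saying and meaning.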

It would require, of course, that we accept that there is a lot of “speech” – if we use that as the generic term for the first category of opinion that Weil explores – we disagree with. But we would be able to hold those that utter “opinions” – the second category, speech intended to influence and change minds – accountable.

One solution to the issue of misinformation or disagreeable information or speech is to add dimensionality to the flat information surfaces we are interacting with today.

Real and unreal news (Notes on attention, fake news and noise #7)

What is the opposite of fake news? Is it real news? What, then, would that mean? It seems important to ask that question, since our fight against fake news also needs to be a fight _for_ something. But this quickly becomes an uncomfortable discussion, as evidenced by how people attack the question. When we discuss what the opposite of fake news is we often end up defending facts – and we inevitably end up quoting Senator Moynihan, smugly saying that everyone has a right to their opinions, but not to their facts. This is naturally right, but it ducks the key question of what a fact is, and whether it can exist on its own.

Let’s offer an alternative view that is more problematic. In this view we argue that facts can only exist in relationship to each other. They are intrinsically connected in a web of knowledge and probability, and this web exists in a set of ontological premises that we call reality. Fake news – we could then argue – can exist only because we have lost our sense of a shared reality.

We hint at this when we speak of “a baseline of facts” or similar phrases (this phrase was how Obama referred to the challenge when interviewed by David Letterman recently), but we stop shy of admitting that we ultimately are caught up in a discussion about fractured reality. Our inability to share a reality creates the cracks, the fissures and fragments in which truth disappears.

This view has more troubling implications, and should immediately lead us to also question the term “fake news”, since the implication is clear – something can only be fake if there exists a shared reality against which we can measure it. The reason the term “fake news” is almost universally shunned by experts and people analyzing the issue is exactly this: it is used by different people to attack what they don’t like. We see leaders labeling news sources as “fake news” as a way to demarcate against a way to render the world that they reject. So “fake” comes to mean “wrong”.

Here is a key to the challenge we are facing. If we see this clearly – that what we are struggling with is not fake vs real news, but right vs wrong news – we also realize that there are no good solutions for the general problem of what is happening with our public discourse today. What we can find are narrow solutions for specific problems that are well-described (such as actions against deliberately misleading information from parties that deliberately misrepresent themselves), but the general challenge is quite different and much more troubling.

We suffer from a lack of shared reality.

This is interesting from a research standpoint, because it forces us to ask how a society constitutes a reality, and how it loses it. Such an investigation would need to touch on things like reality TV and the commodification of journalism (à la Adorno’s view of music – it seems clear that journalism has lost its liturgy). One would need to dig into and understand how truth has splintered, and think hard about how our coherence theories of truth allow for this splintering.

It is worthwhile to pause on that point a little: when we understand the truth of a proposition to be its coherence with a system of other propositions, and not correspondence with an underlying, ontologically more fundamental level, we open up for several different truths, as long as you can imagine a set of coherent systems of propositions built on a few basic propositions – the baseline. What we have discovered in the information society is that the natural size of this necessary baseline is much smaller than we thought. The set of propositions we need to create alternate realities that do not seem entirely insane is much smaller than we may have believed. And the cost of creating an alternate reality is falling as you get more and more access to information, as well as to the creativity of others engaged in the same enterprise.

There is a risk that we underestimate the collaborative nature of the alternative realities that are crafted around us, the way they are the result of a collective creative effort. Just as we have seen the rise of massive open online courses in education, we have seen the rise of what we could call massive open online conspiracy theories. They are powered by, and partly created in, the same way — with massive open online role-playing games in a nice and interesting middle position. In a sense the unleashed creativity of our collaborative storytelling is what is fracturing reality – our narrative capacity has exploded over the last decades.

So back to our question. The dichotomy we are looking at here is not one between fake and real news, or right and wrong news (although we do treat it that way sometimes). It is in a sense a difference between real and unreal news, but with a plurality of unrealities that we struggle to tell apart. There is no Archimedean point that allows us to lift the real from the fake, no bedrock foundation, as reality itself has been slowly disassembled over the last couple of decades.

A much more difficult question, then, is whether we believe that we want a shared reality, or whether we ever had one. It is a recurring theme in songs, literature and poetry – the shaky nature of our reality – and the courage needed to face it. In the remarkable song “Right Where It Belongs” this is well expressed by Nine Inch Nails (and remarkably rendered in this remix (we remix reality all the time)):

See the animal in his cage that you built
Are you sure what side you’re on?
Better not look him too closely in the eye
Are you sure what side of the glass you are on?
See the safety of the life you have built
Everything where it belongs
Feel the hollowness inside of your heart
And it’s all right where it belongs

What if everything around you
Isn’t quite as it seems?
What if all the world you think you know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself, find yourself afraid to see?

What if all the world’s inside of your head?
Just creations of your own
Your devils and your gods all the living and the dead
And you really oughta know
You can live in this illusion
You can choose to believe
You keep looking but you can’t find the ones
Are you hiding in the trees?

What if everything around you
Isn’t quite as it seems?
What if all the world you used to know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself, find yourself afraid to see?

The central insight in this is one that underlies all of our discussions around information, propaganda, disinformation and misinformation, and that is the role of our identity. We exist – as facts – within the realities we dare to accept and ultimately our flight into alternate realities and shadow worlds is an expression of our relationship to ourselves.

Hannah Arendt on politics and truth – and fake news? (Notes on attention, fake news and noise #6)

Any analysis of fake news would be incomplete without a reading of Hannah Arendt’s magnificent essay Truth and Politics from 1967. Arendt, in this essay, examines carefully the relationship between truth and politics, and makes a few observations that remind us of why the issue of “fake news” is neither new nor uniquely digital. It is but an aspect of that greater challenge of how we reconcile truth and politics.

Arendt not only anchors the entire discussion solidly in a broader context; she reminds us that this is a tension that has been with civilization since Socrates. “Fake news” is nothing else than yet another challenge that meets us in the gap between dialectic and rhetoric, and Socrates would be surprised and dismayed to find us thinking we had discovered a new phenomenon. The issue of truth in politics is one that has always been at the heart of our civilization and our democratic tradition.
Arendt notes this almost brutally in the beginning of her essay:

“No one has ever doubted that truth and politics are on rather bad terms with each other, and no one, as far as I know, has ever counted truthfulness among the political virtues. Lies have always been regarded as necessary and justifiable tools not only of the politician’s and the demagogue’s but also of the stateman’s trade.” (p 223)

It is interesting to think about how we read Arendt here. Today, as politics is under attack and we suffer from an increase in rhetoric and a decline in dialogue, we almost immediately become defensive. We want to say that we should not deride politics, that politics deserves respect, and that we should be careful to ensure that we do not further increase people’s loss of faith in the political system of democracy — and all of this is both correct and deeply troubling at the same time. It shows us that our faith in the robustness of the system has suffered so many blows now that we shy away from the clear-eyed realization that politics is rhetoric first and dialogue only second (and bad politics never gets to dialogue at all).

Arendt does not mean to insult our democracy; she merely recognizes a philosophical analysis that has remained constant over time. She quotes Hobbes as saying that if power depended on the sum of the angles in a triangle not being equal to the sum of two angles in a rectangle, then books of geometry would be burned by some in the streets. This is what politics is – power – and we should not expect anything else. That is why the education of our politicians is so important, and their character key. Socrates’ sense of urgency when he tries to educate Alcibiades is key, and any reader of the dialogues would be aware of the price of Socrates’ failure in what Alcibiades became.

Arendt also makes an interesting point on the difference between what she calls rational truths – the mathematical, the scientific – and factual ones, and points out that the latter are “much more vulnerable”. (p 227) And factual truth is the stuff politics is made of, she notes.

“Dominion (to speak Hobbes’ language) when it attacks rational truth oversteps, as it were, its domain while it gives battle on its own ground when it falsifies or lies away facts.” (p 227)

Facts are fair game in politics, and always have been. Arendt then makes an observation that is key to understanding our challenges and is worth quoting in full:

“The hallmark of factual truth is that its opposite is neither error nor illusion nor opinion, not one of which reflects upon personal truthfulness, but the deliberate falsehood, or lie. Error, of course, is possible, and even common, with respect to factual truth, in which case this kind of truth is in no way different from scientific or rational truth. But the point is that with respect to facts there exists another alternative, and this alternative, the deliberate falsehood, does not belong to the same species as propositions that, whether right or mistaken, intend no more than to say what is, or how something that is appears to me. A factual statement – Germany invaded Belgium in August 1914 – acquires political implications only by being put in an interpretative context. But the opposite proposition, which Clemenceau, still unacquainted with the art of rewriting history, thought absurd, needs no context to be of political significance. It is clearly an attempt to change the record, and as such it is a form of _action_. The same is true when the liar, lacking the power to make his falsehood stick, does not insist on the gospel truth of his statement but pretends that this is his ‘opinion’ to which he claims his constitutional right. This is frequently done by subversive groups, and in a politically immature public the resulting confusion can be considerable. The blurring of the dividing line between factual truth and opinion belongs among the many forms that lying can assume, all of which are forms of action.
While the liar is a man of action, the truthteller, whether he tells a rational or factual truth, most emphatically is not.” (p 245)

Arendt is offering an analysis of our dilemma in as clear a way as can be. Lying is an action; telling the truth is most emphatically not, and the reduction of a falsehood to an opinion creates considerable confusion, to say the least. The insight that telling the truth is less powerful than lying, less of an action, is potentially devastating – liars have something at stake, and truth tellers sometimes make the mistake of thinking that relaying the truth is in itself enough.

But Arendt also offers a solution and hope — and it is evident even in this rather grim quote: she speaks of a politically immature public, and as she closes the essay she takes great pains to say that these lies, these falsehoods, in no way detract from the value of political action. In fact, she says that politics is a great endeavor and one that is worthy of our time, effort and commitment – but ultimately we also need to recognize that it is limited by truth. Our respect – as citizens – for truth is what preserves, she says, the integrity of the political realm.

As in the platonic dialogues, as in Hobbes, as everywhere in history – truth is a matter of character. Our own character, honed in dialogue and made resistant to the worst forms of rhetoric. This is not new – and it is not easy, and cannot be solved with a technical fix.

Link: https://idanlandau.files.wordpress.com/2014/12/arendt-truth-and-politics.pdf

Notes on attention, fake news and noise #5: Are We Victims of Algorithms? On Akrasia and Technology.

Are we victims of algorithms? When we click on clickbait and low-quality content – how much of the responsibility for that click is on us, and how much on the provider of the content? The way we answer that question may be connected to an ancient debate in philosophy about akrasia, or weakness of will. Why, philosophy asks, do we do things that are not good for us?

Plato’s Socrates has a rather unforgiving answer: we do those things that are not good for us because we lack knowledge. Knowledge, he argues, is virtue. If we just know what is right, we will act in the right way. When we click the low-quality entertainment content and waste our time, it is because we do not know better. Clearly, then, the answer from a Platonic standpoint is to ensure that we enlighten each other. We need a version of digital literacy that allows us to separate the wheat from the chaff, that helps us know better.

In fact, arguably, weakness of will did not exist for Socrates (hence why he is so forbidding, perhaps) but was merely ignorance. Once you know, you will act right.

Aristotle disagreed: his view was that we may hold opinions that are short-term and wrong and be affected by them, and hence do things that are not good for us. This view, later developed and adumbrated by Davidson, suggests that decisions are often made without the agent considering all the things that may have a bearing on a choice. Davidson’s definition is something like this: if someone has two choices, a and b, and does b while knowing that, all things considered, a would be better than b, that is akrasia (not a quote, but a rendering of Davidson). Akrasia then becomes not considering the full set of facts that should inform the choice.
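Rendered a little more schematically (this is my own shorthand, not Davidson’s notation), the definition might be put like this:

```latex
% Akratic action, roughly following Davidson's "all things considered" schema:
% the agent judges a better than b on the totality of reasons, yet freely does b.
\[
\underbrace{J\big(a \succ b \mid \text{all things considered}\big)}_{\text{the agent's best judgment}}
\;\wedge\;
\underbrace{\mathrm{Does}(b)}_{\text{intentional, free action}}
\;\Longrightarrow\; \text{akrasia}
\]
```

The point the schema makes visible is that both conjuncts hold at once: the akratic agent is not ignorant of the better option, merely unmoved by the judgment.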

Having one more beer without considering the previous ones, or having one more cookie without thinking about the plate now being empty.

The kind of akrasia we see in the technological space may be more like that: trading short-term pleasure against long-term gain. A classic Kahneman/Tversky challenge. How do we govern ourselves?
So, how do we solve that? Can the fight against akrasia be outsourced? Designed into technology? It seems trivially true that it can, and this is exactly what tools like Freedom and Stayfocusd actually try to do (there are many other versions, of course). These apps block off sites or the Internet for a set amount of time, and force you back to focus on what you were doing. They eliminate the distraction of the web – but they are not clearly helping you consume high quality content.

That is a distinction worth exploring.

Could we make a distinction here between access and consumption? We can help fight akrasia at the access level, but it is harder to do when it comes to consumption. Like not buying chocolate so there is none in your fridge, versus simply refraining from eating the chocolate that is in the fridge? It seems easier to do the first – reduce access – than to control consumption. One is a question of availability, the other of governance. A discrete versus a continuous temptation, perhaps.

It seems easy to fight discrete akrasia, but sorting out continuous akrasia seems much harder.

*

Is it desirable to try? Assume that you could download a technology that would only show you high quality content on the web. Would you then install that? A splinternet provider that offers “qualitative Internet only – no click bait or distractions”. It would not have to be permanent, you could set hours for distraction, or allocate hours to your kids. Is that an interesting product?

The first question you would ask would probably be why you should trust this particular curator. Why should you allow someone else to determine what is high quality? Well, assume that this challenge can be met by outsourcing it to a crowd, where you self-identify values and ideas of quality and you are matched with others of the same view. Assume also, while we are at it, that you can do this without the resulting filter bubble problem, for now. Would you – even under those assumptions – trust the system?

The second question would be how such a system can cope with a dynamic in which the rate of information production keeps doubling. Collective curation models need to deal with the challenge of marking an item as ok or not ok – but the largest category will be a third one: not rated. A bet on collective curation is a bet that the value of the uncurated will always be less than the cost of possible distraction. That is an unclear bet, it seems to me.
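A toy model makes the point concrete (the numbers here are made up purely for illustration): if production doubles each period while the community can only rate a fixed number of items per period, the unrated category quickly swallows everything.

```python
# Toy model: content production doubles each period, while the
# curating community can only rate a fixed number of items per period.
def unrated_fraction(periods: int, initial_items: int = 1000,
                     rating_capacity: int = 2000) -> float:
    """Fraction of all items that remain unrated after `periods` periods."""
    total = 0      # items produced so far
    rated = 0      # items the community has managed to rate
    produced = initial_items
    for _ in range(periods):
        total += produced
        rated += min(rating_capacity, total - rated)  # rate what capacity allows
        produced *= 2  # production doubles each period
    return (total - rated) / total

# Early on the community keeps up; with exponential production it cannot.
print(unrated_fraction(1))    # everything still gets rated
print(unrated_fraction(10))   # the unrated share dominates
```

Under these (invented) parameters the unrated fraction climbs past 98% within ten periods, which is exactly the bet the text describes: the system only works if that growing unrated mass is worth less than the distraction it replaces.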

The third question would be what sensitivity you would have to deviations. In any collectively curated system a certain percentage of the content is still going to be what you consider low quality. How much such content would you tolerate before you ditch the system? How much content that you consider high quality, but that the system makes unavailable, would you accept losing? How sensitive are you to the smoothing effects of the collective curation mechanism, both in exclusion and inclusion? I suspect we are much more sensitive than we allow for.

Any anti-akrasia technology based on curation – even collective curation – would have to deal with those issues, at least. And probably many others.

*

Maybe it is worth also thinking about what it says about our view of human nature if we believe that solutions to akrasia need to be engineered. Are we permanently flawed, or is the fight against akrasia something that actually has corona effects in us – character building effects – that we should embrace?

Building akrasia away is different from developing the self-discipline to keep it in check, is it not?

Any problem that can be rendered as an akrasia problem – and that goes, perhaps, even for issues of fake news and similar content related conundrums – needs to be examined in the light of some of these questions, I suspect.

Notes on attention, fake news and noise #4: Jacques Ellul and the rise of polyphonic propaganda part 1

Jacques Ellul is arguably one of the earliest and most consistent technology critics we have. His texts are due for a revival in a time when technology criticism is in demand, and even techno-optimists like myself would probably welcome that: even if he is fierce and often caustic, he is interesting and thoughtful. Ellul had a lot to say about technology in books like The Technological Society and The Technological Bluff, but he also discussed the effects of technology on social information and news. In his bleak little work Propaganda: The Formation of Men’s Attitudes (New York, 1965 [1962]) he examines how propaganda draws on technology and how the propaganda apparatus shapes views and opinions in a society. There are many salient points in the book, and quotes that are worth debating.

That said, Ellul is not an easy read or an uncontroversial thinker. Here is how he connects propaganda and democracy, arguing that state propaganda is necessary to maintain democracy:

“I have tried to show elsewhere that propaganda has also become a necessity for the internal life of a democracy. Nowadays the State is forced to define an official truth. This is a change of extreme seriousness. Even when the State is not motivated to do this for reasons of actions or prestige, it is led to it when fulfilling its mission of disseminating information.

We have seen how the growth of information inevitably leads to the need for propaganda. This is truer in a democratic system than in any other.

The public will accept news if it is arranged in a comprehensive system, and if it does not speak only to the intelligence but to the ‘heart’. This means, precisely, that the public wants propaganda, and if the State does not wish to leave it to a party, which will provide explanations for everything (i.e. the truth), it must itself make propaganda. Thus, the democratic State, even if it does not want to, becomes a propagandist State because of the need to dispense information. This entails a profound constitutional and ideological transformation. It is, in effect, a State that must proclaim an official, general, and explicit truth. The State can no longer be objective or liberal, but is forced to bring to the overinformed people a corpus intelligentiae.”

Ellul says, in effect, that in a noise society there is always propaganda – the question is only who is behind it. It is a grim world view, in which a State that yields the responsibility to engage in propaganda simply yields it to someone else.

Ellul comments, partly wryly, that the only way to avoid this is to give citizens 3–4 hours a day to engage in becoming better citizens, and to reduce the working day to 4 hours – a solution he himself admits is simplistic and unrealistic, since it would require that citizens “master their passions and egotism”.

The view raised here is useful because it clearly states a view that sometimes seems to underlie the debate we are having – that there is a necessity for the State to become an arbiter of truth (or to designate one), or someone else will take that role. The weakness in this view is a weakness that plagues Ellul’s entire analysis, however, and in a sense our problem is worse. Ellul takes, as his object of study, propaganda from the Soviet Union and Nazi Germany. His view of propaganda is one that is largely monophonic. Yes, technology still pushes information on citizens, but in 1965 it did so unidirectionally. Our challenge is different and perhaps more troubling: we are dealing with polyphonic propaganda. The techniques of propaganda are employed by a multitude of parties, and the net effect is not to produce truth – as Ellul would have it – but to eliminate the conditions for truth. Truth no longer remains viable in a set of mutually contradictory propaganda systems; it is reduced to mere feelings and emotions: “I feel this”. “This is my truth”. “This is the way I feel about it”.

In this case the idea that the state should speak too is radically different, because the state or any state-appointed arbiter of truth just adds to the polyphony of voices and provides them with another voice to enter into a polemic with. It fractures the debate even more, and allows for a special category of meta-propaganda that targets the way information is interpreted overall: the idea of a corridor of politically correct views that we have to exist within. Our challenge, however, is not the existence of such a corridor, but the fact that it is impossible to establish a coherent, shared model of reality and hence to decide what the facts are.

An epistemological community must rest on a fundamental cognitive contract, an idea about how we arrive at facts and the truth. It must contain mechanisms of arbitration that are institutions in themselves, independent of political decision making or commercial interest. The lack of such a foundation means that no complex social cognition is possible. That in itself is devastating to a society, one could argue, and is what we need to think about.

It is no surprise that I take issue with Ellul’s assertion that technology is at the heart of the problem, but let me at least outline the argument I think Ellul would have to deal with if he were revising his book for our age. I would argue that in a globalized society, the only way we can establish that basic epistemological foundation to build on is through technology and collaboration within new institutions. I have no doubt that the web could carry such institutions, just as it carries Wikipedia.

There is an interesting observation about the web here, an observation that sometimes puzzles me. The web is simultaneously the most collaborative environment constructed by mankind and the most adversarial. The web and the Internet would not exist but for the protocol agreements that have emerged as its basis (this is examined and studied commendably in David Post’s excellent book Jefferson’s Moose). At the same time the web is a constant arms race around different uses of this collaboratively enabled technology.

Spam is not an aberration or anomaly, but can be seen as an instance of a generalized, platonic pattern in this space. A pattern that recurs throughout many different domains and has started to climb the semantic layers from simple commercial scams to the semiosphere of our societies, where memes compete for attention and propagation. And the question is not how to compete best, but how to continue to engage in institutional, collaborative and, yes, technological innovation to build stronger protections and counter-measures. What is to disinformation as spam filters are to unwanted commercial email? It is not merely spam filters with new keywords; it needs to be something radically new, and most likely institutional in the sense that it requires more than just technology.
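To see why “spam filters with new keywords” falls short, it helps to recall what the classic counter-measure actually does. The sketch below is a minimal naive Bayes spam filter of the kind pioneered for email (the training data is invented for illustration): it works precisely because commercial spam has stable statistical tells, which polyphonic propaganda does not.

```python
from collections import Counter
import math

def train(messages):
    """messages: list of (text, is_spam) pairs. Returns word counts per class."""
    counts = {True: Counter(), False: Counter()}
    totals = Counter()
    for text, is_spam in messages:
        counts[is_spam].update(text.lower().split())
        totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Log-odds that `text` is spam, with add-one smoothing. Positive = spam."""
    score = math.log((totals[True] + 1) / (totals[False] + 1))  # class prior
    vocab = len(set(counts[True]) | set(counts[False]))
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (sum(counts[True].values()) + vocab)
        p_ham = (counts[False][word] + 1) / (sum(counts[False].values()) + vocab)
        score += math.log(p_spam / p_ham)
    return score

# Invented toy corpus, just to show the mechanism.
training = [("buy cheap pills now", True),
            ("cheap offer click now", True),
            ("meeting notes attached", False),
            ("lunch at noon tomorrow", False)]
counts, totals = train(training)
print(spam_score("cheap pills", counts, totals) > 0)
print(spam_score("meeting tomorrow", counts, totals) > 0)
```

The mechanism is purely statistical surface matching – exactly why an analogous filter for disinformation, where the “spam” is well-formed political speech, cannot be the whole answer.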

Ellul’s book provides a fascinating take on propaganda and is required reading for anyone who wants to understand the issues we are working on. More on him soon.

Notes on attention, fake news and noise #3: The Noise Society 10 years later

This February it is 10 years since I defended my doctoral thesis on what I then called the Noise Society. The main idea was that the vision of an orderly, domesticated and controllable information society – modeled on the post-industrial visions of Bell and others – was probably wrongheaded, and that we would see a much wilder society characterized by an abundance of information and a lack of control; in fact, we would see information grow to a point where its value actually collapsed as the information itself collapsed into noise. Noise, I felt then, was a good description not only of individual disturbances in the signal, but also of the cost of signal discovery overall. A noise society would face very different challenges than an information society.

Copyright in a noise society would not be an instrument of encouraging the production of information so much as a tool for controlling and filtering information in different ways. Privacy would not be about controlling data about us as much as having the ability to consistently project a trusted identity. Free expression would not be about the right to express yourself, but about the right not to be drowned out by others. The design of filters would become key in many different ways.

Looking back now, I feel that I was right in some ways and wrong in many, but that the overall conclusion – that the increase in information and the consequences of this information wealth are at the heart of our challenges with technology – was not far off target. What I am missing in the thesis is a better understanding of what information does. My focus on noise was a consequence of treating information as a “thing” rather than a process. Information looks like a noun, but is really a verb.

Revisiting these thoughts, I feel that the greatest mistake was not including Herbert Simon’s analysis of attention as a key concept in understanding information. If I had done that I would have been able to see that noise also is a process, and I would have been able to ask what noise does to a society, theorize that and think about how we would be able to frame arguments of policy in the light of attention scarcity. That would have been a better way to get at what I was trying to understand at the time.

But, luckily, thought is about progress and learning, and not about being right – so what I have been doing in my academic reading and writing for the last three years at least is to emphasize Herbert Simon’s work, and the importance of understanding his major finding that with a wealth of information comes a poverty of attention and a need to allocate attention efficiently.

I believe this can be generalized, and that the information wealth we are seeing is just one aspect of an increasing complexity in our societies. The generalized Simon theorem is this: with a wealth of complexity comes a poverty of cognition and a need to learn efficiently. Simon, in his 1969 talk on this subject, notes that it is only by investing in artificial intelligence that we can do this, and he says that it is obvious to him that the purpose of all of our technological endeavours is to ensure that we learn faster.

Learning, adapting to a society where our problems are an order of magnitude more complex, is key to survival for us as a species.
It follows that I think the current focus on digitization and technology is a mere distraction. What we should be doing is to re-organize our institutions and societies for learning more, and faster. This is where the theories of Hayek and others on knowledge coordination become helpful and important for us, and our ideological discussions should focus on whether we are learning as a society or not. There is a wealth of unanswered questions here, such as how we measure the rate of learning, what the opposite of learning is, how we organize for learning, how technology can help and how it harms learning — questions we need to dig into and understand at a very basic level, I think.

So, looking back at my dissertation – what do I think?

I think I captured a key way in which we were wrong, and I captured a better model – but the model I was working with then was still fatally flawed. It focused on information as a thing not a process, and construed noise as gravel in the machinery. The focus on information also detracts from the real use cases and the purpose of all the technology we see around us. If we were, for once, to take our ambitions “to make the world a better place” seriously, we would have to think about what it is that makes the world better. What is the process that does that? It is not innovation as such, innovation can go both ways. The process that makes our worlds better – individually and as societies – is learning.

In one sense I guess this is just an exercise in conceptual modeling, and the question I seem to be answering is what conceptual model is best suited to understand and discuss issues of policy in the information society. That is fair, and a kind of criticism that I can live with: I believe concepts are crucially important and before we have clarified what we mean we are unable to move at all. But there is a risk here that I recognize as well, and that is that we get stuck in analysis-paralysis. What, then, are the recommendations that flow from this analysis?

The recommendations could be surprisingly concrete for the three policy areas we discussed, and I leave as an exercise for the reader to think about them. How would you change the data protection frameworks of the world if the key concern was to maximize learning? How would you change intellectual property rights? Free expression? All are interesting to explore and to solve in the light of that one goal. I tend to believe that the regulatory frameworks we end up with would be very different than the ones that we have today.

As one part of my research as an adjunct professor at the Royal Institute of Technology I hope to continue exploring this theme and others. More to come.

Notes on attention, fake news and noise #2: On the non-linear value of speech and freedom of dialogue or attention

It has become more common to denounce the idea that more speech means better democracy. Commentators, technologists and others have come out to say that they were mistaken – that their belief that enabling more people to speak would improve democracy was wrong, or at the very least simplistic. It is worth analyzing what this really means, since it is a reversal of one of the fundamental hopes the information society vision promised.

The hope was this: that technology would democratize speech and that a multitude of voices would disrupt and displace existing, incumbent hierarchies of power. If the printing press meant that access to knowledge exploded in western society, the Internet meant that the production of knowledge, views and opinions now was almost free and frictionless: anyone could become a publisher, a writer, a speaker and an opinion maker.

To a large extent this is what has happened. Anyone who wants to express themselves today can fire up their computer, comment on a social network, write a blogpost or tweet and share their words with whoever is willing to listen – and therein lies the crux. We have, historically, always focused on speech because the scarcity we fought was one of voice: it was hard to speak, to publish, to share your opinion. But the reality is that free speech or free expression just form one point in a relationship – for free speech to be worth anything someone has to listen. Free speech alone is the freedom of monologue, perhaps of the lunatic raving to the wind or the sole voice crying out in the desert. Society is founded upon something more difficult: the right to free dialogue.

You may argue that this is a false and pernicious dichotomy: the dialogue occurs when someone chooses to listen, and no-one is, today, restricted from listening to anyone, so why should we care about the listening piece of dialogue? The only part that needs to be safe-guarded is, you may say, the right to speak. All else follows.

This is where we may want to dig deeper. If you speak, can everyone listen? Do they want to? Do you have a right to be listened to? Do you have a right to be heard that corresponds to your right to speak? Is there, in fact, a duty to listen that precedes the right to speak?

We enter difficult territory here, but with the increasing volume of noise in our societies this question becomes more salient than ever before. A fair bit of that noise is in fact speech, from parties that use speech to drown out other speech. Propaganda and censorship are difficult in a society characterized by information wealth and abundance, but noise that drowns out speech is readily available: not control, but excess, flooding and silence through shouting others down – those are the threats to our age.

When Zeynep Tufekci analyzes free speech in a recent Wired article, she notes that even if it is a democratic value, it is not the only one. There are other values as well. That is right, but we could also ask if we have understood the value at play here in the right way. Tufekci’s excellent article goes on to note that there is a valuable distinction between attention and speech, and that there is no right to attention. Attention is something that needs to be freely given, and much of her article asks the legitimate question of whether current technologies, platforms and business models allow us to allocate attention freely. We could ask whether what she is saying implies a freedom of attention lurking somewhere here as well.

When someone says that the relationship between free expression and the quality and robustness of a democracy is non-linear, they can be saying many different things. There is a tendency to think that what we need to accept is a balancing of free speech and free expression, and that there are other values that we are neglecting. We could, however, equally say that we have misunderstood the fundamental nature and structure of the value we are trying to protect.

Just because the bottleneck used to be speech (and Tufekci makes this point as well), we focused there. What we really wanted was perhaps free dialogue, built on free speech and the right to allocate one’s attention freely, as one sees fit. Or maybe what we wanted was the freedom to participate in democratic discourse, something that is, again, different.

Why, then, is this distinction important? Perhaps because the assumption of the constancy of the underlying value we are trying to protect – the idea that free speech is well understood and that we should just “balance” it – leads us to solution spaces where we unduly harm the very values we would like to protect. By examining alternative legal universes where a right to dialogue, a right to free attention, a right to democratic discourse et cetera could exist, we start from that value rather than give up on it and enter into the language of balancing and restricting.

There is something else here that worries me, and that is that sometimes there is almost a sense that we are but victims of speech, information overload and distraction. That we have no choice, and that this choice needs to be designed, architected and prescribed for us. In its worst forms this assumption derives the need to balance speech from democratic outcomes and people’s choices. It assumes that something must be wrong with free speech because people are making choices we do not agree with, so they must be victims. They do not know what they are doing. This assumption – admittedly exaggerated here – worries me greatly, and highlights another complexity in our set of problems.

How do we know when free speech is not working? What are the indications that the quality of democracy is not increasing with the amount of speech available in a community? It cannot just be that we disagree with the choices made in that democracy, so what could we be looking for? A lack of commitment to democracy itself? A lack of respect for its institutions?
As we explore this further, and examine other possible consistent sets of rights around opinion making, speech, attention, dialogue and democratic discourse we need to start sorting these things out too.

Just how do we know that free speech has become corrosive noise and is eroding our democracy? And how much of that is technology’s fault and how much is our responsibility as citizens? That is no easy question, but it is an important one.

(Picture credit: John W. Schulze CC-attrib)