Models of speech (Fake News Notes XI)

One thing that has been occupying me recently is the question of what speech is for. In some senses this is a heretical question – many would probably argue that speech is an inalienable right, and so it really does not have to be for anything at all. I find that unconvincing, especially in a reality where we need to balance speech against a number of other rights. I also find it helpful to think through different mental models of speech in order to really figure out how they come into conflict with each other.

Let me offer two examples of such models and the function each assigns to speech – they are, admittedly, simplified, but they tell an interesting story that can be used to understand and explore part of the pressure that free expression and speech are under right now.

The first model is one in which the primary purpose of speech is discovery. It is through speech we find and develop different ideas in everything from art to science and politics. The mental model I have in mind here is a model of “the marketplace of ideas”. Here the discovery and competition between ideas is the key function of speech.

The second model is one in which speech is the means through which we deliberate in a democracy. It is how we solve problems, rather than how we discover new ideas. The mental model I have in mind here is Habermas’ public sphere. Here speech is collaborative and seeks solutions from commonly agreed facts.

So we end up with, in a broad strokes, coarse grained kind of way, these two different functions: discovery and deliberation.

Now, as we turn to the Internet and ask how it changes things, we can see that it really increases discovery by an order of magnitude – but that it so far seems to have done little (outside of the IETF) to increase our ability to deliberate. If we now generalise a little bit and argue that Europeans think of speech as deliberative and Americans think of speech as discovery, we see a major fault line open up between those different perspectives.

This is not a new insight. One of the most interesting renditions of this is something we have touched on before – Simone Weil’s notion of two spheres of speech. In the first sphere anything would be allowed and absolutely no limitations imposed. In the second sphere you would be held accountable for the opinions you really intended to advance as your own. Weil argued that there was a clear, and meaningful, difference between what one says and what one means.

The challenge we have is that while technology has augmented our ability to say things, it has not augmented our ability to mean them. The information landscape is still surprisingly flat, and no particular rugged landscapes seem to be available for those who would welcome a difference between the two modes of speech. But that should not be impossible to overcome – in fact, one surprising option that this line of argument seems to suggest is that we should look to technical innovation to see how we can create much more rugged information landscapes, with clear distinctions between what you say and what you mean.

*

The other mental model that is interesting to examine more closely is the atomic model of speech, in which speech is considered mostly as a set of individual propositions or statements. The question of how to delineate the rights of speech then becomes a question of adjudicating different statements and determining which ones should be deemed legal and which ones should be deemed illegal – or, with a more fine-grained resolution, which ones should be legal, which ones should be removed out of moral concerns and which ones can remain.

The atom of speech in this model is the statement or the individual piece of speech. This propositional model of speech has, historically, been the logical way to approach speech, but with the Internet there seems to be an alternative and complementary model of speech that is based on patterns of speech rather than individual pieces. We have seen this emerge as a core concern in a few individual cases, and then mostly to identify speakers who through a pattern of speech have ended up being undesirable on a platform or in a medium. But patterns of speech should concern us even more than they do today.

Historically we have only been concerned with patterns of speech when we have studied propaganda. Propaganda is a broad-based pattern of speech where all speech is controlled by a single actor, and the resulting pattern is deeply corrosive, even if individual pieces of speech may still be fine and legitimate. In propaganda we care about that which is being suppressed as well as what is being fabricated. And, in addition to that, we care about the dominating narratives that are being told, because they create the background against which all other statements are interpreted. Propaganda, Jacques Ellul teaches us, always comes from a single center.

But the net provides a challenge here. The Internet makes possible a weird kind of poly-centric propaganda that originates in many different places, and this in itself lends the pattern credibility and power. The most obvious example of this is the pattern of doubt that is increasingly eroding our common baseline of facts. This pattern is problematic because it contains no single statement that is violative, but it opens up our common shared baseline of facts to completely costless doubt. That doubt has become both cheap to produce and distribute is a key problem that precedes that of misinformation.

The models we find standing against each other here can be called the propositional model of speech and the pattern model of speech. Both ask hard questions, but in the second model the question is less about which statements should be judged to be legal or moral, and more about what effects we need to look out for in order to be able to understand the sum total effect of the way speech affects us.

Maybe one reason we focus on the first model is that it is simpler; it is easier to debate and discuss whether something should be taken down based on qualities inherent in that piece of content than to debate whether there are patterns of speech that we need to worry about and counteract.

Now, again, coming back to the price of doubt, I think we can say that doubt is cheap because we operate in an entirely flat information landscape where doubt is equally cheap for all statements. There is no one imposing a cost on you for doubting that we have been to the moon, that vaccines work, or any other thing that used to be fairly well established.

You are not even censured by your peers for this behaviour anymore, because we have, oddly, come to think of doubt as a virtue in the guise of “openness”. Now, what I am saying is not that doubt is dangerous or wrong (cue the accusations about a medieval view of knowledge), but that when the pendulum swings the other way and everything is open to costless doubt, we lose something important that binds us together.

Patterns of speech – perhaps even a weaker version, such as tone of voice – remain interesting and open areas to look at more closely as we try to assess the functions of speech in society.

*

One last model is worth looking at more closely, and that is the model of speech as a monologic activity. When we speak about speech we rarely speak about listeners. There are several different possibilities here to think carefully about the dialogic nature of speech, as this makes speech into an n-person game, rather than a monologic act of speaking.

As we do that we find that different pieces of speech may impact and benefit different groups differently. If we conceive of speech as an n-person game we can, for example, see that anti-terrorist researchers benefit from pieces of speech that let them study terrorist groups more closely, that vulnerable people who have been radicalised in different ways may suffer from exposure to that same piece of speech, and that politicians may gain in stature and importance from opposing that same piece of speech.

The pieces of speech we study become more like moves on a chess board with several different players. A certain speech act may threaten one player, weaken another and benefit a third. If we include counter speech in our model, we find that we are sketching out the early stages of speech as a game that can be played.

This opens up for interesting ideas: can we find an optimisation criterion for speech, build a joint game with recommendation algorithms, moderator functions and different pieces of consumer software, and play that game a million times to find strategies for moderating and recommending content that fulfil that optimisation criterion?

Now, then, what would that criterion be? If we wanted to let an AI play the Game of Speech – what would we ask that it optimise? How would we keep score? That is an intriguing question, and it is easy to see that there are different options: we could optimise for variance in the resulting speech, or for agreement, or for solving some specific class of problems, or for learning (as measured by accruing new topics and discussing new things?).
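To make the idea concrete, here is a minimal sketch of such a game, with everything invented for illustration: the topics, the payoff, the two moderation policies and the “variance” criterion are toy assumptions, not a real platform or moderation system. The point is only to show what “play the game many times and score it against a criterion” could look like in code.

```python
# A toy sketch of "speech as a game". All names, topics, policies and the
# scoring criterion are illustrative assumptions, not real systems.
import random

TOPICS = ["science", "politics", "art", "conspiracy"]

def play_round(policy, rng):
    """One speech act: a speaker picks a topic, the policy allows or removes it."""
    topic = rng.choice(TOPICS)
    return topic if policy(topic) else None

def score_variance(history):
    """One possible optimisation criterion: how many distinct topics survived."""
    surviving = [t for t in history if t is not None]
    return len(set(surviving))

def evaluate(policy, rounds=200, plays=100, seed=0):
    """Play the game many times and average the criterion, as the post suggests."""
    rng = random.Random(seed)
    totals = []
    for _ in range(plays):
        history = [play_round(policy, rng) for _ in range(rounds)]
        totals.append(score_variance(history))
    return sum(totals) / len(totals)

# Two hypothetical moderation strategies to compare under the chosen criterion.
allow_all = lambda topic: True
remove_conspiracy = lambda topic: topic != "conspiracy"

print(evaluate(allow_all), evaluate(remove_conspiracy))
```

Swapping `score_variance` for an agreement score, a problem-solving score or a learning score is exactly where the hard, value-laden choices would enter.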

Speech as Game is an intriguing model that would take some fleshing out to be more than an interesting speculative thought experiment – but it could be worth a try.

The noble and necessary lie (Fake News Notes X)

Plato’s use of the idea of a noble lie was oppressive. He wanted to tell the people a tale of their origin that would encourage them to bend and bow to the idea of a stratified society, and he suggested that this would make everyone better off — and we clearly see that today for what it was: a defense of a class society that kept a small elite at the top, not through meritocracy or election, but through narrative.

But there is another way to read this notion of a foundational myth, and that is to read it as that “common baseline of facts” that everyone is now calling for. This “common baseline” is often left unexplained and taken for granted, but the reality is that with the amount of information and criticism and skepticism that we have today, such a baseline will need to be based on a “suspension of disbelief”, as William Davies suggests:

Public life has become like a play whose audience is unwilling to suspend disbelief. Any utterance by a public figure can be unpicked in search of its ulterior motive. As cynicism grows, even judges, the supposedly neutral upholders of the law, are publicly accused of personal bias. Once doubt descends on public life, people become increasingly dependent on their own experiences and their own beliefs about how the world really works. One effect of this is that facts no longer seem to matter (the phenomenon misleadingly dubbed “post-truth”). But the crisis of democracy and of truth are one and the same: individuals are increasingly suspicious of the “official” stories they are being told, and expect to witness things for themselves.

[…] But our relationship to information and news is now entirely different: it has become an active and critical one, that is deeply suspicious of the official line. Nowadays, everyone is engaged in spotting and rebutting propaganda of one kind or another, curating our news feeds, attacking the framing of the other side and consciously resisting manipulation. In some ways, we have become too concerned with truth, to the point where we can no longer agree on it. The very institutions that might once have brought controversies to an end are under constant fire for their compromises and biases.

The challenge here is this: if we are to arrive at a common baseline of facts, we have to accept that there will be things treated as facts that we will come to doubt and then to disregard as they turn out to be false. The value we get for that is that we will be able to start thinking together again, we will be able to resurrect the idea of a common sense.

So, maybe the problem underlying misinformation and disinformation is not that we face intentionally false information, but that we have indulged too much in a skepticism fueled by a wealth of information and a poverty of attention? We lack a mechanism for agreeing on what we will treat as true, rather than for agreeing on what is – in any more ontological sense – true.

The distinction between a common baseline of facts and a noble lie is less clear in that perspective. A worrying idea, well expressed in Mr Davies’ essay. But the conclusion is ultimately provocative, and perhaps disappointing:

The financial obstacles confronting critical, independent, investigative media are significant. If the Johnson administration takes a more sharply populist turn, the political obstacles could increase, too – Channel 4 is frequently held up as an enemy of Brexit, for example. But let us be clear that an independent, professional media is what we need to defend at the present moment, and abandon the misleading and destructive idea that – thanks to a combination of ubiquitous data capture and personal passions – the truth can be grasped directly, without anyone needing to report it.

But why would the people cede the mechanism of producing truth back to professional media? What is the incentive? Where the common baseline of facts or the noble lie will sit in the future is far from clear, but it seems unlikely that it will return to an institution that has once lost grasp of it so fully. If the truth cannot be grasped directly – if that indeed is socially dangerous and destructive – we need to think carefully about who we allow the power to curate that new noble lie (and no, it should probably not be corporations). If we do not believe that the common baseline is needed anymore, we need new ways to approach collective decision making — an intriguingly difficult task.


Authority currencies and rugged landscapes of truth (Fake News Notes #9)

One model for thinking about the issue of misinformation is to say that we are navigating a flat information desert, where there is no topology of truth available. No hills of fact, no valleys of misinformation. Our challenge is to figure out a good way to add a third dimension – or at least more than one single dimension – to the universe of news, or information.

How would one do this? There are obvious ways like importing trust from an off-line brand or other off-line institution. When we read the New York Times on the web we do so under the reflected light of that venerable institution off-line and we expect government websites to carry some of the authority government agencies do – something that might even be signalled through the use of a specific top-level domain, like .gov.

But are there new ways? New strategies that we could adopt?

One tempting, if simplistic, model is to cryptographically sign pieces of information. Just like we can build a web of trust by signing each other’s keys, we may be able to “vouch” for a piece of information or a source of information. Such a model would be open to abuse, however: it is easy to imagine sources soliciting signatures based on political loyalty rather than factual content – so that seems to be a challenge that would have to be dealt with.
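As a rough sketch of what “vouching” could look like mechanically, here is a toy example using Ed25519 signatures from the third-party `cryptography` package. The `Piece`, `vouch` and `count_valid_vouches` names are invented for illustration; this is not an existing protocol, and it deliberately says nothing about the hard part, namely why any given vouch should be trusted.

```python
# A minimal sketch of vouching for a piece of information with signatures.
# Assumes the `cryptography` package; all structures are illustrative only.
from dataclasses import dataclass, field
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

@dataclass
class Piece:
    body: bytes
    vouches: list = field(default_factory=list)  # (public_key, signature) pairs

def vouch(piece, key):
    """Sign the body of the piece, adding one more link to a small web of trust."""
    piece.vouches.append((key.public_key(), key.sign(piece.body)))

def count_valid_vouches(piece):
    """Count the vouches whose signature actually verifies against the body."""
    valid = 0
    for public_key, signature in piece.vouches:
        try:
            public_key.verify(signature, piece.body)
            valid += 1
        except InvalidSignature:
            pass
    return valid

piece = Piece(b"A claim, or a whole article, someone is willing to stand behind.")
vouch(piece, Ed25519PrivateKey.generate())
vouch(piece, Ed25519PrivateKey.generate())
print(count_valid_vouches(piece))  # 2
```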

Another version of this is to sign with a liability — meaning that a newspaper might sign a piece of news with a signature that essentially commits them to full liability for the piece should it be wrong or flawed from a publicist standpoint. This notion of declared accountability would be purely economic and might work to generate layers within our information space. If we wanted to, we could ask to see only pieces that were backed by a liability acceptance of, say, 10 million USD. The willingness to be sued or attacked over the content would then create a kind of topology of truth entirely derived from the levels of liability the publishing entity declared themselves willing to absorb.
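The reader-side mechanics of that idea are almost trivially simple, which is part of its appeal. A sketch, with names and figures invented purely for illustration:

```python
# A sketch of the "declared liability" idea: each piece carries the amount the
# publisher declares itself willing to absorb if the piece is wrong, and the
# reader filters on that threshold. Names and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class SignedPiece:
    headline: str
    publisher: str
    declared_liability_usd: int  # the liability the publisher commits to absorb

def topology_filter(pieces, floor_usd=10_000_000):
    """Show only pieces backed by at least `floor_usd` of declared liability."""
    return [p for p in pieces if p.declared_liability_usd >= floor_usd]

feed = [
    SignedPiece("Budget deficit figures revised", "Small Courier", 15_000_000),
    SignedPiece("Moon landing was staged", "Anonymous blog", 0),
]
print([p.headline for p in topology_filter(feed)])
```

All of the real difficulty sits outside this code: who adjudicates that a piece was “wrong”, and who collects on the declared liability.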

A landscape entirely determined by liability has some obvious weaknesses – would it not be the same as saying that truth equals wealth? Well, not necessarily – it is quite possible to take a bet that will ruin you, hence allowing smaller publishers who are really sure of their information to take on liability beyond their actual financial means. In fact, the entire model looks a little like placing bets on pieces of news or information – declaring that we are betting that it is true and are happy to take on anyone who bets that we have published something fake. But still, the connection with money will make people uneasy – even though, frankly, classical publicist authority is underpinned by a financial element as well. In this new model that could switch from legal entities to individuals.

That leads us on to another idea – the idea of an “authority currency”. We could imagine a world in which journalists accrued authority over time, by publishing pieces that were found to be accurate and fair reporting. The challenge, however, is the adjudication of the content. Who gets to say that a piece should generate authority currency for someone? If we say “everyone” we end up with the populist problem of political loyalty trumping factual accuracy, so we need another mechanism (although it is tempting to use Patreon payments as a strong signal in such a currency – if people are freely willing to pay for the content it must have some qualities). If we say “platforms” we end up with the traditional question of why we should trust platforms. If we say “publishers” they end up marking their own homework. If we say “the state” we are slightly delusional. Can we, then, imagine a new kind of institution or mechanism that could do this?
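The accounting part of an authority currency is easy to imagine; the adjudication is not. A toy sketch that keeps the adjudicator deliberately abstract, since that is exactly the open problem – all weights and names below are assumptions made up for illustration:

```python
# A toy "authority currency": a journalist's balance grows when a piece is
# adjudicated accurate and shrinks when it is not, with voluntary reader
# payments counted only as a weak signal. Who adjudicates is left open.
from collections import defaultdict

authority = defaultdict(float)

def settle(journalist, adjudicated_accurate, patron_payments=0.0):
    """Update the balance for one published piece (weights are arbitrary)."""
    authority[journalist] += 1.0 if adjudicated_accurate else -2.0
    authority[journalist] += 0.1 * patron_payments

settle("reporter_a", True, patron_payments=3.0)
settle("reporter_a", False)
print(authority["reporter_a"])
```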

I am not sure. What I do feel is that this challenge – of moving from the flat information deserts to the rugged landscapes of truth – highlights some key difficulties in the work on misinformation.

Weil’s paradox: intention and speech (Fake News Notes #8)

Simone Weil, in her curious book Need for Roots, notes the following on the necessity for freedom of opinion:

[…] it would be desirable to create an absolutely free reserve in the field of publication, but in such a way as for it to be understood that the works found therein did not pledge their authors in any way and contained no direct advice for readers. There it would be possible to find, set out in their full force, all the arguments in favour of bad causes. It would be an excellent and salutary thing for them to be so displayed. Anybody could there sing the praises of what he most condemns. It would be publicly recognized that the object of such works was not to define their authors’ attitudes vis-à-vis the problems of life, but to contribute, by preliminary researches, towards a complete and correct tabulation of data concerning each problem. The law would see to it that their publication did not involve any risk of whatever kind for the author.

Simone Weil, Need for Roots, p. 22

She is imagining here a sphere where anything can be said, any view expressed and explored, all data examined — and it is interesting that she mentions data, because she is aware that a part of the challenge is not just what is said, but what data is collected and shared on social problems. But she also recognizes that such a complete free space needs to be distinguished from the public sphere of persuasion and debate:

On the other hand, publications destined to influence what is called opinion, that is to say, in effect, the conduct of life, constitute acts and ought to be subjected to the same restrictions as are all acts. In other words, they should not cause unlawful harm of any kind to any human being, and above all, should never contain any denial, explicit or implicit, of the eternal obligations towards the human being, once these obligations have been solemnly recognized by law.

Simone Weil, Need for Roots, ibid.

This category – “publications destined to influence what is called opinion” – she wants to treat differently. Here she wants the full machinery of not just law, but also morals, to apply. Then she notes, wryly one thinks, that this will present some legal challenges:

The distinction between the two fields, the one which is outside action and the one which forms part of action, is impossible to express on paper in juridical terminology. But that doesn’t prevent it from being a perfectly clear one.

Simone Weil, Need For Roots, ibid.

This captures in a way the challenge that faces platforms today. The inability to express this legally is acutely felt by most who study the area, and Weil’s articulation of the two competing interests – free thought and human responsibility – is clean and clear.

Now, the question is: can we find any other way to express this than in law? Are there technologies that could help us here? We could imagine several models.

One would be to develop a domain for the public sphere, for speech that intends to influence. To develop an “on the record”-mode for the flat information surfaces of the web. You could do this trivially by signing your statement in different ways, and statements could be signed by several different people as well – the ability to support a statement in a personal way is inherent in the often cited disclaimers on Twitter — where we are always told that RT does not equal endorsement. But the really interesting question is how we do endorse something, and if we can endorse statements and beliefs with different force.

Imagine a web where we could choose not just to publish, but publish irrevocably (this is for sure connected with discussions around blockchain) and publish with the strength of not just one individual, but several. Imagine the idea that we could replicate editorial accountability not just in law, but by availing those that seek it of a mode of publishing, a technological way of asserting their accountability. That would allow us to take Weil’s clear distinction and turn it into a real one.
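One very rough sketch of “publishing irrevocably” is an append-only log in which every entry commits to the previous one, so the record cannot be quietly rewritten, and each entry can carry several endorsers. This is a toy hash chain written for illustration, not a real blockchain design or any existing product:

```python
# A minimal append-only log of endorsed statements. Toy sketch only: a real
# system would also need signatures, identity and distribution of the log.
import hashlib, json

def append_entry(log, statement, endorsers):
    """Add a statement endorsed by one or more people, chained to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"statement": statement, "endorsers": endorsers, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log):
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("statement", "endorsers", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "We stand behind this report.", ["editor_a", "reporter_b"])
append_entry(log, "Correction to paragraph three.", ["editor_a"])
print(verify_chain(log))  # True
```

The interesting design space is in the second field: how many endorsers, with what declared force, and at what personal cost.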

It would require, of course, that we accept that there is a lot of “speech” – if we use that as the generic term for the first category that Weil explores – that we disagree with. But we would be able to hold those who utter “opinions” – the second category, speech intended to influence and change minds – accountable.

One solution to the issue of misinformation or disagreeable information or speech is to add dimensionality to the flat information surfaces we are interacting with today.

Real and unreal news (Notes on attention, fake news and noise #7)

What is the opposite of fake news? Is it real news? What, then, would that mean? It seems important to ask that question, since our fight against fake news also needs to be a fight _for_ something. But this quickly becomes an uncomfortable discussion, as evidenced by how people attack the question. When we discuss what the opposite of fake news is we often end up defending facts – and we inevitably end up quoting Senator Moynihan, smugly saying that everyone is entitled to their own opinions, but not to their own facts. This is naturally right, but it ducks the key question of what a fact is, and whether it can exist on its own.

Let’s offer an alternative view that is more problematic. In this view we argue that facts can only exist in relationship to each other. They are intrinsically connected in a web of knowledge and probability, and this web exists in a set of ontological premises that we call reality. Fake news – we could then argue – can exist only because we have lost our sense of a shared reality.

We hint at this when we speak of “a baseline of facts” or similar phrases (this phrase was how Obama referred to the challenge when interviewed by David Letterman recently), but we stop shy of admitting that we ultimately are caught up in a discussion about fractured reality. Our inability to share a reality creates the cracks, the fissures and fragments in which truth disappears.

This view has more troubling implications, and should immediately lead us to also question the term “fake news”, since the implication is clear – something can only be fake if there exists a shared reality against which we can measure it. The reason the term “fake news” is almost universally shunned by experts and people analyzing the issue is exactly this: it is used by different people to attack what they don’t like. We see leaders labeling news sources as “fake news” as a way to demarcate against a rendering of the world that they reject. So “fake” comes to mean “wrong”.

Here is a key to the challenge we are facing. If we see this clearly – that what we are struggling with is not fake vs real news, but right vs wrong news – we also realize that there are no good solutions for the general problem of what is happening with our public discourse today. What we can find are narrow solutions for specific problems that are well-described (such as actions against deliberately misleading information from parties that deliberately misrepresent themselves), but the general challenge is quite different and much more troubling.

We suffer from a lack of shared reality.

This is interesting from a research standpoint, because it forces us to ask the question of how a society constitutes a reality, and how it loses it. Such an investigation would need to touch on things like reality TV and the commodification of journalism (a la Adorno’s view of music – it seems clear that journalism has lost its liturgy). One would need to dig into and understand how truth has splintered and think hard about how our coherence theories of truth allow for this splintering.

It is worthwhile to pause on that point a little: when we understand the truth of a proposition to be its coherence with a system of other propositions, and not correspondence with an underlying, ontologically more fundamental level, we open up for several different truths, as long as you can imagine a set of coherent systems of propositions built on a few basic propositions – the baseline. What we have discovered in the information society is that the natural size of this necessary baseline is much smaller than we thought. The set of propositions we need to create alternate realities that do not seem entirely insane is much smaller than we may have believed. And the cost of creating an alternate reality is falling as you get more and more access to information, as well as to the creativity of others engaged in the same enterprise.

There is a risk that we underestimate the collaborative nature of the alternative realities that are crafted around us, the way they are the result of a collective creative effort. Just as we have seen the rise of massive open online courses in education, we have seen the rise of what we could call massive open online conspiracy theories. They are powered by, and partly created in, the same way — with massive open online role playing games in a nice and interesting middle position. In a sense the unleashed creativity of our collaborative storytelling is what is fracturing reality – our narrative capacity has exploded over the last decades.

So back to our question. The dichotomy we are looking at here is not one between fake and real news, or right and wrong news (although we do treat it that way sometimes). It is in a sense a difference between real and unreal news, but with a plurality of unrealities that we struggle to tell apart. There is no Archimedean point that allows us to lift the real from the fake, no bedrock foundation, as reality itself has been slowly disassembled over the last couple of decades.

A much more difficult question, then, becomes whether we believe that we want a shared reality, or whether we ever had one. It is a recurring theme in songs, literature and poetry – the shaky nature of our reality – and the courage needed to face it. In the remarkable song “Right Where It Belongs” this is well expressed by Nine Inch Nails (and remarkably rendered in this remix (we remix reality all the time)):

See the animal in his cage that you built
Are you sure what side you’re on?
Better not look him too closely in the eye
Are you sure what side of the glass you are on?
See the safety of the life you have built
Everything where it belongs
Feel the hollowness inside of your heart
And it’s all right where it belongs

What if everything around you
Isn’t quite as it seems?
What if all the world you think you know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself find yourself afraid to see?

What if all the world’s inside of your head?
Just creations of your own
Your devils and your gods all the living and the dead
And you really oughta know
You can live in this illusion
You can choose to believe
You keep looking but you can’t find the ones
Are you hiding in the trees?

What if everything around you
Isn’t quite as it seems?
What if all the world you used to know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself, find yourself afraid to see?

The central insight in this is one that underlies all of our discussions around information, propaganda, disinformation and misinformation, and that is the role of our identity. We exist – as facts – within the realities we dare to accept and ultimately our flight into alternate realities and shadow worlds is an expression of our relationship to ourselves.

Hannah Arendt on politics and truth – and fake news? (Notes on attention, fake news and noise #6)

Any analysis of fake news would be incomplete without a reading of Hannah Arendt’s magnificent essay Truth and Politics from 1967. Arendt, in this essay, examines carefully the relationship between truth and politics, and makes a few observations that remind us of why the issue of “fake news” is neither new nor uniquely digital. It is but an aspect of that greater challenge of how we reconcile truth and politics.

Arendt not only anchors the entire discussion solidly in a broader context, but reminds us that this is a tension that has been with civilization since Socrates. “Fake news” is nothing else than yet another challenge that meets us in the gap between dialectic and rhetoric, and Socrates would be surprised and dismayed to find us thinking we had discovered a new phenomenon. The issue of truth in politics is one that has always been at the heart of our civilization and our democratic tradition.
Arendt notes this almost brutally in the beginning of her essay:

“No one has ever doubted that truth and politics are on rather bad terms with each other, and no one, as far as I know, has ever counted truthfulness among the political virtues. Lies have always been regarded as necessary and justifiable tools not only of the politician’s and the demagogue’s but also of the statesman’s trade.” (p 223)

It is interesting to think about how we read Arendt here. Today, as politics is under attack and we suffer from an increase of rhetoric and the decline of dialogue, we almost immediately become defensive. We want to say that we should not deride politics, that politics deserves respect, and that we should be careful to ensure that we do not further increase people’s loss of faith in the political system of democracy — and all of this is both correct and deeply troubling at the same time. It shows us that our faith in the robustness of the system has suffered so many blows now that we shy away from the clear-eyed realization that politics is rhetoric first and dialogue only second (and bad politics never gets to dialogue at all).

Arendt does not mean to insult our democracy, she merely recognizes a philosophical analysis that has remained constant over time. She quotes Hobbes as saying that if power depended on the sum of the angles in a triangle not being equal to the sum of two angles in a rectangle, then books of geometry would be burned by some in the streets. This is what politics is – power – and we should not expect anything else. That is why the education of our politicians is so important, and their character key. Socrates’ sense of urgency when he tries to educate Alcibiades is key, and any reader of the dialogues would be aware of the price of Socrates’ failure in what Alcibiades became.

Arendt also makes an interesting point on the difference between what she calls rational truths – the mathematical, the scientific – and the factual ones, and points out that the latter are “much more vulnerable”. (p 227) And factual truth is the stuff politics are made of, she notes.

“Dominion (to speak Hobbes’ language) when it attacks rational truth oversteps, as it were, its domain while it gives battle on its own ground when it falsifies or lies away facts.” (p 227)

Facts are fair game in politics, and always have been. And Arendt then makes an observation that is key to understanding our challenges and is worth quoting in full:

“The hallmark of factual truth is that its opposite is neither error nor illusion nor opinion, not one of which reflects upon personal truthfulness, but the deliberate falsehood, or lie. Error, of course, is possible, and even common, with respect to factual truth, in which case this kind of truth is in no way different from scientific or rational truth. But the point is that with respect to facts there exists another alternative, and this alternative, the deliberate falsehood, does not belong to the same species as propositions that, whether right or mistaken, intend no more than to say what is, or how something that is appears to me. A factual statement – Germany invaded Belgium in August 1914 – acquires political implications only by being put in an interpretative context. But the opposite proposition, which Clemenceau, still unacquainted with the art of rewriting history, thought absurd, needs no context to be of political significance. It is clearly an attempt to change the record, and as such it is a form of _action_. The same is true when the liar, lacking the power to make his falsehood stick, does not insist on the gospel truth of his statement but pretends that this is his ‘opinion’ to which he claims his constitutional right. This is frequently done by subversive groups, and in a politically immature public the resulting confusion can be considerable. The blurring of the dividing line between factual truth and opinion belongs among the many forms that lying can assume, all of which are forms of action.
While the liar is a man of action, the truthteller, whether he tells a rational or factual truth, most emphatically is not.” (p 245)

Arendt is offering an analysis of our dilemma in as clear a way as can be. Lying is an action, telling the truth is most emphatically not, and the reduction of a falsehood to an opinion creates considerable confusion, to say the least. The insight that telling the truth is less powerful than lying, less of an action, is potentially devastating – liars have something at stake, and truth tellers sometimes make the mistake of thinking that relaying the truth in itself is enough.

But Arendt also offers a solution and hope — and it is evident even in this rather grim quote: she speaks of a politically immature public, and as she closes the essay she takes great pains to say that these lies, these falsehoods, in no way detract from the value of political action. In fact, she says that politics is a great endeavor and one that is worthy of our time, effort and commitment – but ultimately we also need to recognize that it is limited by truth. Our respect – as citizens – for truth is what preserves, she says, the integrity of the political realm.

As in the platonic dialogues, as in Hobbes, as everywhere in history – truth is a matter of character. Our own character, honed in dialogue and made resistant to the worst forms of rhetoric. This is not new – and it is not easy, and cannot be solved with a technical fix.

Link: https://idanlandau.files.wordpress.com/2014/12/arendt-truth-and-politics.pdf

Notes on attention, fake news and noise #5: Are We Victims of Algorithms? On Akrasia and Technology.

Are we victims of algorithms? When we click on click bait and content that is low quality – how much of the responsibility for that click is on us and how much on the provider of the content? The way we answer that question may be connected to an ancient debate in philosophy about akrasia, or weakness of will. Why, philosophy asks, do we do things that are not good for us?

Plato’s Socrates has a rather unforgiving answer: we do those things that are not good for us because we lack knowledge. Knowledge, he argues, is virtue. If we just know what is right we will act in the right way. When we click the low quality entertainment content and waste our time it is because we do not know better. Clearly, then, the answer from a platonic standpoint is to ensure that we enlighten each other. We need a version of digital literacy that allows us to separate the wheat from the chaff, that helps us know better.

In fact, arguably, weakness of will did not exist for Socrates (hence why he is so forbidding, perhaps) but was merely ignorance. Once you know, you will act right.

Aristotle disagreed: his view was that we may hold opinions that are short term and wrong and be affected by them, and hence do things that are not good for us. This view, later developed and adumbrated by Davidson, suggests that decisions are often made without the agent considering all possible things that may have a bearing on a choice. Davidson’s definition is something like this: if someone has two choices, a and b, and does b while knowing that, all things considered, a would be better than b, that is akrasia (not a quote, but a rendering of Davidson). Akrasia then becomes not considering the full set of facts that should inform the choice.

Having one more beer without considering the previous ones, or having one more cookie without thinking about the plate now being empty.

The kind of akrasia we see in the technological space may be more like that. We trade short-term pleasure against long-term gain. A classical Kahneman / Tversky challenge. How do we govern ourselves?
So, how do we solve that? Can the fight against akrasia be outsourced? Designed into technology? It seems trivially true that it can, and this is exactly what tools like Freedom and StayFocusd actually try to do (there are many other versions of course). These apps block off sites or the Internet for a set amount of time, and force you back to focus on what you were doing. They eliminate the distraction of the web – but they are not clearly helping you consume high quality content.

That is a distinction worth exploring.

Could we make a distinction here between access and consumption? We can help fight akrasia at the access level, but it is harder to do when it comes to consumption. Like not buying chocolate so there is none in your fridge, versus simply refraining from eating the chocolate that is in the fridge. It seems easier to do the first – reduce access – rather than control consumption. One is a question of availability, the other of governance. A discrete versus a continuous temptation, perhaps. The access side is, after all, simple enough to sketch in a few lines of code, as below.
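A toy illustration of the access side – a blocklist that only applies inside a declared focus window. This is not how Freedom or StayFocusd are actually implemented; the domains and the 25-minute window are assumptions made up for the example:

```python
# A toy sketch of timed, access-level blocking. Illustrative only; real
# blockers work at the browser-extension, DNS or firewall level.
import time

BLOCKLIST = {"example-clickbait.com", "example-videos.com"}  # hypothetical sites

def allowed(domain, focus_until):
    """During the focus window, blocklisted domains are refused; others pass."""
    if time.time() < focus_until and domain in BLOCKLIST:
        return False
    return True

focus_until = time.time() + 25 * 60  # a 25-minute focus session
print(allowed("example-clickbait.com", focus_until))  # False
print(allowed("example-reference.org", focus_until))  # True
```

Note that nothing in this sketch knows anything about quality: it governs availability, not consumption.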

It seems easy to fight discrete akrasia, but sorting out continuous akrasia seems much harder.

*

Is it desirable to try? Assume that you could download a technology that would only show you high quality content on the web. Would you then install that? A splinternet provider that offers “qualitative Internet only – no click bait or distractions”. It would not have to be permanent, you could set hours for distraction, or allocate hours to your kids. Is that an interesting product?

The first question you would ask would probably be why you should trust this particular curator. Why should you allow someone else to determine what is high quality? Well, assume that this challenge can be met by outsourcing it to a crowd, where you self-identify values and ideas of quality and you are matched with others of the same view. Assume also, while we are at it, that you can do this without the resulting filter bubble problem, for now. Would you – even under those assumptions – trust the system?

The second question would be how such a system can cope with a dynamic in which the rate of information production keeps doubling. Collective curation models need to deal with the challenge of marking an item as ok or not ok – but the largest category will be a third: not rated. A bet on collective curation is a bet that the value of what has not been curated will always be less than the cost of the possible distraction it brings. That is an unclear bet, it seems to me.
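The shape of that problem is easy to see in a toy example, where almost everything in a feed ends up in the unrated bucket. The article names and verdicts below are invented for illustration:

```python
# A small sketch of the "not rated" problem in collective curation: most items
# fall into a third bucket the crowd has not judged yet. Illustrative data only.
from collections import Counter

ratings = {"article_1": "ok", "article_2": "not_ok"}  # crowd verdicts so far

def bucket(feed):
    """Sort a feed into ok / not_ok / not_rated buckets."""
    return Counter(ratings.get(item, "not_rated") for item in feed)

feed = [f"article_{i}" for i in range(1, 8)]
print(bucket(feed))  # Counter({'not_rated': 5, 'ok': 1, 'not_ok': 1})
```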

The third question would be what sensitivity you would have to deviations. In any collectively curated system a certain percentage of the content is still going to be what you consider low quality. How much such content would you tolerate before you ditch the system? How much content made unavailable, but considered high quality by you, would you accept? How sensitive are you to the smoothing effects of the collective curation mechanism? Both in exclusion and inclusion? I suspect we are much more sensitive than we allow for.

Any anti-akrasia technology based on curation – even collective curation – would have to deal with those issues, at least. And probably many others.

*

Maybe it is worth also thinking about what it says about our view of human nature if we believe that solutions to akrasia need to be engineered. Are we permanently flawed, or is the fight against akrasia something that actually has corona effects in us – character building effects – that we should embrace?

Building akrasia away is different from developing the self-discipline to keep it in check, is it not?

Any problem that can be rendered as an akrasia problem – and that goes, perhaps, even for issues of fake news and similar content related conundrums – needs to be examined in the light of some of these questions, I suspect.