The noble and necessary lie (Fake News Notes X)

Plato’s use of the idea of a noble lie was oppressive. He wanted to tell the people a tale of their origins that would encourage them to bend and bow to the idea of a stratified society, and he suggested that this would make everyone better off – and we can clearly see it today for what it was: a defense of a class society that kept a small elite at the top, not through meritocracy or election, but through narrative.

But there is another way to read this notion of a foundational myth, and that is to read it as the “common baseline of facts” that everyone is now calling for. This “common baseline” is often left unexplained and taken for granted, but the reality is that with the amount of information, criticism and skepticism we have today, such a baseline will need to rest on a “suspension of disbelief”, as William Davies suggests:

Public life has become like a play whose audience is unwilling to suspend disbelief. Any utterance by a public figure can be unpicked in search of its ulterior motive. As cynicism grows, even judges, the supposedly neutral upholders of the law, are publicly accused of personal bias. Once doubt descends on public life, people become increasingly dependent on their own experiences and their own beliefs about how the world really works. One effect of this is that facts no longer seem to matter (the phenomenon misleadingly dubbed “post-truth”). But the crisis of democracy and of truth are one and the same: individuals are increasingly suspicious of the “official” stories they are being told, and expect to witness things for themselves.

[…] But our relationship to information and news is now entirely different: it has become an active and critical one, that is deeply suspicious of the official line. Nowadays, everyone is engaged in spotting and rebutting propaganda of one kind or another, curating our news feeds, attacking the framing of the other side and consciously resisting manipulation. In some ways, we have become too concerned with truth, to the point where we can no longer agree on it. The very institutions that might once have brought controversies to an end are under constant fire for their compromises and biases.

The challenge here is this: if we are to arrive at a common baseline of facts, we have to accept that there will be things treated as facts that we will come to doubt, and then to disregard as they turn out to be false. What we get in return is that we will be able to start thinking together again – to resurrect the idea of a common sense.

So, maybe the problem underlying misinformation and disinformation is not that we face intentionally false information, but that we have indulged too much in a skepticism fueled by a wealth of information and a poverty of attention? What we lack is a mechanism for agreeing on what we will treat as true – not a way of settling what is, in any deeper ontological sense, true.

The distinction between a common baseline of facts and a noble lie is less clear from that perspective. A worrying idea, well expressed in Mr Davies’ essay. But the conclusion is ultimately provocative, and perhaps disappointing:

The financial obstacles confronting critical, independent, investigative media are significant. If the Johnson administration takes a more sharply populist turn, the political obstacles could increase, too – Channel 4 is frequently held up as an enemy of Brexit, for example. But let us be clear that an independent, professional media is what we need to defend at the present moment, and abandon the misleading and destructive idea that – thanks to a combination of ubiquitous data capture and personal passions – the truth can be grasped directly, without anyone needing to report it.

But why would people cede the mechanism of producing truth back to professional media? What is the incentive? Where the common baseline of facts, or the noble lie, will sit in the future is far from clear, but it seems unlikely that it will return to an institution that has lost its grasp of it so fully. If the truth cannot be grasped directly – if that indeed is socially dangerous and destructive – we need to think carefully about who we allow the power to curate that new noble lie (and no, it should probably not be corporations). If we do not believe that a common baseline is needed anymore, we need new ways to approach collective decision making – an intriguingly difficult task.


Authority currencies and rugged landscapes of truth (Fake News Notes #9)

One model for thinking about the issue of misinformation is to say that we are navigating a flat information desert, where there is no topology of truth available: no hills of fact, no valleys of misinformation. Our challenge is to figure out a good way to add a third dimension – or at least more than a single dimension – to the universe of news and information.

How would one do this? There are obvious ways, like importing trust from an off-line brand or other off-line institution. When we read the New York Times on the web, we do so in the reflected light of that venerable off-line institution, and we expect government websites to carry some of the authority that government agencies do – something that might even be signalled through the use of a specific top-level domain, like .gov.

But are there new ways? New strategies that we could adopt?

One tempting, if simplistic, model is to cryptographically sign pieces of information. Just as we can build a web of trust by signing each other’s keys, we might be able to “vouch” for a piece of information or a source of information. Such a model would be open to abuse, however: it is easy to imagine sources soliciting signatures based on political loyalty rather than factual content – a challenge any such scheme would have to deal with.
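To make the mechanics concrete, here is a minimal sketch of what such vouching could look like, using Ed25519 signatures from the Python cryptography package. The vouching scheme itself is hypothetical, not an existing standard:

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def vouch(signer: ed25519.Ed25519PrivateKey, article: bytes) -> bytes:
    """Sign the hash of an article: 'I put my name behind this content.'"""
    return signer.sign(hashlib.sha256(article).digest())

def check_vouch(voucher_key: ed25519.Ed25519PublicKey,
                article: bytes, signature: bytes) -> bool:
    """A valid signature proves only *who* vouched, never *that it is true*."""
    try:
        voucher_key.verify(signature, hashlib.sha256(article).digest())
        return True
    except InvalidSignature:
        return False  # altered content, or this voucher never signed it

# Hypothetical source vouching for a hypothetical piece of news:
source = ed25519.Ed25519PrivateKey.generate()
piece = b"The council approved the budget on Tuesday."
sig = vouch(source, piece)
print(check_vouch(source.public_key(), piece, sig))          # True
print(check_vouch(source.public_key(), piece + b"?!", sig))  # False
```

Note what the sketch does and does not give us: a valid signature proves who vouched for the content, never that the content is accurate – which is exactly where the loyalty-based abuse creeps in.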

Another version of this is to sign with a liability – meaning that a newspaper might sign a piece of news with a signature that essentially commits it to full liability for the piece, should it be wrong or flawed from a publicist standpoint. This notion of declared accountability would be purely economic and might work to generate layers within our information space. If we wanted to, we could ask to see only pieces that were backed by a liability acceptance of, say, 10 million USD. The willingness to be sued or attacked over the content would then create a kind of topology of truth, derived entirely from the levels of liability the publishing entity declared itself willing to absorb.
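On the reader’s side, such a layer would amount to little more than a threshold filter. A minimal sketch, with all publishers and figures invented:

```python
from dataclasses import dataclass

@dataclass
class SignedPiece:
    headline: str
    publisher: str
    declared_liability_usd: int  # what the publisher commits to absorb if wrong

def liability_filter(feed: list[SignedPiece], threshold_usd: int) -> list[SignedPiece]:
    """Show only pieces whose publishers have staked at least the threshold."""
    return [p for p in feed if p.declared_liability_usd >= threshold_usd]

feed = [
    SignedPiece("Budget passes after late vote", "Daily Ledger", 10_000_000),
    SignedPiece("Shocking rumour, sources say", "anonymous mirror", 0),
    SignedPiece("Two-year investigation of the port deal", "Small Gazette", 12_000_000),
]

# A reader who only wants pieces backed by at least 10 million USD:
for piece in liability_filter(feed, threshold_usd=10_000_000):
    print(f"{piece.headline} ({piece.publisher})")
```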

A landscape entirely determined by liability has some obvious weaknesses – would it not be the same as saying that truth equals wealth? Well, not necessarily: it is quite possible to take a bet that will ruin you, which allows smaller publishers who are really sure of their information to take on liability beyond their actual financial means. In fact, the entire model looks a little like placing bets on pieces of news or information – declaring that we are betting that a piece is true, and are happy to take on anyone who bets that we have published something fake. But still, the connection with money will make people uneasy – even though, frankly, classical publicist authority is underpinned by a financial element as well. In this new model, that element could shift from legal entities to individuals.

That leads us on to another idea – the idea of an “authority currency”. We could imagine a world in which journalists accrued authority over time, by publishing pieces that were found to be accurate and fair. The challenge, however, is the adjudication of the content. Who gets to say that a piece should generate authority currency for someone? If we say “everyone”, we end up with the populist problem of political loyalty trumping factual accuracy, so we need another mechanism (although it is tempting to use Patreon payments as a strong signal in such a currency – if people are freely willing to pay for content, it must have some qualities). If we say “platforms”, we end up with the traditional question of why we should trust platforms. If we say “publishers”, they end up marking their own homework. If we say “the state”, we are slightly delusional. Can we, then, imagine a new kind of institution or mechanism that could do this?
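As a sketch of what the accrual itself might look like – with the unresolved adjudication question isolated in a single function parameter, and all names and weights invented:

```python
from typing import Callable, Iterable

# Who supplies this function is exactly the open question in the text:
# readers, platforms, publishers, the state, or some new institution.
Adjudicator = Callable[[str], bool]  # piece -> judged accurate and fair?

def accrue_authority(pieces: Iterable[str],
                     adjudicate: Adjudicator,
                     reward: float = 1.0,
                     penalty: float = 3.0) -> float:
    """Accrue authority over a body of work; one inaccuracy costs more
    than one accurate piece earns."""
    authority = 0.0
    for piece in pieces:
        authority += reward if adjudicate(piece) else -penalty
    return max(authority, 0.0)  # authority cannot go negative

# Toy verdicts standing in for the institution we have yet to imagine:
verdicts = {"budget report": True, "port investigation": True, "viral scoop": False}
print(accrue_authority(verdicts, lambda piece: verdicts[piece]))  # 1 + 1 - 3 -> 0.0
```

The asymmetry is the design choice worth noting: the currency is slow to mint and quick to burn. But the sketch says nothing about who is entitled to call the verdicts, which is the whole problem.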

I am not sure. What I do feel is that this challenge – of moving from the flat information deserts to the rugged landscapes of truth – highlights some key difficulties in the work on misinformation.

Hannah Arendt on politics and truth – and fake news? (Notes on attention, fake news and noise #6)

Any analysis of fake news would be incomplete without a reading of Hannah Arendt’s magnificent essay Truth and Politics from 1967. In this essay, Arendt carefully examines the relationship between truth and politics, and makes a few observations that remind us why the issue of “fake news” is neither new nor uniquely digital. It is but an aspect of the greater challenge of reconciling truth and politics.

Arendt not only anchors the entire discussion solidly in a broader context; she also reminds us that this is a tension that has been with civilization since Socrates. “Fake news” is nothing but yet another challenge meeting us in the gap between dialectic and rhetoric, and Socrates would be surprised and dismayed to find us thinking we had discovered a new phenomenon. The issue of truth in politics has always been at the heart of our civilization and our democratic tradition.
Arendt notes this almost brutally in the beginning of her essay:

“No one has ever doubted that truth and politics are on rather bad terms with each other, and no one, as far as I know, has ever counted truthfulness among the political virtues. Lies have always been regarded as necessary and justifiable tools not only of the politician’s and the demagogue’s but also of the statesman’s trade.” (p 223)

It is interesting to think about how we read Arendt here. Today, as politics is under attack and we suffer from a rise of rhetoric and a decline of dialogue, we almost immediately become defensive. We want to say that we should not deride politics, that politics deserves respect, and that we should be careful not to further deepen people’s loss of faith in the democratic political system – and all of this is both correct and deeply troubling at the same time. It shows us that our faith in the robustness of the system has suffered so many blows that we now shy away from the clear-eyed realization that politics is rhetoric first and dialogue only second (and bad politics never gets to dialogue at all).

Arendt does not mean to insult our democracy; she merely restates a philosophical analysis that has remained constant over time. She recalls Hobbes’s observation that if power depended on the doctrine that the three angles of a triangle equal two angles of a square being false, the books of geometry would be burned. This is what politics is – power – and we should not expect anything else. That is why the education of our politicians is so important, and their character key. Socrates’ sense of urgency when he tries to educate Alcibiades is telling, and any reader of the dialogues is aware of the price of Socrates’ failure in what Alcibiades became.

Arendt also makes an interesting point about the difference between what she calls rational truths – the mathematical, the scientific – and the factual ones, and points out that the latter are “much more vulnerable” (p 227). And factual truth, she notes, is the stuff politics is made of.

“Dominion (to speak Hobbes’ language) when it attacks rational truth oversteps, as it were, its domain while it gives battle on its own ground when it falsifies or lies away facts.” (p 227)

Facts are fair game in politics, and always have been. Arendt then makes an observation that is key to understanding our challenges, and worth quoting in full:

“The hallmark of factual truth is that its opposite is neither error nor illusion nor opinion, not one of which reflects upon personal truthfulness, but the deliberate falsehood, or lie. Error, of course, is possible, and even common, with respect to factual truth, in which case this kind of truth is in no way different from scientific or rational truth. But the point is that with respect to facts there exists another alternative, and this alternative, the deliberate falsehood, does not belong to the same species as propositions that, whether right or mistaken, intend no more than to say what is, or how something that is appears to me. A factual statement – Germany invaded Belgium in August 1914 – acquires political implications only by being put in an interpretative context. But the opposite proposition, which Clemenceau, still unacquainted with the art of rewriting history, thought absurd, needs no context to be of political significance. It is clearly an attempt to change the record, and as such it is a form of action. The same is true when the liar, lacking the power to make his falsehood stick, does not insist on the gospel truth of his statement but pretends that this is his ‘opinion’ to which he claims his constitutional right. This is frequently done by subversive groups, and in a politically immature public the resulting confusion can be considerable. The blurring of the dividing line between factual truth and opinion belongs among the many forms that lying can assume, all of which are forms of action.
While the liar is a man of action, the truthteller, whether he tells a rational or factual truth, most emphatically is not.” (p 245)

Arendt is offering an analysis of our dilemma in as clear a way as can be. Lying is an action; telling the truth is most emphatically not; and the reduction of a falsehood to an opinion creates considerable confusion, to say the least. The insight that telling the truth is less powerful than lying – less of an action – is potentially devastating: liars have something at stake, and truth tellers sometimes make the mistake of thinking that relaying the truth is in itself enough.

But Arendt also offers a solution and hope – and it is evident even in this rather grim quote: she speaks of a politically immature public, and as she closes the essay she takes great pains to say that these lies, these falsehoods, in no way detract from the value of political action. In fact, she says that politics is a great endeavor, one worthy of our time, effort and commitment – but ultimately we also need to recognize that it is limited by truth. Our respect, as citizens, for truth is what preserves, she says, the integrity of the political realm.

As in the platonic dialogues, as in Hobbes, as everywhere in history – truth is a matter of character. Our own character, honed in dialogue and made resistant to the worst forms of rhetoric. This is not new – and it is not easy, and cannot be solved with a technical fix.

Link: https://idanlandau.files.wordpress.com/2014/12/arendt-truth-and-politics.pdf

Notes on attention, fake news and noise #5: Are We Victims of Algorithms? On Akrasia and Technology.

Are we victims of algorithms? When we click on clickbait and low-quality content, how much of the responsibility for that click is ours, and how much is the content provider’s? The way we answer that question may be connected to an ancient debate in philosophy about akrasia, or weakness of will. Why, philosophy asks, do we do things that are not good for us?

Plato’s Socrates has a rather unforgiving answer: we do those things that are not good for us because we lack knowledge. Knowledge, he argues, is virtue. If we just know what is right, we will act in the right way. When we click the low-quality entertainment content and waste our time, it is because we do not know better. Clearly, then, the answer from a platonic standpoint is to ensure that we enlighten each other. We need a version of digital literacy that allows us to separate the wheat from the chaff, that helps us know better.

In fact, arguably, weakness of will did not exist for Socrates (hence, perhaps, why he is so unforgiving): it was merely ignorance. Once you know, you will act right.

Aristotle disagreed: his view was that we may hold opinions that are short-term and wrong, be affected by them, and hence do things that are not good for us. This view, later developed and adumbrated by Davidson, suggests that decisions are often made without the agent considering all the things that may have a bearing on a choice. Davidson’s definition is something like this: if someone faces two choices a and b, and does b while knowing that, all things considered, a would be better than b, that is akrasia (not a quote, but a rendering of Davidson). Akrasia then becomes not considering the full set of facts that should inform the choice.
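Rendered semi-formally – the notation is mine, not Davidson’s, with “atc” short for “all things considered” – that rendering might read:

```latex
% Akrasia, roughly formalized; my notation, not Davidson's own.
% An act b is akratic when the agent does b, has a open as an
% alternative, and judges a better than b all things considered.
\[
  \mathrm{akratic}(b) \iff
  \mathrm{does}(b) \wedge \mathrm{open}(a) \wedge
  \mathrm{judges}\left( a \succ_{\mathrm{atc}} b \right)
\]
```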

Having one more beer without considering the previous ones, or having one more cookie without thinking about the plate now being empty.

The kind of akrasia we see in the technological space may be more like that: we trade short-term pleasure against long-term gain. A classic Kahneman/Tversky challenge. How do we govern ourselves?
So, how do we solve that? Can the fight against akrasia be outsourced? Designed into technology? It seems trivially true that it can, and this is exactly what tools like Freedom and StayFocusd try to do (there are many other versions, of course). These apps block sites, or the Internet as a whole, for a set amount of time, and force you back to focus on what you were doing. They eliminate the distraction of the web – but they are not clearly helping you consume high-quality content.
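As a toy illustration of this kind of access-level blocking – not how Freedom or StayFocusd are actually implemented, and the paths and domains below are assumptions – consider a sketch that makes a list of sites unreachable for a fixed period and then restores them:

```python
import time

HOSTS_PATH = "/etc/hosts"   # assumes a Unix-like system and root privileges
REDIRECT = "127.0.0.1"      # resolve blocked sites to the local machine
BLOCKLIST = ["feed.example.com", "videos.example.com"]  # hypothetical distractions

def block_for(minutes: float) -> None:
    """Make the blocklisted hosts unreachable for a while, then restore."""
    with open(HOSTS_PATH) as f:
        original = f.read()
    entries = "".join(f"{REDIRECT} {host}\n" for host in BLOCKLIST)
    try:
        with open(HOSTS_PATH, "a") as f:
            f.write("\n# focus block start\n" + entries + "# focus block end\n")
        time.sleep(minutes * 60)  # the "set amount of time"
    finally:
        with open(HOSTS_PATH, "w") as f:
            f.write(original)  # access restored; consumption was never judged

if __name__ == "__main__":
    block_for(25)  # one enforced stretch of focus
```

The limitation of the approach is visible in the final comment: the script governs access for a while, but never looks at what you actually consume once the block lifts.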

That is a distinction worth exploring.

Could we make a distinction here between access and consumption? We can help fight akrasia at the access level, but it is harder to do when it comes to consumption. Like not buying chocolate so that there is none in your fridge, versus simply refraining from eating the chocolate that is in the fridge. It seems easier to do the first – reduce access – than to control consumption. One is a question of availability, the other of governance. A discrete versus a continuous temptation, perhaps.

It seems easy to fight discrete akrasia, but sorting out continuous akrasia seems much harder.

*

Is it desirable to try? Assume that you could download a technology that would only show you high-quality content on the web. Would you install it? A splinternet provider that offers “quality Internet only – no clickbait or distractions”. It would not have to be permanent: you could set hours for distraction, or allocate hours to your kids. Is that an interesting product?

The first question you would probably ask is why you should trust this particular curator. Why should you allow someone else to determine what is high quality? Well, assume that this challenge can be met by outsourcing it to a crowd, where you self-identify values and ideas of quality and are matched with others of the same view. Assume also, while we are at it, that you can do this without the resulting filter-bubble problem. Would you – even under those assumptions – trust the system?

The second question is how such a system could cope with a dynamic in which the rate of information production keeps doubling. Collective curation models need to deal with the challenge of marking an item as ok or not ok – but the largest category will be a third: not rated, as the sketch below illustrates. A bet on collective curation is a bet that the value of the not-yet-rated will always be less than the cost of the possible distraction it carries. That is an unclear bet, it seems to me.
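A sketch of why that third category dominates the design – the ratings and feed are invented:

```python
from enum import Enum

class Rating(Enum):
    OK = "ok"
    NOT_OK = "not ok"
    UNRATED = "unrated"  # in a fast-growing feed, soon the largest category

def curated_view(feed: list[tuple[str, Rating]], show_unrated: bool) -> list[str]:
    """The real policy choice is not ok/not-ok, but what to do with unrated items."""
    visible = []
    for item, rating in feed:
        if rating is Rating.OK:
            visible.append(item)
        elif rating is Rating.UNRATED and show_unrated:
            visible.append(item)  # betting the item's value exceeds its distraction cost
    return visible

feed = [
    ("vetted budget analysis", Rating.OK),
    ("known clickbait", Rating.NOT_OK),
    ("breaking story, not yet reviewed", Rating.UNRATED),
]
print(curated_view(feed, show_unrated=False))  # safe, but blind to the new
print(curated_view(feed, show_unrated=True))   # fresh, but the bet is now yours
```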

The third question is how sensitive you would be to deviations. In any collectively curated system, a certain percentage of the content is still going to be what you consider low quality. How much such content would you tolerate before you ditch the system? And how much content that you consider high quality would you accept being made unavailable? How sensitive are you to the smoothing effects of the collective curation mechanism, both in exclusion and in inclusion? I suspect we are much more sensitive than we allow for.

Any anti-akrasia technology based on curation – even collective curation – would have to deal with those issues, at least. And probably many others.

*

Maybe it is also worth thinking about what it says about our view of human nature if we believe that solutions to akrasia need to be engineered. Are we permanently flawed, or is the fight against akrasia something that actually has corona effects in us – character-building effects – that we should embrace?

Building akrasia away is different from developing the self-discipline to keep it in check, is it not?

Any problem that can be rendered as an akrasia problem – and that goes, perhaps, even for issues of fake news and similar content related conundrums – needs to be examined in the light of some of these questions, I suspect.

Notes on attention, fake news and noise #4: Jacques Ellul and the rise of polyphonic propaganda part 1

Jacques Ellul is arguably one of the earliest and most consistent technology critics we have. His texts are due for a revival at a time when technology criticism is in demand, and even techno-optimists like myself would probably welcome that: even if he is fierce and often caustic, he is interesting and thoughtful. Ellul had a lot to say about technology in books like The Technological Society and The Technological Bluff, but he also discussed the effects of technology on social information and news. In his bleak little work Propaganda: The Formation of Men’s Attitudes (New York 1965; French original 1962) he examines how propaganda draws on technology, and how the propaganda apparatus shapes views and opinions in a society. There are many salient points in the book, and quotes worth debating.

That said, Ellul is not an easy read or an uncontroversial thinker. Here is how he connects propaganda and democracy, arguing that state propaganda is necessary to maintain democracy:

“I have tried to show elsewhere that propaganda has also become a necessity for the internal life of a democracy. Nowadays the State is forced to define an official truth. This is a change of extreme seriousness. Even when the State is not motivated to do this for reasons of actions or prestige, it is led to it when fulfilling its mission of disseminating information.

We have seen how the growth of information inevitably leads to the need for propaganda. This is truer in a democratic system than in any other.

The public will accept news if it is arranged in a comprehensive system, and if it does not speak only to the intelligence but to the ‘heart’. This means, precisely, that the public wants propaganda, and if the State does not wish to leave it to a party, which will provide explanations for everything (i.e. the truth), it must itself make propaganda. Thus, the democratic State, even if it does not want to, becomes a propagandist State because of the need to dispense information. This entails a profound constitutional and ideological transformation. It is, in effect, a State that must proclaim an official, general, and explicit truth. The State can no longer be objective or liberal, but is forced to bring to the overinformed people a corpus intelligentiae.”

Ellul says, in effect, that in a noise society there is always propaganda – the question is who is behind it. It is a grim world view, in which a State that yields the responsibility to engage in propaganda yields it to someone else.

Ellul comments, partly wryly, that the only way to avoid this is to allow citizens three to four hours a day to engage in becoming better citizens, and to reduce the working day to four hours – a solution he seems to agree is simplistic and unrealistic, and one that would require citizens to “master their passions and egotism”.

The view raised here is useful because it clearly states a view that sometimes seems to underlie the debate we are having – that there is a necessity for the State to become an arbiter of truth (or to designate one), or someone else will take that role. The weakness in this view is a weakness that plagues Ellul’s entire analysis, however, and in a sense our problem is worse. Ellul takes, as his object of study, propaganda from the Soviet Union and Nazi Germany. His view of propaganda is largely monophonic. Yes, technology still pushes information on citizens, but in 1965 it did so unidirectionally. Our challenge is different and perhaps more troubling: we are dealing with polyphonic propaganda. The techniques of propaganda are employed by a multitude of parties, and the net effect is not to produce truth – as Ellul would have it – but to eliminate the conditions for truth. Truth is no longer viable in a set of mutually contradictory propaganda systems; it is reduced to mere feelings and emotions: “I feel this.” “This is my truth.” “This is the way I feel about it.”

In this case the idea that the state should speak, too, is radically different, because the state, or any state-appointed arbiter of truth, just adds to the polyphony of voices and provides them with another voice to enter into a polemic with. It fractures the debate even more, and allows for a special category of meta-propaganda that targets the way information is interpreted overall: the idea of a corridor of politically correct views that we have to exist within. Our challenge, however, is not the existence of such a corridor, but the fact that it is impossible to establish a coherent, shared model of reality, and hence to decide what the facts are.

An epistemological community must rest on a fundamental cognitive contract, an idea about how we arrive at facts and the truth. It must contain mechanisms of arbitration that are institutions in themselves, independent of political decision making and commercial interest. The lack of such a foundation means that no complex social cognition is possible. That in itself is devastating to a society, one could argue, and is what we need to think about.

It is no surprise that I take issue with Ellul’s assertion that technology is at the heart of the problem, but let me at least outline the argument I think Ellul would have to deal with if he were revising his book for our age. I would argue that in a globalized society, the only way we can establish that basic epistemological foundation to build on is through technology and collaboration within new institutions. I have no doubt that the web could carry such institutions, just as it carries Wikipedia.

There is an interesting observation about the web here, one that sometimes puzzles me. The web is simultaneously the most collaborative environment constructed by mankind and the most adversarial. The web and the Internet would not exist but for the protocol agreements that have emerged as their basis (this is examined commendably in David Post’s excellent book Jefferson’s Moose). At the same time, the web is a constant arms race around different uses of this collaboratively enabled technology.

Spam is not an aberration or an anomaly, but can be seen as an instance of a generalized, platonic pattern in this space – a pattern that recurs throughout many different domains and has started to climb the semantic layers, from simple commercial scams to the semiosphere of our societies, where memes compete for attention and propagation. And the question is not how to compete best, but how to continue to engage in institutional, collaborative and, yes, technological innovation to build stronger protections and counter-measures. What is to disinformation as spam filters are to unwanted commercial email? It cannot be mere spam filters with new keywords; it needs to be something radically new, and most likely institutional in the sense that it requires more than just technology.

Ellul’s book provides a fascinating take on propaganda and is required reading for anyone who wants to understand the issues we are working on. More on him soon.

Notes on attention, fake news and noise #3: The Noise Society 10 years later

This February it is 10 years since I defended my doctoral thesis on what I then called the noise society. The main idea was that the vision of an orderly, domesticated and controllable information society – modeled on the post-industrial visions of Bell and others – was probably wrongheaded, and that we would instead see a much wilder society characterized by an abundance of information and a lack of control; in fact, we would see information grow to the point where its value collapsed as the information itself collapsed into noise. Noise, I felt then, was a good description not only of individual disturbances in the signal, but also of the overall cost of signal discovery. A noise society would face very different challenges than an information society.

Copyright in a noise society would not be an instrument of encouraging the production of information so much as a tool for controlling and filtering information in different ways. Privacy would not be about controlling data about us as much as having the ability to consistently project a trusted identity. Free expression would not be about the right to express yourself, but about the right not to be drowned out by others. The design of filters would become key in many different ways.

Looking back now, I feel that I was right in some ways and wrong in many, but that the overall conclusion – that the increase in information, and the consequences of this information wealth, is at the heart of our challenges with technology – was not far off target. What I am missing in the thesis is a better understanding of what information does. My focus on noise was a consequence of accepting that information was a “thing” rather than a process. Information looks like a noun, but is really a verb.

Revisiting these thoughts, I feel that the greatest mistake was not including Herbert Simon’s analysis of attention as a key concept in understanding information. Had I done that, I would have seen that noise, too, is a process; I would have been able to ask what noise does to a society, to theorize that, and to think about how to frame policy arguments in the light of attention scarcity. That would have been a better way to get at what I was trying to understand at the time.

But, luckily, thought is about progress and learning, not about being right – so what I have been doing in my academic reading and writing for at least the last three years is to emphasize Herbert Simon’s work, and the importance of his major finding: that with a wealth of information comes a poverty of attention, and a need to allocate attention efficiently.

I believe this can be generalized: the information wealth we are seeing is just one aspect of an increasing complexity in our societies. The generalized Simon theorem is this: with a wealth of complexity comes a poverty of cognition, and a need to learn efficiently. Simon, in his 1969 talk on this subject, notes that it is only by investing in artificial intelligence that we can do this, and says that it is obvious to him that the purpose of all our technological endeavours is to ensure that we learn faster.

Learning, adapting to a society where our problems are an order of magnitude more complex, is key to survival for us as a species.
It follows that I think the current focus on digitization and technology is a mere distraction. What we should be doing is re-organizing our institutions and societies for learning more, and faster. This is where the theories of Hayek and others on knowledge coordination become helpful and important for us, and our ideological discussions should focus on whether we are learning as a society or not. There is a wealth of unanswered questions here – how we measure the rate of learning, what the opposite of learning is, how we organize for learning, how technology can help and how it harms learning – questions we need to dig into and understand at a very basic level, I think.

So, looking back at my dissertation – what do I think?

I think I captured a key way in which we were wrong, and I arrived at a better model – but the model I was working with then was still fatally flawed. It focused on information as a thing, not a process, and construed noise as gravel in the machinery. The focus on information also detracts from the real use cases and the purpose of all the technology we see around us. If we were, for once, to take our ambitions “to make the world a better place” seriously, we would have to think about what it is that makes the world better. What is the process that does that? It is not innovation as such; innovation can go both ways. The process that makes our worlds better – individually and as societies – is learning.

In one sense I guess this is just an exercise in conceptual modeling, and the question I seem to be answering is which conceptual model is best suited to understanding and discussing issues of policy in the information society. That is fair, and a kind of criticism that I can live with: I believe concepts are crucially important, and before we have clarified what we mean, we are unable to move at all. But there is a risk here that I recognize as well, and that is that we get stuck in analysis paralysis. What, then, are the recommendations that flow from this analysis?

The recommendations could be surprisingly concrete for the three policy areas discussed above, and I leave it as an exercise for the reader to think about them. How would you change the data protection frameworks of the world if the key concern was to maximize learning? How would you change intellectual property rights? Free expression? All are interesting to explore and to solve in the light of that one goal. I tend to believe that the regulatory frameworks we would end up with would be very different from the ones we have today.

As one part of my research as an adjunct professor at the Royal Institute of Technology I hope to continue exploring this theme and others. More to come.

Notes on attention, fake news and noise #2: On the non-linear value of speech and freedom of dialogue or attention

It has become more common to denounce the idea that more speech means better democracy. Commentators, technologists and others have come out to say that they were mistaken – that their belief that enabling more people to speak would improve democracy was wrong, or at the very least simplistic. It is worth analyzing what this really means, since it is a reversal of one of the fundamental hopes the information society vision promised.

The hope was this: that technology would democratize speech and that a multitude of voices would disrupt and displace existing, incumbent hierarchies of power. If the printing press meant that access to knowledge exploded in western society, the Internet meant that the production of knowledge, views and opinions now was almost free and frictionless: anyone could become a publisher, a writer, a speaker and an opinion maker.

To a large extent this is what has happened. Anyone who wants to express themselves today can fire up their computer, comment on a social network, write a blogpost or tweet and share their words with whoever is willing to listen – and therein lies the crux. We have, historically, always focused on speech because the scarcity we fought was one of voice: it was hard to speak, to publish, to share your opinion. But the reality is that free speech or free expression just form one point in a relationship – for free speech to be worth anything someone has to listen. Free speech alone is the freedom of monologue, perhaps of the lunatic raving to the wind or the sole voice crying out in the desert. Society is founded upon something more difficult: the right to free dialogue.

You may argue that this is a false and pernicious dichotomy: the dialogue occurs when someone chooses to listen, and no one is, today, restricted from listening to anyone – so why should we care about the listening side of dialogue? The only part that needs to be safeguarded, you may say, is the right to speak. All else follows.

This is where we may want to dig deeper. If you speak, can everyone listen? Do they want to? Do you have a right to be listened to? Do you have a right to be heard that corresponds to your right to speak? Is there, in fact, a duty to listen that precedes the right to speak?

We enter difficult territory here, but with the increasing volume of noise in our societies, this question becomes more salient than ever before. A fair bit of that noise is in fact speech, from parties that use speech to drown out other speech. Propaganda and censorship are difficult in a society characterized by information wealth and abundance, but noise that drowns out speech is readily available: not control, but excess – flooding, and silencing through shouting others down – those are the threats of our age.

When Zeynep Tufekci analyzes free speech in a recent Wired article, she notes that even if it is a democratic value, it is not the only one. There are other values as well. That is right, but we could also ask whether we have understood the value at play here in the right way. Tufekci’s excellent article goes on to note that there is a valuable distinction between attention and speech, and that there is no right to attention. Attention is something that needs to be freely given, and much of her article asks the legitimate question of whether current technologies, platforms and business models allow us to allocate attention freely. We could ask whether what she is saying implies that there is a freedom of attention to be examined here as well.

When someone says that the relationship between free expression and the quality and robustness of a democracy is non-linear, they can be saying many different things. There is a tendency to think that what we need to accept is a balancing of free speech and free expression against other values that we have been neglecting. We could, however, equally say that we have misunderstood the fundamental nature and structure of the value we are trying to protect.

Just because the bottleneck used to be speech (and Tufekci makes this point as well), we focused there. What we really wanted was perhaps free dialogue, built on free speech and the right to freely allocate one’s attention as one sees fit. Or maybe what we wanted was the freedom to participate in democratic discourse – something that is, again, different.

Why, then, is this distinction important? Perhaps because the assumption that the underlying value we are trying to protect is constant – the idea that free speech is well understood and should just be “balanced” – leads us into solution spaces where we unduly harm the values we would like to protect. By examining alternative legal universes where a right to dialogue, a right to free attention, a right to democratic discourse et cetera could exist, we examine and start from that value, rather than give up on it and slip into the language of balancing and restricting.

There is something else here that worries me, and that is that sometimes there is almost a sense that we are mere victims of speech, information overload and distraction – that we have no choice, and that this choice needs to be designed, architected and prescribed for us. In its worst forms, this assumption derives the need to balance speech from democratic outcomes and people’s choices. It assumes that something must be wrong with free speech because people are making choices we do not agree with, so they must be victims; they do not know what they are doing. This assumption – admittedly exaggerated here – worries me greatly, and highlights another complexity in our set of problems.

How do we know when free speech is not working? What are the indications that the quality of democracy is not increasing with the amount of speech available in a community? It cannot just be that we disagree with the choices made in that democracy, so what could we be looking for? A lack of commitment to democracy itself? A lack of respect for its institutions?
As we explore this further, and examine other possible consistent sets of rights around opinion making, speech, attention, dialogue and democratic discourse we need to start sorting these things out too.

Just how do we know that free speech has become corrosive noise and is eroding our democracy? And how much of that is technology’s fault and how much is our responsibility as citizens? That is no easy question, but it is an important one.

(Picture credit: John W. Schulze CC-attrib)