Real and unreal news (Notes on attention, fake news and noise #7)

What is the opposite of fake news? Is it real news? What, then, would that mean? It seems important to ask that question, since our fight against fake news also needs to be a fight _for_ something. But this quickly becomes an uncomfortable discussion, as evidenced by how people approach the question. When we discuss what the opposite of fake news is, we often end up defending facts – and we inevitably end up quoting Senator Moynihan, smugly saying that everyone is entitled to their own opinions, but not to their own facts. This is right as far as it goes, but it ducks the key question of what a fact is, and whether a fact can exist on its own.

Let’s offer an alternative, more problematic view. In this view we argue that facts can only exist in relation to each other. They are intrinsically connected in a web of knowledge and probability, and this web rests on a set of ontological premises that we call reality. Fake news – we could then argue – can exist only because we have lost our sense of a shared reality.

We hint at this when we speak of “a baseline of facts” or similar phrases (this was how Obama referred to the challenge when interviewed by David Letterman recently), but we stop shy of admitting that we are ultimately caught up in a discussion about a fractured reality. Our inability to share a reality creates the cracks, the fissures and fragments in which truth disappears.

This view has more troubling implications, and should immediately lead us to also question the term “fake news”, since the implication is clear – something can only be fake if there exists a shared reality against which we can test it. The reason the term “fake news” is almost universally shunned by experts and analysts of the issue is exactly this: it is used by different people to attack whatever they do not like. We see leaders labeling news sources “fake news” as a way to demarcate themselves against a rendering of the world that they reject. So “fake” comes to mean “wrong”.

Here is a key to the challenge we are facing. If we see this clearly – that what we are struggling with is not fake vs real news, but right vs wrong news – we also realize that there are no good solutions for the general problem of what is happening with our public discourse today. What we can find are narrow solutions for specific, well-described problems (such as actions against deliberately misleading information from parties that misrepresent themselves), but the general challenge is quite different and much more troubling.

We suffer from a lack of shared reality.

This is interesting from a research standpoint, because it forces us to ask how a society constitutes a reality, and how it loses it. Such an investigation would need to touch on things like reality TV and the commodification of journalism (à la Adorno’s view of music – it seems clear that journalism has lost its liturgy). One would need to dig into how truth has splintered, and think hard about how our coherence theories of truth allow for this splintering.

It is worthwhile to pause on that point a little: when we understand the truth of a proposition to be its coherence with a system of other propositions, and not its correspondence with an ontologically more fundamental underlying level, we open the door to several different truths, as long as we can imagine a set of coherent systems of propositions built on a few basic propositions – the baseline. What we have discovered in the information society is that the natural size of this necessary baseline is much smaller than we thought. The set of propositions we need in order to create alternate realities without seeming entirely insane is much smaller than we may have believed. And the cost of creating an alternate reality keeps falling as we gain more and more access to information, as well as to the creativity of others engaged in the same enterprise.
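To make the coherence point concrete, here is a toy sketch (entirely my own construction, with invented propositions): two internally coherent but mutually incompatible worldviews, each perfectly consistent with the same tiny shared baseline.

```python
from itertools import product

# Propositions (invented for illustration): a = "the vote was counted",
# b = "the media reports it faithfully", c = "the count was manipulated".
BASELINE = {"a": True}  # the only proposition everyone shares

def coherent(world: dict, constraints) -> bool:
    """A world is coherent if it extends the baseline and violates no rule."""
    if any(world.get(k) != v for k, v in BASELINE.items()):
        return False
    return all(rule(world) for rule in constraints)

# Each camp's internal rules (again, purely illustrative):
mainstream = [lambda w: w["b"] == w["a"], lambda w: not w["c"]]
conspiracy = [lambda w: w["c"], lambda w: w["b"] != w["c"]]

for name, rules in [("mainstream", mainstream), ("conspiracy", conspiracy)]:
    worlds = [dict(zip("abc", vals)) for vals in product([True, False], repeat=3)]
    print(name, [w for w in worlds if coherent(w, rules)])
```

Both camps pass the coherence test on the same baseline, yet disagree on everything else – which is the sense in which a small baseline comes cheap.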

There is a risk that we underestimate the collaborative nature of the alternative realities crafted around us – the way they are the result of a collective creative effort. Just as we have seen the rise of massive open online courses in education, we have seen the rise of what we could call massive open online conspiracy theories. They are powered by, and partly created in, the same way – with massive open online role-playing games in an interesting middle position. In a sense, the unleashed creativity of our collaborative storytelling is what is fracturing reality – our narrative capacity has exploded over the last few decades.

So back to our question. The dichotomy we are looking at here is not one between fake and real news, or right and wrong news (although we do treat it that way sometimes). It is in a sense a difference between real and unreal news, but with a plurality of unrealities that we struggle to tell apart. There is no Archimedean point that allows us to lift the real from the fake, no bedrock foundation, as reality itself has been slowly disassembled over the last couple of decades.

A much more difficult question, then, is whether we actually want a shared reality – and whether we ever had one. It is a recurring theme in songs, literature and poetry – the shaky nature of our reality, and the courage needed to face it. This is well expressed by Nine Inch Nails in the remarkable song “Right Where It Belongs” (and remarkably rendered in this remix (we remix reality all the time)):

See the animal in his cage that you built
Are you sure what side you’re on?
Better not look him too closely in the eye
Are you sure what side of the glass you are on?
See the safety of the life you have built
Everything where it belongs
Feel the hollowness inside of your heart
And it’s all right where it belongs

What if everything around you
Isn’t quite as it seems?
What if all the world you think you know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself, find yourself afraid to see?

What if all the world’s inside of your head?
Just creations of your own
Your devils and your gods all the living and the dead
And you really oughta know
You can live in this illusion
You can choose to believe
You keep looking but you can’t find the woods
While you’re hiding in the trees

What if everything around you
Isn’t quite as it seems?
What if all the world you used to know
Is an elaborate dream?
And if you look at your reflection
Is it all you want it to be?
What if you could look right through the cracks
Would you find yourself, find yourself afraid to see?

The central insight here is one that underlies all of our discussions around information, propaganda, disinformation and misinformation: the role of our identity. We exist – as facts – within the realities we dare to accept, and ultimately our flight into alternate realities and shadow worlds is an expression of our relationship to ourselves.

Towards a glass bead game (The Structure of Human Knowledge as Game I)

Hermann Hesse’s glass bead game is an intriguing intellectual thought experiment. He describes it in detail in his eponymous last novel:

“Under the shifting hegemony of now this, now that science or art, the Game of games had developed into a kind of universal language through which the players could express values and set these in relation to one another. Throughout its history the Game was closely allied with music, and usually proceeded according to musical and mathematical rules. One theme, two themes, or three themes were stated, elaborated, varied, and underwent a development quite similar to that of the theme in a Bach fugue or a concerto movement. A Game, for example, might start from a given astronomical configuration, or from the actual theme of a Bach fugue, or from a sentence out of Leibniz or the Upanishads, and from this theme, depending on the intentions and talents of the player, it could either further explore and elaborate the initial motif or else enrich its expressiveness by allusions to kindred concepts. Beginners learned how to establish parallels, by means of the Game’s symbols, between a piece of classical music and the formula for some law of nature. Experts and Masters of the Game freely wove the initial theme into unlimited combinations.”

The idea of the unity of human knowledge, the thin threads that spread across different domains, the ability to connect seemingly disparate intellectual accomplishments — can it work? What does it mean for it to work?

On one level we could say that it is simple – it is a game of analogy, and we only need to feel that there is a valid analogy between two different themes or things to assert them as “moves” in the game. We could say that the proof of the existence of an infinitude of primes is related to Escher’s paintings, and argue that the infinite is present in both. The game – at its absolute lower boundary – is nothing more than an inspiring, collaborative intellectual essay. A game, then, consists of first stating the theme you wish to explore, after which each player makes moves by suggesting knowledge that can be associated by analogy, in sequence, to the theme. This in itself can be quite interesting, I imagine, but it really is a lower boundary. The idea of the glass bead game being a game suggests that there is a way to judge progress in it, to juxtapose one game against another and argue that it is more masterful than the other.

Think about chess – it is possible to argue that one game in a Game (capital-G Game being the particular variant of gaming, like chess, go or a board game) is more exciting and valuable than another, is it not? On what basis do we actually do that? Is it the complexity of the game? The beauty of the moves? How unusual it is? The lack of obvious mistakes? Why is a game between Kasparov and Karpov more valuable in some sense than a game between me and a computer (if we ignore, for a moment, the idea that a game between humans would have an intrinsically higher value than one involving computers – something that seems dubious at best)? How do we ascribe value in the domain of games?

The aesthetic answer is only half-satisfying, it seems to me. I feel that there is also a point to be made about complexity, or about the game revealing aspects of the Game that were previously not clearly known. Maybe we could even state a partial answer by saying that an unusual game is more valuable than one that closely resembles already played games. Doing this means assigning a value to freshness, newness or simply variational richness. If we imagine the game space of a Game, we could argue that a game from an unexplored part of that space has greater value. This idea – that the difference between a game and the corpus of played games could be a value in itself – is not a bad one, and has actually been suggested as an alternative ground for intellectual property protection in the guise of originality (there always has to be an originality threshold, but the idea extends beyond that). A piece that is significantly different from the corpus (measured by mining the patterns of the corpus and producing a differential, say) could then be protected for longer, or with broader scope, than one that is just like every other work in the corpus.

So, we could ascribe value through originality – through an analysis of the differential between a game and the corpus of played games (something like this seems to be going on in the admiration for AlphaGo’s games in the go community — there is a recognition that they represent an original, almost alien, way of playing go).
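As a rough illustration of such a value function, here is a minimal sketch – my own construction, not an established method – that scores a game’s originality as its distance from the corpus of already played games:

```python
import numpy as np

def originality(game_vec: np.ndarray, corpus: np.ndarray) -> float:
    """Distance to the nearest already-played game; higher = more original."""
    dists = np.linalg.norm(corpus - game_vec, axis=1)
    return float(dists.min())

rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 16))   # 1000 played games, 16 features each
derivative_game = corpus[42] + 0.01    # barely differs from a known game
fresh_game = rng.normal(size=16) * 3   # drawn from an unexplored region

print(originality(derivative_game, corpus))  # small: low value under this rule
print(originality(fresh_game, corpus))       # large: high value under this rule
```

Everything here depends, of course, on how games are turned into feature vectors – which is exactly the hard part that the prose glosses over.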

But originality only gets you so far in the glass bead game. I am sure no one has argued that Nietzsche’s theory of eternal recurrence can be linked to Joanna Newsom’s song “Peach, Plum, Pear” – but the originality of that association almost _lessens_ the value of the move in a glass bead game. There is an originality value function, but it exists within the boundaries of something else: a common recognition of the validity of the move we are trying to make within the theme we are exploring. So there has to be consistency with the theme, as well as originality within that consistency.

Let’s examine an imaginary example game and see if we can reconstruct some ideas from it. Let us say that the theme is broad: the interplay between black and white in human knowledge. That theme is incredibly broad, but also specific enough to provide the _frame_ we need in order to start working out possible moves. A valid move could be to associate Rachmaninov’s piece Isle of the Dead with Eisenstein’s principle for the use of color in film (“Hence, the first condition for the use of color in a film is that it must be, first and foremost, a dramatic factor. In this respect color is like music. Music in films is good when it is necessary. Color, too, is good when it is necessary.”). By noting that Rachmaninov wrote his piece after having seen Böcklin’s painting The Isle of the Dead – but only in a black-and-white reproduction – and adding that he was then disappointed with the color of the original, we could devise the notion of the use of black and white in non-visual arts and science, and then start to look for other examples of art and knowledge that seem to be inspired by or connected to the same binary ideas – testing ideas around two-dimensional Penrose tilings, the I Ching, the piano keys, understanding the relationship to chess, and exploring the general architecture and design of other games like go, backgammon and Othello… There is a consistency here, and you could argue that the moves are more or less original. The move from go to Othello is less original than the move from Isle of the Dead to the I Ching (and then we could go back to other attempts to compose with the I Ching in a return move to the domain of music, after which we could land on Leibnizian ideas inspired by that same book – it would seem that the binary nature of the I Ching could serve as an anchor point in such a game).

It quickly becomes messy. But interesting. So the first two proto-rules of the game seem to be that we need originality within consistency. As we continue to explore possible rules and ideas, we will at some point have to look at whether there is an underlying structure that connects them. I would be remiss if I did not also reveal why I am interested in this: I wonder if there is something akin to a deep semiotic network of symbols that could be revealed by expanding machine translation to the domain of human knowledge overall. As has been documented, machine learning can now use the deep structure of language to translate between two languages through an “interlingua”. At the heart of the glass bead game lies the deceptively simple idea that there is such an interlingua between all domains of human knowledge – but can that be true?
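If such an interlingua existed – a big if – a move in the game might look like a nearest-neighbour hop across domains in a shared embedding space. The sketch below is purely speculative; the embeddings and titles are invented placeholders:

```python
import numpy as np

# Hypothetical shared embedding space over works from different domains.
EMBEDDINGS = {
    ("music", "Bach fugue in C minor"): np.array([0.9, 0.1, 0.3]),
    ("math", "Proof: infinitude of primes"): np.array([0.8, 0.2, 0.4]),
    ("art", "Escher, ascending stairs"): np.array([0.85, 0.15, 0.35]),
    ("literature", "Borges, Library of Babel"): np.array([0.2, 0.9, 0.1]),
}

def next_move(theme_vec: np.ndarray, current_domain: str):
    """Suggest the closest work from any *other* domain - an analogy hop."""
    candidates = [(k, v) for k, v in EMBEDDINGS.items() if k[0] != current_domain]
    return min(candidates, key=lambda kv: np.linalg.norm(kv[1] - theme_vec))[0]

print(next_move(EMBEDDINGS[("music", "Bach fugue in C minor")], "music"))
```

The open question is whether a single space of this kind can exist at all for knowledge as a whole – which is exactly what the glass bead game wagers on.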

The glass bead game – and the attempt to construct one – is a powerful plaything with which to start exploring that question.

Simone Weil’s principles for automation (Man / Machine VI)

The philosopher and writer Simone Weil laid out a few principles on automation in her fascinating and often difficult book The Need for Roots. Her view was positive, and she noted that among factory workers the happiest ones seemed to be those who worked with machines. She had strict views on the design of these machines, however, and her views can be summarized in three general principles.

First, these tools of automation need to be safe. Safety comes first, and should also be weighed when deciding what to automate first – the idea that automation can be used to protect workers is an obvious but sometimes neglected one.

Second, the tools of automation need to be general purpose. This is an interesting principle, and one that is not immediately obvious. Weil felt this was important – when it came to factories – because they could then be repurposed for new social needs and respond to changing social circumstances – most pressingly, and in her time most acutely, war.

Third, the machine needs to be designed so that it is used and operated by man. The idea that you would substitute machine for man she found ridiculous for several reasons, not least because we need work to find purpose and meaning, and any design that eliminates us from the process of work would be socially detrimental.

All of Weil’s principles are applicable, and up for debate, in our time. I think the safety principle is fairly well accepted, but we should note that she speaks of individual safety and not our collective safety. In cases where a technology for automation could pose a challenge to broader safety concerns, Weil does not provide us with a direct answer. These need not be apocalyptic scenarios at all; they could simply be systemic failures of connected automation technologies, for example. Systemic safety, individual safety and social safety are all interesting dimensions to explore here – are silicon/carbon hybrid models always safer, more robust, more resilient?

The idea of machines that are general purpose and easy to repurpose is, I think, reflected in how we have seen 3D printing evolve. One idea of 3D printing is exactly this: generic factories that can manufacture anything. But another observation close at hand is that you could imagine Weil’s principle as an argument for general artificial intelligence. Admittedly this is taking it very far, but there is something to it: a general AI/ML model can be broadly and widely taught, and we would avoid narrow guild experts emerging in our industries. That would, in turn, allow for quick learning and evolution as technologies, needs and circumstances change. General-purpose technologies for automation would allow us to change and adapt faster to new ideas, challenges and selection pressures – and would serve us well in a quickly changing environment.

The last point is one that we will need to examine closely. Should we consider it a design imperative to design for complementarity rather than substitution? There are strong arguments for this, not least cost arguments. Any analysis of a process that we want to automate will yield a silicon–carbon cost function that gives us the cost of the process as different parts of it are performed by machines and humans. A hypothesis would be that for most processes this equation will see a distribution across the two, and only for very few will we see a cost equation where the human component is zeroed out – not least because human intelligence is produced at extraordinarily low energy cost and with great resilience. There is even a risk-mitigation argument here — you could argue that always including a human element, or designing for complementarity, necessarily generates more resilient and robust systems, as the failure paths of AIs and human intelligence look different and are triggered by different kinds of factors. If, for any system, you can allow for different failure triggers and paths, you seem to ensure that the system self-monitors effectively and reduces risk.
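A minimal sketch of what such a silicon–carbon cost function might look like, with invented numbers; the resilience penalty encodes the hypothesis above that monocultures (all-machine or all-human) share failure modes:

```python
import numpy as np

def process_cost(alpha: float,
                 machine_unit_cost: float = 1.0,
                 human_unit_cost: float = 3.0,
                 correlated_failure_penalty: float = 5.0) -> float:
    """Total cost of a process where alpha in [0, 1] is the machine share."""
    base = alpha * machine_unit_cost + (1 - alpha) * human_unit_cost
    # Hypothetical penalty: zeroing out either side removes the diversity of
    # failure paths, so resilience is worst at the extremes.
    resilience_penalty = correlated_failure_penalty * (alpha * (1 - alpha) == 0)
    return base + resilience_penalty

alphas = np.linspace(0, 1, 11)
best = min(alphas, key=process_cost)
print(f"cheapest machine share under these assumptions: {best:.1f}")
```

Under these (made-up) parameters the optimum lands strictly inside the interval – a toy version of the claim that the human component is rarely zeroed out.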

Weil’s focus on automation is also interesting. Today, in many policy discussions, we see the emergence of principles on AI. One could argue that this is technology-centric principle making, and that ethical and philosophical principles better suit the use of a technology – that use-centric principles are more interesting. The use case of automation is admittedly a broad one, but an interesting one to test this on and see if salient differences emerge. How we choose to think about principles also forces us to think about the way we test them. An interesting exercise is to compare with other technologies that have emerged historically. How would we think about principles on electricity, computation, steam — ? Or principles on automobiles, telephones and telegraphs? Where do we most effectively place principles to construct normative landscapes that benefit us as a society? Principles for driving, for communicating, for selling electricity (and using it, and certifying devices, etc. (oh, we could actually have a long and interesting discussion about what it would mean to certify different ML models!)).

Finally, it is interesting to think about the function of work from the standpoint of moral cohesion. Weil argues that we have no rights but for the duties we assume. Work, we could add, is a foundational duty that allows us to build those rights. There is a complicated and interesting argument here that ties rights to duties to human work in societies from a sociological standpoint. The discussions about universal basic income are often conducted in sociological isolation, without thinking about the network of social concepts tied up in work. If there is, as Weil assumes, a connection on an almost metaphysical level between our work and duties and the rights a society upholds, we need to re-examine our assumptions here – and look carefully at complementarity design as a foundational social design imperative for just societies.

Justice, markets, dance – on computational and biological time (Man / Machine V)

Are there social institutions that work better if they are biologically bounded? What would this even mean? Here is what I am thinking about: what if, say, a market is a great way of discovering knowledge, coordinating prices and solving complex problems – but only if it consists solely of human beings and is conducted at biological speeds? What if, when we add tools and automate these markets, we also lose their balance? What if we end up destroying the equilibrium that makes them optimized social institutions?

While initially this sounds preposterous, the question is worth examining. Let’s examine the opposite hypothesis – that markets work at all speeds, wholly automated and without any human intervention. Why would this be more likely than there being certain limitations on how the market is conducted?

Is dance still dance if it is performed at ultra-high speed by robots only? Or do we think dance is a biologically bounded institution?

It would be remarkable if we found that there is a series of things that only work in biological time but break down in computational time. It would force us to re-examine our basic assumptions about automation and computerization, but it would not force us to abandon them.

What we would need to do is more complex. We would have to answer the question of what is to computers as markets are to humans. We would have to build new, revamped institutions that exist in computational time and we would have to understand what the key differences are that apply and need to be integrated into future designs. All in all an intriguing task.

Are there other examples?

What about justice? Is a court system a biologically bounded system? Would we accept a court system that runs in computational time and delivers an ultra-fast verdict after computing the necessary data sets? A judgment delivered by a machine, rather than a trained jurist? This is not only a question of security – it is not just a question of whether we trust the machine to do what is right. We know for a fact that human judges can be biased, and that even their blood sugar levels can influence decisions. Yet we could argue that this need not concern us for us to be worried here. We could argue that justice needs to unfold in biological time, because that is how we savour it. That is how it is consumed. The court does not only pass judgment; it allows all of us to see, experience and hear justice being done. We need justice to run in biological time, because we need to absorb it, consume it.

We cannot find any moral nourishment in computational justice.

Justice, markets, dance. Biological vs computational time and patterns. Just another area where we need to sort out the borders and boundaries between man and machine – but where we have not even started yet. The assumption that whatever is done by man can be done better by machine is perhaps not serving us too well here.

A note on the ethics of entropy (Man / Machine IV)

In a comment on Luciano Floridi’s The Ethics of Information, Martin Flament Fultot writes (Philosophy and Computers, Spring 2016, Vol. 15, No. 2):

“Another difficulty for Floridi’s theory of information as constituting the fundamental value comes from the sheer existence of the unilateral arrow of thermodynamic processes. The second law of thermodynamics implies that when there is a potential gradient between two systems, A and B, such that A has a higher level of order, then in time, order will be degraded until A and B are in equilibrium. The typical example is that of heat flowing inevitably from a hotter body (a source) towards a colder body (a sink), thereby dissipating free energy, i.e., reducing the overall amount of order. From the globally encompassing perspective of macroethics, this appears to be problematic since having information on planet Earth comes at the price of degrading the Sun’s own informational state. Moreover, as I will show in the next sections, the increase in Earth’s information entails an ever faster rate of solar informational degradation. The problem for Floridi’s theory of ethics is that this implies that the Earth and all its inhabitants as informational entities are actually doing the work of Evil, defined ontologically as the increase in entropy. The Sun embodies more free energy than the Earth; therefore, it should have more value. Protecting the Sun’s integrity against the entropic action of the Earth should be the norm.”

At the heart of this problem, Fultot argues, is that Floridi defines information as something good, and hence its opposite as something evil – and he takes the opposite of information and structure to be entropy (this can be discussed). But there seem to be a lot of different possibilities here, and the overall argument deserves to be examined much more closely, it seems to me.

Let’s ask a very simple question. Is entropy good or evil? And more concretely: do we have a moral duty to act so as to maximize or minimize the production of entropy? This question may seem silly, but it is actually quite interesting. If some of the recent surmises about how organization and life can exist in a universe that tends towards disorganization and heat death are right, the reason life exists – and will be prevalent in the universe – is that there is a hitherto undiscovered law of physics which essentially states that the universe not only evolves towards more entropy, but organizes itself so as to increase the speed with which it does so. Entropy accelerates.

Life appears, because life is the universe’s way of making entropy faster.

As a corollary, technology evolves – presumably everywhere there is life – because technology is a good way to make entropy faster. An artificial intelligence makes entropy much faster than a human being as it becomes able to take on more and more general tasks. Maybe there is even a “law of artificial intelligence and entropy” which states that any superintelligence necessarily produces more entropy than any ordinary intelligence, and that any increase in intelligence means an increase in the production of entropy? That thought deserves to be examined more closely and clarified (I hope to return to it in a later note — the relationship between intelligence and entropy is a fascinating subject).
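One compact way to state such a conjectured law – in my own notation, purely illustrative, not anyone’s established result – might be:

```latex
% Conjecture (illustrative notation only): entropy production grows with intelligence.
% Let I(s) denote the intelligence of a system s, and \dot{S}(s) the rate at
% which s increases the entropy of its environment.
I(s_1) > I(s_2) \;\Longrightarrow\; \dot{S}(s_1) > \dot{S}(s_2)
```

Even stating it this crudely raises the interesting measurement question: what units would I(s) take for the implication to be testable at all?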

Back to our simple, indeed simplistic, question. Is entropy good or evil? Do we have a duty to act to minimize it or to maximize it? A lot of different considerations crop up, and the possible theories and ideas are rich and complex. Here are a few possible answers.

  • Yes, we need to maximize entropy, because that is in line with the nature of the universe and ethics, ultimately, is about acting in such a way that you are true to the nature and laws you obey – and indeed, you are a part of this universe and should work for its completion in heat death. (Prefer acting in accordance with natural laws)
  • No, we should slow down the production to make it possible to observe the universe for as long as possible, and perhaps find an escape from this universe before it succumbs to heat death. (Prefer low entropy states and “individual” consciousness to high entropy states).
  • Yes, because materiality and order are evil and only in heat death do we achieve harmony. (Prefer high entropy states to low).

And so on. The discussion here also leads to another interesting question: whether we can, indeed, have an ethics of anything other than our actions towards another individual in the particular situation and relationship we find ourselves in. A situationist reply here could actually be grounded in the kind of reductio ad absurdum that many would perceive an ethics of entropy to be.

As for technology, the ethical question then becomes this: should we pursue the construction of more and more advanced machines, if that also means that they produce more and more entropy? In environmental ethics the goal is sustainable consumption, but from the perspective of an ethics of entropy there are no sustainable solutions – just solutions that slow down the depletion of organization and order. That difference is interesting to contemplate as well.

The relationship between man and machine can also be framed as one between low entropy and high entropy forms of life.

On not knowing (Man / Machine III)

Humans are not great at answering questions with “I don’t know”. They often seek to provide answers even where they know that they do not know. Still, one of the hallmarks of careful thinking is to acknowledge when we do not know something – and when we cannot say anything meaningful about an issue. This Socratic wisdom – knowing that we do not know – becomes a key challenge as we design systems with artificial intelligence components in them.

One way to deal with this is to say that it is actually easier with machines. They can give a numeric statement of their confidence in a clustering of data, for example, so why is this an issue at all? I think this argument misses something important about what we are doing when we say that we do not know. We are not simply stating that a certain question has no answers above a confidence level; we can actually be saying several different things at once.

We can be saying…
…that we believe that the question is wrong, or that the concepts in the question are ill-thought through.
…that we have no data or too little data to form a conclusion, but that we believe more data will solve the problem.
…that there is no reliable data or methods of ascertaining if something is true or not.
…that we have not thought it worthwhile to find out or that we have not been able to find out within the allotted time.
…that we believe this is intrinsically unknowable.
…that this is knowledge we should not seek.

And these are just some examples of what we may be saying when we say “I don’t know”. Stating this simple proposition is essentially a way to force a re-examination of the entire issue to find the roots of our ignorance. Saying that we do not know something is a profound statement of epistemology, and hence a complex judgment – not a statement of confidence or probability.
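One way to make this concrete in a system design is to treat “I don’t know” as a typed epistemic judgment rather than a thresholded confidence score. The sketch below is hypothetical; the type names simply mirror the list above:

```python
from enum import Enum, auto
from typing import Optional

class IgnoranceKind(Enum):
    ILL_POSED_QUESTION = auto()    # the question or its concepts are confused
    INSUFFICIENT_DATA = auto()     # more data would settle it
    NO_RELIABLE_METHOD = auto()    # no way to ascertain the answer
    NOT_YET_INVESTIGATED = auto()  # ran out of time or priority
    UNKNOWABLE = auto()            # intrinsically beyond reach
    FORBIDDEN = auto()             # knowledge we should not seek

def answer(confidence: float, kind: Optional[IgnoranceKind]) -> str:
    """Flat thresholding loses the distinctions; the kind carries the epistemology."""
    if kind is not None:
        return f"I don't know ({kind.name}): this shapes what to do next."
    return ("high-confidence answer" if confidence > 0.9
            else f"I don't know (confidence {confidence:.2f} below threshold)")

print(answer(0.4, IgnoranceKind.ILL_POSED_QUESTION))
```

The point of the type is that each kind licenses a different next step: reformulate the question, gather data, give up, or refuse.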

A friend and colleague suggested, on discussing this, that it actually makes for a nice version of the Turing test. When a computer answers a question by saying “I don’t know”, and does so embedded in the rich and complex language game of knowledge (as evidenced by its reasoning about it, I assume), it can be seen as intelligent in a human sense.

This Socratic variation of the Turing test also shows the importance of the pattern of reasoning, since “I don’t know” is the easiest canned answer to code into a conversation engine.

*

There is a special category of problems related to saying “I don’t know” that has to do with search satisfaction, and it raises interesting issues. When do you stop looking? In Jerome Groopman’s excellent book How Doctors Think there is an interesting example involving radiologists. The key challenge for this group of professionals, Groopman notes, is when to stop looking. You scan an x-ray, find pneumonia and … done? What if there is something else? Other anomalies that you need to look for? When do you stop looking?

For a human being, that is a question of time limits imposed by biology, organization, workload and cost. The complex nature of the calculation allows for different stopping criteria over time, and you can go on to really think things through when the parameters change. Groopman’s interview with a radiologist is especially interesting given that this is a field we believe can be automated to great benefit. The radiologist notes this looming risk of search satisfaction and essentially suggests that you use a check schema – trace out the same examination irrespective of what you are looking for, and then summarize the results.

The radiologist, in this scenario, becomes a general searcher for anomalies that are then classified, rather than a specialized pattern-recognition expert who seeks out examples of cancers – and in some cases the radiologist may only be able to identify the anomaly without understanding it. In one of the cases in the book, the radiologist finds weak traces of something he does not understand, which prompts him to order a biopsy – not based on the picture itself, but on the lack of anything on a previous x-ray.
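The check schema translates naturally into a design rule for automated reading as well: complete a fixed checklist and collect all findings, rather than returning on the first positive hit. A hedged sketch, with invented region names and a stand-in detector:

```python
# Fixed examination schema: every region is checked on every read.
CHECKLIST = ["lungs", "heart", "bones", "soft_tissue", "devices"]

def read_image(image: dict) -> list:
    """Complete the full schema regardless of early findings."""
    findings = []
    for region in CHECKLIST:         # never short-circuit on a positive
        anomaly = image.get(region)  # stand-in for a per-region detector
        if anomaly:
            findings.append(f"{region}: {anomaly}")
    return findings or ["no anomaly detected"]

# The pneumonia is found, but the scan continues and catches a second anomaly.
print(read_image({"lungs": "pneumonia", "bones": "unexplained faint trace"}))
```

The design choice is exactly the radiologist’s: the stopping criterion is the schema, not the first satisfying answer.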

Context, generality, search satisfaction and gestalt analysis are all complex parts of when we know and do not know something. And our reactions to a lack of knowledge are interesting. The next step in not knowing is of course questioning.

A machine that answers “I don’t know” and then follows it up with a question is an interesting scenario — but how does it generate and choose between questions? There seems to be a lot to look at here – and question generation born out of a sense of ignorance is not a small part of intelligence either.

Hannah Arendt on politics and truth – and fake news? (Notes on attention, fake news and noise #6)

Any analysis of fake news would be incomplete without a reading of Hannah Arendt’s magnificent essay Truth and Politics from 1967. Arendt, in this essay, carefully examines the relationship between truth and politics, and makes a few observations that remind us of why the issue of “fake news” is neither new nor uniquely digital. It is but an aspect of the greater challenge of how we reconcile truth and politics.

Arendt not only anchors the entire discussion solidly in a broader context, she reminds us that this is a tension that has been with civilization since Socrates. “Fake news” is nothing other than yet another challenge that meets us in the gap between dialectic and rhetoric, and Socrates would be surprised and dismayed to find us thinking we had discovered a new phenomenon. The issue of truth in politics is one that has always been at the heart of our civilization and our democratic tradition.

Arendt notes this almost brutally at the beginning of her essay:

“No one has ever doubted that truth and politics are on rather bad terms with each other, and no one, as far as I know, has ever counted truthfulness among the political virtues. Lies have always been regarded as necessary and justifiable tools not only of the politician’s and the demagogue’s but also of the statesman’s trade.” (p 223)

It is interesting to think about how we read Arendt here. Today, as politics is under attack and we suffer from an increase in rhetoric and a decline in dialogue, we almost immediately become defensive. We want to say that we should not deride politics, that politics deserves respect, and that we should be careful not to further deepen people’s loss of faith in the democratic political system — and all of this is both correct and deeply troubling at the same time. It shows us that our faith in the robustness of the system has suffered so many blows that we shy away from the clear-eyed realization that politics is rhetoric first and dialogue only second (and bad politics never gets to dialogue at all).

Arendt does not mean to insult our democracy; she merely recognizes a philosophical analysis that has remained constant over time. She quotes Hobbes as saying that if power depended on the sum of the angles of a triangle not being equal to the sum of two angles of a rectangle, books of geometry would be burned in the streets. This is what politics is – power – and we should not expect anything else. That is why the education of our politicians is so important, and their character key. Socrates’ sense of urgency when he tries to educate Alcibiades is key, and any reader of the dialogues would be aware of the price of Socrates’ failure in what Alcibiades became.

Arendt also makes an interesting point on the difference between what she calls rational truths – the mathematical, the scientific – and the factual ones, and points out that the latter are “much more vulnerable” (p 227). And factual truth is the stuff politics is made of, she notes.

“Dominion (to speak Hobbes’ language) when it attacks rational truth oversteps, as it were, its domain while it gives battle on its own ground when it falsifies or lies away facts.” (p 227)

Facts are fair game in politics, and always have been. Arendt then makes an observation that is key to understanding our challenges and is worth quoting in full:

“The hallmark of factual truth is that its opposite is neither error nor illusion nor opinion, not one of which reflects upon personal truthfulness, but the deliberate falsehood, or lie. Error, of course, is possible, and even common, with respect to factual truth, in which case this kind of truth is in no way different from scientific or rational truth. But the point is that with respect to facts there exists another alternative, and this alternative, the deliberate falsehood, does not belong to the same species as propositions that, whether right or mistaken, intend no more than to say what is, or how something that is appears to me. A factual statement – Germany invaded Belgium in August 1914 – acquires political implications only by being put in an interpretative context. But the opposite proposition, which Clemenceau, still unacquainted with the art of rewriting history, thought absurd, needs no context to be of political significance. It is clearly an attempt to change the record, and as such it is a form of _action_. The same is true when the liar, lacking the power to make his falsehood stick, does not insist on the gospel truth of his statement but pretends that this is his ‘opinion’ to which he claims his constitutional right. This is frequently done by subversive groups, and in a politically immature public the resulting confusion can be considerable. The blurring of the dividing line between factual truth and opinion belongs among the many forms that lying can assume, all of which are forms of action.
While the liar is a man of action, the truthteller, whether he tells a rational or factual truth, most emphatically is not.” (p 245)

Arendt is offering an analysis of our dilemma in as clear a way as can be. Lying is an action; telling the truth is most emphatically not, and the reduction of a falsehood to an opinion creates considerable confusion, to say the least. The insight that telling the truth is less powerful than lying, less of an action, is potentially devastating – liars have something at stake, and truth tellers sometimes make the mistake of thinking that relaying the truth is in itself enough.

But Arendt also offers a solution and hope — and it is evident even in this rather grim quote: she speaks of a politically immature public, and as she closes the essay she takes great pains to say that these lies, these falsehoods, in no way detract from the value of political action. In fact, she says that politics is a great endeavor, one worthy of our time, effort and commitment – but ultimately we also need to recognize that it is limited by truth. Our respect for truth – as citizens – is what preserves, she says, the integrity of the political realm.

As in the Platonic dialogues, as in Hobbes, as everywhere in history – truth is a matter of character. Our own character, honed in dialogue and made resistant to the worst forms of rhetoric. This is not new – and it is not easy, and it cannot be solved with a technical fix.

Link: https://idanlandau.files.wordpress.com/2014/12/arendt-truth-and-politics.pdf