A few years back I taught an undergraduate course on truth. Most people don’t use the word “truth” in a very coherent way. They tend to use it to refer to things they take to be (or that are) true, as when someone talks about “the truth”. But traditionally, truth is a property of representations. For example, we can divide statements people make into two groups: those that are true, and those that are false. If I say “it’s spring” in the middle of December, my statement is false; if I say “vitamin A is toxic in a polar bear’s liver”, my statement is true. What distinguishes true from false representations is a big philosophical question, but the standard story says that the true representations are the ones that match, or correspond to, reality. Grasping this concept of truth requires grasping the distinction between representations and reality. I found, to my surprise, that many (perhaps most?) of my students did not start the class already grasping the distinction between a representation and reality, and that even after many weeks of class, they didn’t really get it.
This is surprising because the distinction seems obvious. For example, if I unfold in front of you a paper road map of Texas, the piece of paper I’ve put in front of you is a representation. What it represents is the geographical configuration of roads in Texas — some bit of reality. Clearly, the actual layout of physical roads in Texas (the represented bit of reality) isn’t the same thing as its representation by the ink marks on the paper map (the representation). Similarly: a history book is different from the history it recounts, a picture is different from what it depicts, the sentence “grass is green” is different from the actual instantiation of the color green by grass, and the fuel gauge in my car is different from the level of fuel in my car’s gas tank.
Of course, if you point out these examples, I suppose people grasp them well enough. But someone can understand that (for example) a picture of Muhammad Ali is different from Muhammad Ali without actually understanding the general distinction between representation and reality. I witnessed this disconnect firsthand in my class as I saw my students struggle to work through abstract philosophical questions that presupposed the distinction, but in everyday life people say things all the time which demonstrate their lack of understanding. Most obviously, people just don’t use the word “reality” as if they properly distinguish reality from our representations of reality. For example, the polarization of news around and after the 2016 US presidential election led to much commentary about how “reality is relative to political party”, “facts don’t matter”, and “truth is dead”. Now, there is an extreme kind of relativism which does deny the objectivity of reality, but this is clearly not what these commentators meant. They meant to say that what people believe, or what people take to be reality, is relative to political party. Beliefs are representations, and by failing to make this distinction, these commentators failed to appreciate the distinction between representation and reality.
(It might be suggested that these commentators, and the readers or viewers who uncritically accepted and propagated these slogans, were just speaking loosely. But I’m very skeptical that the average person saying this sort of stuff really gets it, and is just trying to be pithy by saying stuff like “reality is relative”; for example, I don’t expect the average person to be able, if pressed to clarify these slogans, to make the distinction between representations and reality.)
Anyway, I now think there’s a good explanation of why the distinction between representations and reality is so difficult to understand. The problem is that representations are transparent (or, at least, are often transparent). I think (a) the transparency of representations explains (at least in large part) why people struggle to understand the difference between representation and reality, and (b) that transparency also explains why people are so easily persuaded by false, or inaccurate, representations, even when those representations have tell-tale signs of falsity. For example, as I’ll explain, transparency explains at least in part why people fall for fake news, propaganda, bad internet memes, and scam emails.
So far I’ve focused on external representations we use: e.g., linguistic tokens, pictures, maps, etc. But the states of our brain are also representations. These states include our beliefs and perceptual experiences. G.E. Moore famously pointed out in a 1903 paper that perceptual experiences are transparent. By this he meant that if you try to attend to features of your perceptual experience itself, you find all that’s available are the things you’re experiencing. For example, as I look at this pencil I’ve just picked up, all I see is the pencil. I do not notice anything that strikes me as not being a feature of the physical pencil stimulating my retina. I don’t notice my representational state itself.
So, in this way, the representational states of my brain which constitute my perceptual experience are transparent to me. I notice only what they represent, not the representation itself.
Even if the external representations we use (like linguistic tokens, pictures, maps, etc) aren’t quite transparent, they often go unnoticed. If I show you a physical picture hanging on a wall (for example), you can just as well attend to the picture itself (e.g., noticing defects in the frame or canvas) as you can to what it represents. So these external representations aren’t transparent in the same way perceptual experiences are transparent. But just because in some special circumstances, when the representation itself is particularly salient to us, we notice and attend to the representation itself, does not mean that we always do it, or that it’s easy or especially natural for us to notice and attend to representations (as opposed to what we represent). When we use external representations, the natural thing to do is ignore the representation itself and focus on what it represents.
There are some well-known phenomena which suggest that we usually don’t notice representations, or at least that our use of representations tends to greatly attenuate how much we notice the representations themselves. For example, consider how a language sounds different to you once you know it. Presumably this is because when the language was not understood, you noticed and attended much more to the full range of acoustic features of the sounds people made. Once you know the language, your brain ignores most of the acoustic features and instead focuses on semantic features (like word meaning) and acoustic features related to those.
For a second example, consider how, even when looking at pictures, there are many features of the picture itself we do not readily notice. If you see a realistic picture of some scene, you will see the 3D layout of the depicted scene, but probably not notice (or pay attention to) the 2D layout of the actual marks on the picture. For example, if asked to reproduce those 2D marks (to reproduce the picture), even in rough form, you likely could not do it, or you’d have to work backwards from the depicted 3D scene. It takes special training, or lots of practice, for artists to become proficient at working with 2D marks depicting 3D scenes.
So, the (first) suggestion is that this ease with which we ignore even external representations, and instead simply pay attention to what they represent as we use them, is at least part of why so many people struggle with the distinction between representation and reality. Although we use representations constantly, every day, we very rarely actually notice the representations themselves. Instead, we pay attention to what they represent. It may take special training to notice representations themselves, at least when something doesn’t naturally draw our attention to them.
Falling for memes, propaganda, and other obvious inaccuracies
Now, my (next) main claim here is that many representations, such as the typical image files shared on social media (memes) or pieces of propaganda, have obvious tell-tale signs of falsity, distortion, or inaccuracy. Something similar goes for the sales pitches of scam artists or phishing (email) attacks. At first glance, it’s surprising that more people don’t pick up on these obvious signs, but a simple explanation is that they miss them for the same reason most people don’t naturally hear all the acoustic features of spoken language or notice the 2D layout of marks on a picture: these tell-tale signs are usually transparent to the person “reading” the meme, propaganda, scam sales pitch, etc.
Take scams, as a first example. I think we all know scams are normally formulaic. For example, advance-fee scam emails (problematically called “Nigerian prince” scams) and password-reset phishing emails all basically look the same. Anyone who pays attention to the actual form and precise wording of these emails should be able to spot them quickly. Still, a lot of people fall for these, even when they’ve seen them before and should know better. Why? Well, it might be because, when reading quickly, they don’t notice the relevant scam-signaling features (e.g., the bad grammar, the formulaic content, etc). Instead, what they notice is what’s represented, or conveyed, by the email: e.g., an opportunity to make money, or the need to reset their password. Opportunities and needs are bits of reality, distinct from the representations, i.e. emails, (purporting to) communicate them. Victims of these scams only register the represented opportunity or need, not the precise form of the email (purporting to) communicate, i.e. represent, that opportunity or need.
Next, take fake news, propaganda, and social-media memes. Here I’ll discuss one example I saw recently in my own Facebook feed. In response to complaints that irresponsible behavior (e.g., not social distancing) was responsible for the recent spike of COVID-19 cases in Houston, someone posted a graph which showed the number of daily new COVID-19 hospitalizations in Houston. (As I write this post Houston is experiencing a dramatic second wave of COVID-19 cases.) The graph was labelled with events, stuff like the issuance of a stay-at-home order by the Texas Governor and the reopening of restaurants. Also labelled was the start of Black Lives Matter protests in Houston (which happened earlier this month), as well as Memorial Day. What was notable was that while all the other events were labelled in blue, the BLM protest start date was labelled in green, and Memorial Day was labelled in red. Coinciding with those two events, the line for daily new hospitalizations shot up.
Now, I don’t even know if the graph itself (and the data it displays) is accurate. Like many political memes, this one was not sourced. I also don’t know if the event dates labelled on the graph are actually accurate, either. But, the point is (i) that by labelling the BLM protests and Memorial Day with different colors from the colors used to label all the other events, the reader’s attention is drawn to those events, and (ii) by drawing attention to those events, it’s more likely that the reader will jump to the inference that the coinciding upward trend in hospitalizations was caused by them. (In reality, the upward trend is likely the result of a much more complex web of causes, including easing social distancing restrictions too soon, or people just not following them; Memorial Day and the BLM protests were almost certainly not events singularly responsible for the now massive spike in COVID-19 cases in Houston.) So, the graph is labelled and set up in a way that’s obviously meant to lead the reader to draw certain conclusions. Further, this leadingness should be obvious: after all, why, exactly, are those two events colored differently? That’s not a choice a researcher would make if they were just looking to objectively display the data.
The point I want to make is that this fishy and leading color choice evidently did not signal to those sharing this graph that it might be misleading, inaccurate, or that its makers might have had an agenda. But, shouldn’t the odd label colors have been an obvious tell-tale sign that something was not right with this representation? Well, it should have been, had it been noticed. But it likely wasn’t noticed by those sharing the graph. What they noticed was some part of what the graph (including the labels) represented, or communicated: the (purported) singular causal role of BLM protests and Memorial Day in Houston’s COVID-19 case spike.
I suspect that when the content of a representation matches up with our prior beliefs, we’re especially likely to ignore features of the representation itself (including potential signs of its falsity). Only when we stumble upon representations that represent something we don’t believe (e.g., the memes of the opposite political party) do we start to pay attention to the representation itself. When what a representation represents fits with how we understand the world, the representation goes unnoticed — including any tell-tale signs of falsity.