Why are people naturally inclined to be thoroughly convinced by confirming anecdotes, while also being naturally inclined to ignore falsifying ones? For example, Republicans and Democrats are, respectively, apt to grab onto any story of voter fraud or police violence as proof of their favored narratives about election security or systemic racism. (Another example: Many people think crime in the US is on the rise, though it’s been falling for years, presumably based on anecdotes.) Now, these narratives may be true, but the only way to know is through careful statistical analysis. The point is that people will be convinced by the anecdotes, without consideration of such analyses, and are even likely to dismiss contrary statistical evidence (or contrary anecdotes) out of hand.
I want to propose that we can explain this epistemically vicious reception of anecdotes by appealing to two phenomenological features of information channels. The notion of an information channel with which I’m working here roughly comes from Gareth Evans. We can think of an information channel as any route through which some causal chain of events carries information to us. So, for example, the light reflecting from the objects around me, plus the transduction of this light at my retinas and the subsequent propagation of action potentials through my visual circuits, constitutes one information channel. Second example: When I hear about breaking news through a media broadcast, I get that news through a channel consisting of the device I’m watching or reading, the network signals carrying the data stream to that device, the cameras or computers at the media outlet office, the reporters who produced the media I consumed, and their own perceptual interactions with the reported event or eyewitnesses of the reported event.
Now, borrowing ideas from Ruth Millikan, we can think of the various intermediate stages of information channels as natural signs. For example, voltage changes across the membranes of retinal neurons are signs of light-reflecting stimuli, and the illuminated screen off which I consume a media story is a sign of the network data being displayed, which is a sign of the key-taps the reporters made back at their office, which is a sign of the event they’re reporting. Millikan (and others) pointed out that some of these “natural signs” are used by “consumers” of the information channel — e.g., my computer uses the network data to produce the images it displays, the visual circuits in my brain use retinal membrane changes to compute features of the scene in front of me, etc.
Now, obviously, at the end of many information channels (the ones which concern us) sits a special user: You, me, or some other conscious, sentient agent. Typically, for us conscious users, these information channels result in an experience. For example, in virtue of the way information flows through my various sensory systems, I come to have visual, auditory, haptic (etc.) experiences of the various things stimulating my sensory receptors.
Following G.E. Moore, we can say that these experiences are usually transparent with respect to the intermediate representations. For example, as I look at the coffee table in front of me, I do not experience the light itself or the voltage changes of my neurons. I just experience the coffee table itself. As I’ve argued elsewhere, the same — perhaps surprisingly — often goes for information channels that extend beyond basic sensory perception. For example, if I read a tweet or a story on my Facebook feed, I’m likely to ignore (or just not notice) all the intermediate steps: The actual physical image displayed on my screen, the words constituting it, the data stream generating that image, the key-taps of whoever wrote it, etc. (This explains, I think, why we so often miss obvious tell-tale signs of falsity, distortion, or inaccuracy within the representations we’re consuming; it also explains, I suggest, why people are so bad at grasping the distinction between representations and what they represent.)
Now, anecdotes are just specific intermediary links in information chains we consume. Anecdotes are signs of more general states of affairs — e.g. one case of voting fraud or police violence is a sign of a more systematic problem — or, at least, our brains are wired to treat anecdotes as signs of more general states of affairs. So, for instance, if I see a tweet about a case of voting fraud or police violence, I’m apt to take that anecdote as a sign of a more general problem. But as with most intermediary signs, I’m apt not so much to notice the anecdote itself as to directly experience the more general state of affairs. Of course, I know, at some level, that I’m reading a tweet about one particular case, but my knowledge or experience of that one particular case is paralleled by another experience of the general state I take the particular case to signal, and I experience that general state directly, while losing track of how my knowledge of it flowed through the particular case.
It might be objected that when I read a tweet about a particular case of voter fraud or police violence I don’t experience the general state of affairs that (my brain treats) the case as signaling. Certainly I don’t have a sensory experience of it. I don’t literally see or hear the systematic fraud or racism signaled by the case. But, my brain certainly represents that signaled systematic fraud or racism. My brain registers the idea and adds it to its overall model that functions to track the way the world is. (There is a case to be made that informational uptake defaults to belief, so that when we encounter anecdotes we, by default, believe the general states we take them to signal.)
This overall “world model” our brains use to track reality (i.e. to track the way the world is) is, plausibly, integrated so that information obtained through different channels all adds to the same model. We do not keep track of informational sources, e.g. that this bit of the model was based on perception, that bit on testimony or a tweet. Further, like any other representation, this model is transparent to us. We experience the world represented by the model (and experience it as the actual world around us), not the model itself.
The crucial claim is that this model does give rise to experience, and that experience is continuous with our more direct sensory experiences. For example, right now I know that there’s a stairway right outside my apartment door. I cannot, at present, see that stairway, but I’ve walked out the door and down the stairway many times. So, the layout of my apartment is part of my world model. There is some phenomenology to that world model (even if not sensory): It is for me as if there’s a stairway on the other side of the door I’m now seeing, a stairway which I could reach if only I got up and walked out the door. My knowledge of the stairway is integrated into my more direct sensory experience of my immediate surroundings — it feels to me as if that reality involving the stairway is accessible in a way that’s continuous with the environment I now experience directly through my senses. Given that all (of our) information channels have purely sensory channels at our end, and that all information is integrated into a single world model, it’s not surprising that experience of the world represented by this model is integrated continuously with our purely sensory experiences.
We now have enough to give a phenomenological explanation for why anecdotes are so convincing. The two key features are (1) that anecdotes are transparent within information channels, and (2) that the general states (we take to be) signaled by anecdotes are incorporated into a world model that’s experienced as continuous with our immediate sensory perception. So, for example, as you read that tweet about a particular case of voter fraud or police violence, you have an experience as of systematic fraud or racism, and that experience presents that systematic fraud or racism as if it’s part of the same world you’re experiencing now through your senses. It’s almost, for you, as if there’s some path you could walk which would take you from your current spot to a place where the systematic fraud or racism was manifest — as if there were a way to shorten the information channel and bring yourself into sensory contact with the systematic fraud or racism itself. Worse yet, you enjoy this experience of the general state of affairs without noticing or experiencing the role the anecdote played in bringing it about. Just as my visual experience of my coffee table presents the coffee table to me as if I were in direct contact with it, sans any intermediate signal processing, the experience of general states of affairs presents those affairs to you as if you were in direct contact with them, sans the intermediate anecdotes.
To summarize, when you’re presented with an anecdote, it is for you as if you are directly presented with the general state of affairs (you take) that anecdote to signal — a state of affairs which you further experience as seamlessly integrated into the world you’re now perceiving through your senses. The anecdote itself is no more a part of your experience of the general state than neural activity is part of your visual experience of the environment around you. If that’s what it’s like to be presented with an anecdote, it’s not hard to see why people would be so completely swayed by anecdotes and ignore statistical data (which is, presumably, less apt to be integrated into our world model). After all, we normally accept our sensory experiences uncritically, as if they simply reveal the world around us, so it should be expected that we likewise uncritically accept other experiences which are phenomenally continuous with them.
Of course, as I’ve noted before, transparency can break down. If someone gives you anecdotes contradicting your view, you’re apt to see them for what they are: anecdotes. But this holds in general for the intermediate steps of information channels — e.g. when I see a tweet or meme contradicting my views, I’m apt to notice the representation itself and any tell-tale signs of falsity. So, I think this also in part explains why we’re apt to discount contradicting information. When we encounter it, transparency is broken, we recognize that we’re dealing with some complicated chain of evidence, and we start to scrutinize (or over-scrutinize) that chain, looking for any basis at all for discounting it.