Why demonstrative thought matters
My first two published papers were on demonstrative thought. At first glance, these papers are esoteric. The first argued that phenomenal consciousness guides voluntary attention in selecting targets of demonstrative thought. The second argued that an epistemically robust information link to a potential target is not necessary to select that target for demonstrative thought. That’s a lot of jargon. It may seem as if this is stuff only specialists could care about, and it may be hard to see why even they do.
But these issues are rather basic, and they matter a great deal for getting a grip on how we come to know, and think about, the world. This post is my attempt to synthesize the material in those papers (along with another paper not yet published).
Let’s start with a basic problem. How is it we manage to think about objects out in the environment? Take, for instance, my glass of beer, which is sitting on the table beside me. The question is not how do I manage to think (simpliciter), or how do I manage to acquire certain concepts (like “glass of beer”). The question is how do I ever manage to think thoughts about that particular glass of beer? To use some evocative language, how is it that I’m able to bring that particular glass of beer into mind? Say I judge that the glass is filled with beer. How am I able to “latch onto” the glass so that I can ascribe to it the property of containing beer? After all, the world (including the glass) is “out there”, while I (and my thoughts) are stuck “in my mind”. Somehow mind must bridge out into the world, or world must be brought into mind.
Here’s another way to put the problem. Right now digital bits encoding these words are streaming from a web server to your computer. Those bits are about that glass of beer on my table, but only because you and I are able to interpret and apply meaning to them. On their own, those bits mean nothing and the server and your computer certainly aren’t thinking thoughts about my glass merely by tokening those bits. If our brains are like the server or your computer, trading in neural encodings of information without anyone “outside” to interpret them, how do they manage to think thoughts that “connect” with the outside world?
Since Gareth Evans, philosophers have been attracted to a simple solution to this problem. Unlike the web server and your computer, the neural encodings of information flowing through your brain are actually connected to stuff out in the world. You have sensory systems, like a visual system with photoreceptors, which pick up information from the distal environment. Much of the neural encoding in your brain is connected, through sensory-causal chains that carry information, to stuff in the distal environment. Very roughly, the idea is that these information links confer a kind of intrinsic or primitive intentionality on your thoughts. While the bits streaming through your computer are only meaningful in virtue of how someone interprets them, the causal connection (via your sensory systems) to the outside world affords your neural activity meaning without the need for any external interpreter.
Thoughts which get their meaning via these sensory information links are what Evans and others call demonstrative thoughts. The problem with this approach, or so I argue, is that it’s completely wrong. Well, perhaps not completely wrong. But it’s wrong in some fundamental way. The problem, so I argue, is that we can still think demonstrative thoughts even when we lack these sensory information links; more precisely, even when we lack information links as Evans and others conceive of them. Evans and those who follow him think that thought-enabling information links have special properties. For example, they think (1) these links are reliable sources of information (of the sort which justify beliefs formed on their basis) and (2) that the brain organizes these links in a special way (i.e., into “mental files”) which explains how the brain manages to use them to “latch onto” external objects.
At this point we’ve crept back into obscure debates among specialists, but I think there’s a more foundational issue with Evans’ account: it completely leaves out the role of consciousness! Intuitively, it seems to us that consciousness has a lot to do with how we manage to get the outside world “in mind” (see here and here). For example, from my point of view, I seem able to think about my glass of beer simply because I see it. I look up, and there it is! My visual experience presents the glass to me, or brings the glass into my consciousness as a target of thought. To put it another way, my visual experience seems to reveal the glass to me. Specifically, the special phenomenological or qualitative character of my visual experience reveals or discloses the glass to me.
The point is that any story about how we manage to make contact with the world, how we manage to think about what’s outside our heads, seems like it needs to acknowledge the role of consciousness. Our senses afford us phenomenal experiences of the external world, and it’s through these experiences that we’re able to access or think about the world. Now, the proponent of Evans’ information-link approach may say that consciousness is a mere epiphenomenon; they may say that what really does the work are information links of some sort, and that consciousness is just a by-product of that information which we mistakenly credit with the heavy lifting. The problem, so I think, is just that there’s no way to work out this story purely in terms of information links. Any such attempt will have to appeal to the two special properties mentioned above to do important explanatory work, and (as I’ve mentioned) it just turns out that information links with these properties aren’t actually needed for demonstrative thought!
The upshot is that there’s something deeply flawed with the information-link approach to explaining demonstrative thought, and something deeply important about consciousness which this approach misses. My view is that the naive account is roughly correct: it is consciousness which enables thought about the external world. Now, that’s not to say that consciousness itself is somehow primitive or mysterious. There are, I think, good naturalistic, physical explanations for consciousness, and (thus) some deeper story about how mind connects to world. I even think that the flow of sensory information is an important part of consciousness. It’s just that I don’t think, when you unpack all this, you’ll get the standard information-link account found in Evans and others. While perceptual experience may provide us with an information link to the world, it doesn’t enable us to think about the world because it provides an information link.
So how is it that we manage to get the world into mind? How do we manage to escape our skulls and have external objects as targets of thought? It’s through our perceptual experience, which reveals or discloses those objects to us. Consciousness is, at bottom, a way of bringing world into mind. That’s a profound idea. While I’m far from the first to propose it, it’s largely fallen out of favor among philosophers, who (following Evans) instead explain our ability to think about the world in terms of information links. The aim of my project has been to show that this explanation fails, and to carve out more carefully how experience enables thought.