Empty minds

I’ve long been fascinated by the idea that what we say using language is actually without meaning. For example, if I say “atoms of aluminum have 13 protons”, I seem to be using words to express, or describe, that atoms of aluminum have 13 protons. The quoted stuff—the words I speak—is a representation, representing some physical state of affairs. I understand the sentence “atoms of aluminum have 13 protons”, i.e. I know how it represents the world to be. When you read it, you likewise comprehend what it’s saying—how it represents the world to be. But if linguistic-meaning skepticism is correct, none of that’s true. The sentence has no meaning, it doesn’t succeed in representing the world (because it doesn’t represent anything), and there’s nothing in it for you to comprehend. In fact, if linguistic-meaning skepticism is true, the feeling you have now of understanding these words you’re reading is illusory, since these typed words also lack any meaning to be understood.

While linguistic-meaning skepticism might at first sound obviously false, there are some not implausible arguments for it. Plus, it’s just such a provocative idea that it’s worth exploring. If linguistic-meaning skepticism (LMS) is true, what we say using words is always empty. From the point of view of representing the world, we might as well be babbling nonsense (although LMS doesn’t preclude that language has pragmatic effects not shared with babbled nonsense—e.g., me saying “get me a coffee” causes you to get me a coffee). In this post I want to explore the extension of LMS to thought and perceptual experience. Is it the case that our thoughts and experiences are likewise devoid of any content?

The argument for LMS

My introduction to LMS came through Quine’s paper “Ontological Relativity”. The gist of his argument is that linguistic meaning can only be learned and (more importantly) fixed through observable behaviors, such as someone nodding in affirmation when I show them an apple and say the word “apple”. But these meaning-fixing behaviors seem to underdetermine meaning: with the right clever tricks, we can always find multiple alternative interpretations of the words someone’s using that explain their behavior. Since these behaviors are the only things that can fix meaning, it turns out that our words just lack determinate meanings.

What’s trippy is that this argument doesn’t just run when translating some new language into one you already know, like English; it also runs when trying to “translate” (or interpret) the words of other English speakers (so as to double-check that they’re using them in the same way you are), and even when trying to interpret your own use of words! For example, if the argument works, you can’t even be sure of what you yourself mean when you use basic terms like “apple” or “car”.

Now, Quine’s argument just isn’t that convincing. First, while he’s probably right that any small set of linguistic behaviors can be interpreted in many ways, it seems highly unlikely that the total linguistic behavior, over years, of an entire group of language users is open to this sort of radical reinterpretation. Second, Quine was most likely wrong in thinking that the only things that can fix meaning are linguistic behaviors, i.e., observable uses to which speakers put terms. It seems that our thoughts and perceptual experiences play a role, too. For example, what I mean when I say “apple” isn’t fixed entirely by the set of items that cause me to nod “yes” when you show them to me and say “apple”. I have perceptual experiences of apples—I see them—and when I say “apple”, I mean to refer to the botanical type instanced by those things I see. So long as we have fairly determinate cognitive and experiential content, we can probably recover determinate linguistic content.

Still, Quine’s argument isn’t the only one for LMS. I think the most compelling case for LMS comes from the simple observation that we’re very bad at explaining what we mean when we use words. Socrates was famous for exposing this: he would ask random people on the street (literally) to define simple terms like “justice”, “good”, or “love”. They would offer a definition, and with a few questions Socrates would demonstrate that the proffered definition couldn’t be right even by that person’s own lights. Most notably, Socrates would show that the person was using the term in inconsistent ways, thereby demonstrating that there was no single interpretation that could be what they meant by it.

Socrates’ “elenchus” (as it’s called) isn’t just a parlor trick or a bit of fiction. It is actually very easy, with some reflection, to show that people (including yourself) use even everyday terms in ways that don’t allow for consistent interpretation. This extends even into my own specialized research in philosophy and cognitive science. I (and other philosophers of perception) use specialized jargon like “phenomenal consciousness” and “mental state” as if we all know what we mean, but attempts to actually explain these terms in informative or consistent ways always fail. (Psychologists are just as bad; for example, the term “percept” is ubiquitous in the perceptual psychology literature, but it’s very clear psychologists don’t know what they mean when they use it.) While I suspect that highly operationalized terms (i.e., terms whose meaning derives from well-defined measurement procedures, like “mass”) fare better, science, politics, business, and life in general are filled with terms that are all but impossible to define in any consistent and informative way.

So, it just turns out that a lot of the terms we use, and the sentences we build out of them, lack informative, consistent meanings. We are really bad at explaining what we mean. It’s doubtful that this problem extends to every bit of language, but it’s pervasive enough to call into question the meaningfulness of a huge chunk of what we say. So although this argument doesn’t support total LMS, it’s still surprising that a large portion of what we say is meaningless noise. After all, in the moment, as we speak or type, it doesn’t feel as if we’re uttering nonsense.

Thought and experience

What about our thoughts and experiences? Take thoughts first. Ironically, and for what it’s worth, “thought” (or “thoughts”) is itself a word that doesn’t have any single consistent interpretation. So LMS threatens to derail our discussion before it starts, but let’s set that difficulty aside and work with examples. Examples of thinking always involve thinking through some recognizable medium. For example, many (most?) people often think in so-called silent speech. As you read these words, you may be “saying them to yourself”, and similarly, you presumably often “talk to yourself” as you think. Alternatively, we often think through sensory imagination, e.g. you might visualize in your “mind’s eye” what you ate for breakfast, or use your “mind’s eye” to visualize the rotation of a shape. It’s hard to find any proprietary “language of thought”, or really any specifiable examples of thinking that aren’t exhausted by thinking through some recognizable medium. Of course, it’s clear our brains must be doing some sort of information processing that leads to the production of that silent speech or sensory imagery, but this processing is subpersonal, i.e., an attribute of the underlying component machinery producing your thoughts; it’s not something attributable to you, the agent doing the thinking. (Compare: photoreceptors in my retina encode distributions of light intensity, but I don’t see those encoded light-intensity distributions; I see the resulting scene extracted from them.)

The upshot is that if thoughts are always mediated by either internalized linguistic utterances or sensory imagery, then the content of thought will depend on the content of that internalized language and imagery. So the first consequence is that if we think in (internalized) language, and our language is (as the limited form of LMS above suggests) often meaningless, the thoughts we work out through that language will also often be meaningless.

Maybe sensory imagery helps. This case is connected to perceptual experience itself, so we may as well take the two together. At first it may seem obvious that we experience (or sensorily imagine) determinate content. For example, as I look at the sofa in front of me, my experience presents to me a sofa, and there’s nothing indeterminate about what I see. Similarly, if I visually imagine a sofa, that I’m imagining a sofa is perfectly clear. If that’s right, then forms of thought mediated by experiences and sensory imagery will have determinate content. Such thoughts won’t be empty.

But there is at least some reason to think that perceptual experience and related sensory imagery likewise fail to have determinate content. Let’s start with an example from Bill Brewer. Brewer asks us to consider the famous Müller-Lyer illusion. How exactly does the world look when you look at Müller-Lyer lines? What is the content of your visual experience? Is there a single, consistent content we can ascribe to your visual experience? Brewer suggests not, since, on the one hand, it looks to you as if the two lines are unequal in length, but, on the other hand, it also looks to you as if the endpoints of the two lines are precisely where they actually are. Thus, your visual experience presents you with contradictory appearances: the lines look unequal in length, but their endpoints look to be where they actually are (equal distances apart). Hence, there is no single, consistent interpretation we can ascribe to your visual experience, which, of course, is precisely the problem we had before with language. Although Brewer focuses on a few specific cases, it’s evidently easy for our minds to generate impossible imagery that admits of no interpretation. Charles Travis takes a slightly different approach, more in the spirit of Quine’s argument, and argues that the content of perceptual experience is always underdetermined. For example, a blue object under white light can look exactly the same as a white object under blue light.

So there’s reason to worry that perceptual experience and sensory imagery are, like language, (often? always?) empty. They admit of no single, determinate interpretation. If that’s the case, then thought more broadly is (often? always?) empty, since it’s always mediated or realized through language or sensory imagery.

Objections

It might be objected that, despite initial appearances, there is a layer of thought (attributable to you, not just to your subpersonal processes) that isn’t mediated by language or sensory imagery. This layer of thought is what you express via language or imagery. When people say they have an idea but can’t quite put it into words, they are speaking about this layer of thought. I’m not convinced we should take this seriously as a layer of thought. When people say they “have an idea they can’t put into words”, this could just as well mean that they have some diffuse feeling of having something to say that they just can’t get out; it doesn’t necessarily mean there is some well-formed, content-bearing thought that they grasp but can’t express. We need not assume that “having a thought you can’t put into words” is like wanting to say something in a new language you don’t fully know yet; instead, it might just be a contentless feeling of frustration at an inability to form any thought at all. And even if we grant the objection, it’s unclear how much it really helps. After all, what good are thoughts we can’t even articulate to ourselves via some medium we understand?

Another objection is that all I’ve shown, at best, is that much of our language, thought, and experience lacks a referent, or has multiple meanings, not that it’s empty (i.e., devoid of content). Philosophers like to distinguish between the referent of a representation (e.g., the actual individual person referred to by a name) and its meaning, which is harder to explain, but is something like the way the representation presents that referent. For example, “Superman” and “Clark Kent” refer to the same (imaginary) person, but have different meanings: someone could know the one name without knowing the other. You might object that showing that language, thought, and experience often have no consistent interpretation, or that they admit of multiple interpretations, at best shows that language, thought, and experience have no single, determinate referent. Still, they might have meaning, perhaps even multiple meanings.

There are three responses to this objection. First, although (like much of language) talk of “meanings” vs. “referents” seems to make perfect sense at first, I assure you that if we press on the distinction a bit, it will no longer be clear that there even is any coherent distinction between meaning and referent. Second, you might (plausibly) think that having a meaning requires having some possible referent. A meaning, at bottom, is how a bit of language (or thought, or experience) represents the world to be, and so having a meaning seems to require there being some way the world is represented as being. Third, even if the objection works, what we’re left with is still rather surprising. After all, we take ourselves usually to be talking about, thinking about, and experiencing the world itself. If language, thought, and experience often lack referents (but have meanings), then we’re often failing to make contact with the world: we’re not actually talking about, thinking about, or experiencing the world itself. We’re stuck in a private (and indeterminate) linguistic and mental realm. A messy, encapsulated mind isn’t much better than an empty mind.

Are minds empty?

Now, I’m not entirely convinced by these arguments. In particular, I’m not convinced Brewer and Travis are correct that perceptual experience lacks determinate content. Still, the overall picture is compelling. Much of the time our apparent thoughts really do seem to be empty: they are thoughts articulated in, or realized through, bits of meaningless language or sensory imagery. Perhaps even our perceptual experiences are similarly empty of determinate content. To a large extent our minds are empty, and we don’t notice it. Just as we think we’re saying something meaningful when we’re really vocalizing empty noise, we’re often thinking thoughts that lack content. These thoughts say nothing about the world. They’re not even wrong, not even truth-evaluable. And that’s just really weird, since we don’t notice that so much of what’s passing before (or through) our minds is empty. We might as well be babbling nonsense.

There is a positive upshot: the lesson to learn is that meaning is something to be achieved, something to be constructed. We can’t take for granted that, just because we use common words or imagery, we manage to say something. If we want our thoughts to make contact with the world, to refer to things, to deploy concepts that capture real phenomena, then we must work hard to deploy words and images that actually do have consistent interpretations and real meaning.

My first exposure to this idea of empty thoughts came from Gareth Evans, who discusses how failed demonstrative thought is empty.