Philosophical Behaviorism by John Heil

Until the twentieth century, the study of mind was assumed to revolve around the study of conscious states and processes. Subjects in psychological experiments (very often the experimenters themselves or their students) were trained to introspect and report on features of their conscious experiences. In this milieu, mental imagery and subtle qualities of sensory episodes had a central place.

At the same time, psychologists were concerned to integrate the study of the mind with the study of the brain. It had long been evident that occurrences in the brain and nervous system were intimately related to mental goings-on. The difficulty was to understand precisely the nature of the relation between minds and brains. It is tempting to think that minds (or selves: I shall continue to use the terms interchangeably, without intending to suggest that they are synonymous) are nothing more than brains. Properties of brains, however, seem to differ importantly from properties of minds. When you undergo a conscious experience, you are vividly aware of characteristics of that experience. When we examine a living brain, the characteristics we observe appear to be utterly different. Think of what it is like to have a headache. Now imagine that you are able to peer at the brain of someone suffering a headache. What you observe, even aided by instruments that reveal the fine structure of the brain, is altogether different from what the headache victim feels. Imagine a neuroscientist, intimately familiar with the physiology of headache, but who has never experienced a headache. There is, it would seem, something the scientist lacks knowledge of, some characteristic the scientist has not encountered and could not encounter simply by inspecting the brain. But then this characteristic would appear not to be a neurological characteristic. When we look at the matter this way, it is hard to avoid concluding that mental characteristics are not brain characteristics, and thus that minds are not brains.

If this were not enough, we would do well to remind ourselves that we evidently enjoy a kind of access to our conscious experiences that others could never have. Your experiences are private. Your awareness of them is direct and authoritative; my awareness of those same experiences is, in contrast, indirect, inferential, and easily overridden. When you have a headache, form an image of your grandmother, or decide to comb your hair, you are in a position to recognize immediately, without the benefit of evidence or observation, that you have a headache, that you are imagining your grandmother, or that you have decided to comb your hair. I can only infer your state of mind by observing your behavior (including your linguistic behavior: I can interrogate you). If mental goings-on are correlated with neurological processes, then I may be able to infer your state of mind by observing your brain. But my access to that state is still indirect. I infer your state of mind by observing a neurological correlate. I do not observe your state of mind.

All this is exactly what we should expect were dualism true. But dualism, or at any rate Cartesian dualism, apparently leads to a bifurcation of the study of intelligent agents. We can study the biology and physiology of such agents, but in so doing we ignore their minds; or we can study their minds, ignoring their material composition. Now, however, we are faced with a difficulty. Science is limited to the pursuit of objective, public states of affairs. An objective state of affairs can be apprehended from more than one perspective, by more than one observer. The contents of your mind, however, are observable (if that is the word) only by you. My route to those contents is through observations of what you say and do. This appears to place minds outside the realm of scientific inquiry. We can study brains, and we may conclude that particular kinds of neurological goings-on are correlated with kinds of mental goings-on. This would enable us reliably to infer states of mind by observing brain activity. But we should not be observing or measuring those states of mind themselves, except in our own case.

Privacy and its consequences

Once we start down this road, we may come to doubt that states of mind as distinct from their physiological correlates are a fit subject for scientific examination. Eventually, the very idea that we are in a position even to establish correlations between mental occurrences and goings-on in the nervous system can come to be doubted. Imagine that, every time you have a particular kind of experience – every time you see a certain shade of red, for instance, the red of a ripe tomato – your brain goes into a particular state, S. Further, whenever your brain goes into state S, you experience that very shade of red. It looks as though there must be a correlation between experiences of this kind and neurological states of kind S.

Suppose, now, you observe my brain in state S. I announce that I am experiencing a certain shade of red, a shade I describe as the red of a ripe tomato. It might seem that this provides further evidence of the correlation already observed in your own case. But does it? In your own case, you have access both to the mental state and to its neurological correlate. When you observe me, however, you have access only to my neurological condition. What gives you the right to assume that my mental state resembles yours?

True, I describe my experience just as you describe yours. We agree that we are experiencing the color of ripe tomatoes. But of course this is how we have each been taught to characterize our respective experiences. I have a particular kind of visual experience when I view a ripe tomato in bright sunlight. I describe this experience as the kind of experience I have when I view a ripe tomato in bright sunlight. You have a particular kind of experience when you view a ripe tomato under similar observational conditions. And you have learned to describe this experience as the kind of experience you have when you view a ripe tomato in bright sunlight. But what entitles either of us to say that the experiences so described are exactly similar? Perhaps the experience you have is like the experience I would have were I to view a lime in bright sunlight. Our descriptions perfectly coincide, but the state of mind I am describing is qualitatively very different from yours.

It would seem, then, that attempts to correlate kinds of neurological goings-on and kinds of mental occurrences boil down to correlations of neurological goings-on and descriptions of mental occurrences. We learn to describe the qualities of our states of mind by reference to publicly observable objects that typically evoke them. And this leaves open the possibility that, while our descriptions match, the states to which they apply are wildly different.

This may seem an idle worry, a purely philosophical possibility. But ask yourself: what earthly reason do you have for thinking that your states of mind qualitatively resemble the states of mind of others? It is not as though you have observed others’ states of mind and discovered that they match yours. You lack a single example of such a match. Might you infer inductively from characteristics of your own case to the characteristics of others? (Inductive inference is probabilistic: we reason from the characteristics of a sample of a population to characteristics of the population as a whole.) But canons of inductive reasoning proscribe inferences from a single individual to a whole population unless it is clear that the individual is representative of the population. If you assume that characteristics of your states of mind are representative, however, you are assuming precisely what you set out to establish.

The problem we have been scouting is the old problem of other minds. Granted you can know your own mind, how can you know the minds of others? Indeed, once we put it this way, we can see that the problem is deeper than we might have expected. How can you know that others have minds at all? They behave in ways similar to the ways you behave, and they insist they have pains, images, feelings, and thoughts. But what reason do you have for supposing that they do? You cannot observe others’ states of mind. Nor do you have adequate inductive grounds for inferring that they enjoy a mental life from what you can observe about them.

A recent twist on this ancient puzzle introduces the possibility of zombies, creatures identical to us in every material respect, but altogether lacking conscious experiences. The apparent conceivability of zombies has convinced some philosophers that there is an unbridgeable explanatory gap between material qualities and the qualities of conscious experience. You may be growing impatient with this line of reasoning. Of course we know that others have mental lives; of course we know that those mental lives are similar to ours in many ways and different in others. Well and good. But it is hard to see how this confidence could be justified so long as we accept the notion that minds and their contents are private affairs, incapable of public scrutiny.

The beetle in the box

Perhaps our starting point is what is responsible for our predicament. We have been led down the garden path by a certain conception of mind inherited from Descartes. If we begin to question that conception, we may see our way clear to a solution to our problem, one that better fits our commonsense idea that we can know that others have minds and that their minds resemble ours.

Ludwig Wittgenstein (1889-1951), in his Philosophical Investigations (1953/1968), § 293, offers a compelling analogy:

Suppose everyone had a box with something in it: we call it a “beetle”. No one can look into anyone else’s box, and everyone says he knows what a beetle is only by looking at his beetle. Here it would be quite possible for everyone to have something different in his box. One might even imagine such a thing constantly changing.

The picture here resembles the picture of the relation we bear to our own and others’ states of mind that we have been taking for granted. Wittgenstein argues against this picture, not by presenting considerations that imply its falsity, but by showing that our accepting it leads to a paradoxical result: if this is the relation we bear to our own and others’ states of mind, then we should have no way of referring to them.

Suppose the word “beetle” had a use in these people’s language? If so it would not be used as the name of a thing. The thing in the box has no place in the language-game at all; not even as a something: for the box might even be empty. No, one can “divide through” by the thing in the box; it cancels out, whatever it is. That is to say: if we construe the grammar of the expression of sensation on the model of “object and designation” the object drops out of consideration as irrelevant.

What is Wittgenstein’s point? You report that your box contains a beetle. Your report is perfectly apt. You have been taught to use the word “beetle” in just this way. Imagine, now, that the object in my box is very different from the object in your box. If we could compare the objects, this would be obvious, although we could never be in a position to compare them. Suppose now that I report that my box contains a beetle. In so doing, I am using the word “beetle” exactly as I have been taught to use it. My utterance, like yours, is perfectly correct.

Suppose, now, we each report, say, that our respective boxes contain a beetle. Is either of us mistaken? No. In the imagined situation, Wittgenstein argues, the word “beetle” is used in such a way that it makes no difference what is inside anyone’s box. “Beetle”, in our imagined dialect, means, roughly, “whatever is in the box.” To wonder whether your beetle resembles my beetle is to misunderstand this use of “beetle.” It is to treat “beetle” as though it named or designated a kind of object or entity. But “beetle” is used in such a way that “the object drops out of consideration as irrelevant.”

Wittgenstein’s point is not merely a linguistic one. Any thoughts we might harbor that we would express using the word “beetle” are similarly constrained. Those thoughts turn out not to concern some particular kind of entity. Differently put: if the word “beetle” does not refer to entities of a particular sort, then neither do thoughts naturally expressible using “beetle.”

Philosophical behaviorism

How might the analogy be extended to states of mind? As a child, you react in various ways to your surroundings. On some occasions, you moan and rub your head. Adults tell you that what you have is called a “headache.” Others are taught to use “headache” similarly. Does “headache” designate a kind of entity or state? Perhaps not. Perhaps when you tell me that you have a headache, you are not picking out any definite thing or private condition at all (think of the beetle), but merely evincing your headache. You have been trained in a particular way. When you are moved to moan and rub your head, you are, as a result of this training, moved as well to utter the words “I have a headache.”

When you ascribe a headache to me, you are saying no more than that I am in a kind of state that leads me to moan, rub my head, or utter “I have a headache.” The private character of that state could differ across individuals. It might continually change, or even, in some cases (zombies?), be altogether absent. The function of the word “headache” is not to designate that private character, however. It “drops out of consideration as irrelevant.”

Suppose that this account of our use of “headache” applied to our mental vocabulary generally. Then mental terms would not in fact be used to designate kinds of entity or qualitatively similar private episodes, as Descartes would have it. Their role would be quite different. And in that case, the question whether the state you designate by “experience I have when I view a ripe tomato in bright sunlight” qualitatively matches the state I designate when I use the same expression could not so much as arise. To raise the question is to mischaracterize the use of mental terminology, and thus to utter nonsense.

This line of reasoning supports what is often dubbed philosophical behaviorism. (It is dubbed thus by its opponents. Few philosophers routinely so characterized have applied the label to themselves.) The philosophical behaviorist holds that the Cartesian conception of mind errs in a fundamental way. Minds are not entities (whether Cartesian substances or brains); and mental episodes are not private goings-on inside such entities. We are attracted to the Cartesian picture only because we are misled by what Wittgenstein calls the grammar of our language.

So long as we deploy our language in everyday life we steer clear of philosophical puzzles. Words owe their significance to the “language games” we play with them. An appropriate understanding of any word (hence the concept it expresses) requires a grasp of the part or parts it plays in these language games. When we engage in philosophy, however, we are apt to be misled by the fact that “mind,” like “brain” or “baseball,” is a substantive noun. We reason that “mind” must designate a kind of entity, and that what we call thoughts, sensations, and feelings are qualitatively similar states or modes of this entity. We can avoid confusion only by looking carefully at the way our words are actually deployed in ordinary circumstances. This prescription is intended by Wittgenstein to apply to philosophy generally. Philosophical problems arise “when language goes on holiday,” when we lose touch with the way our words are actually used. In our everyday interactions with one another, we are not puzzled by our capacity to know how others feel or what they are thinking. The philosophical problem of other minds arises when we wrench “mind,” “thought,” “feeling,” and their cognates from the contexts in which they are naturally deployed, put a special interpretation on them, and then boggle at the puzzles that result.

Gilbert Ryle (1900-76) extends Wittgenstein’s point. According to Ryle, the supposition that minds are kinds of entity amounts to a “category mistake”: “it represents the facts of mental life as if they belonged to one logical type or category ... when actually they belong to another” (1949, p. 16).

Suppose I show you around my university. We stroll through the grounds; I show you various academic and administrative buildings; I take you to the library; I introduce you to students and members of the faculty. When I am done, I ask whether there is anything else you would like to see. You reply: “Yes. You’ve shown me the grounds, the academic and administrative buildings, the library, students, and faculty; but you haven’t shown me the university. I’d like to see that.” You have made a category mistake. You have taken the term “university” to designate an entity similar to, but distinct from, those you have seen already. If you persisted in the belief that “university” designates such an entity despite failing ever to encounter it, you might come to imagine that the entity in question is “non-material.”

An analogous mistake, says Ryle, encourages Cartesian dualism. We begin with the idea that minds are entities, distinct from, but similar to brains or bodies. When we have trouble locating such entities in the material world, we assume that they must be non-material. We see the mind, to use Ryle’s colorful phrase, as the ghost in the machine. But minds are not entities at all, ghostly or otherwise, a fact we should immediately appreciate if only we kept firmly before us the way “mind” functions in ordinary English.