intentional stance, the
 
 
A strategy, proposed and defended by Daniel Dennett, for understanding an entity's behavior. When adopting the intentional stance towards an entity, we attempt to explain and predict its behavior by treating it as if it were a rational agent whose actions are governed by its beliefs and desires. The intentional stance contrasts with two other strategies, the physical stance and the design stance. See also Dennett, Daniel, intentionality.
 

Details:
 
Introduction
 
According to Daniel Dennett, there are three different strategies that we might use when confronted with objects or systems: the physical stance, the design stance, and the intentional stance. Each of these strategies is predictive. We use them to predict and thereby to explain the behavior of the entity in question. (‘Behavior’ here is meant in a very broad sense, such that the movement of an inanimate object—e.g., the turning of a windmill—counts as behavior.) Since the intentional stance is best understood by contrast with the physical and the design stance, these other two stances will be discussed first. 
 
 
The Physical Stance and the Design Stance
 
The physical stance stems from the perspective of the physical sciences. To predict the behavior of a given entity according to the physical stance, we use information about its physical constitution in conjunction with information about the laws of physics. Suppose I am holding a piece of chalk in my hand and I predict that it will fall to the floor when I release it. This prediction relies on (i) the fact that the piece of chalk has mass and weight; and (ii) the law of gravity. Predictions and explanations based on the physical stance are exceedingly common. Consider the explanations of why water freezes at 32 degrees Fahrenheit, how mountain ranges are formed, or when high tide will occur. All of these explanations proceed by way of the physical stance.
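To take the chalk example literally: a physical-stance prediction can be as simple as plugging the setup into a law. The sketch below is purely illustrative (idealized free fall, ignoring air resistance; the drop height is an assumed number) and computes how long the dropped chalk takes to reach the floor.

```python
# A toy physical-stance prediction: how long dropped chalk takes to hit
# the floor, from the kinematic law h = (1/2) * g * t**2. Idealized free
# fall; the 1.2 m drop height is an illustrative assumption.
import math

def fall_time(height_m, g=9.81):
    """Seconds for an object dropped from rest to fall height_m meters."""
    return math.sqrt(2 * height_m / g)

print(round(fall_time(1.2), 2))  # ~0.49 s from roughly waist height
```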
 
When we make a prediction from the design stance, we assume that the entity in question has been designed in a certain way, and we predict that the entity will thus behave as designed. Like physical stance predictions, design stance predictions are commonplace. When in the evening a student sets her alarm clock for 8:30 a.m., she predicts that it will behave as designed: i.e., that it will buzz at 8:30 the next morning. She does not need to know anything about the physical constitution of the alarm clock in order to make this prediction. There is no need, for example, for her to take it apart and weigh its parts and measure the tautness of various springs. Likewise, when someone steps into an elevator and pushes "7," she predicts that the elevator will take her to the seventh floor. Again, she does not need to know any details about the inner workings of the elevator in order to make this prediction.
 
Design stance predictions are riskier than physical stance predictions. Predictions made from the design stance rest on at least two assumptions: first, that the entity in question is designed as it is assumed to be; and second, that the entity will perform as designed without malfunctioning. The added risk almost always proves worthwhile, however. When we are dealing with a thing that is the product of design, predictions from the design stance can be made with considerably more ease than the comparable predictions from the physical stance. If the student were to take the physical stance towards the alarm clock in an attempt to predict whether it will buzz at 8:30 a.m., she would have to know an extraordinary amount about the alarm clock’s physical construction.
 
This point can be illustrated even more dramatically by considering a complicated designed object, like a car or a computer. Every time you drive a car you predict that the engine will start when you turn the key, and presumably you make this prediction from the design stance—that is, you predict that the engine will start when you turn the key because that is how the car has been designed to function. Likewise, you predict that the computer will start up when you press the "on" button because that is how the computer has been designed to function. Think of how much you would have to know about the inner workings of cars and computers in order to make these predictions from the physical stance!
 
The fact that an object is designed, however, does not mean that we cannot apply the physical stance to it. We can, and in fact, we sometimes should. For example, to predict what the alarm clock will do when knocked off the nightstand onto the floor, it would be perfectly appropriate to adopt the physical stance towards it. Likewise, we would rightly adopt the physical stance towards the alarm clock to predict its behavior in the case of a design malfunction. Nonetheless, in most cases, when we are dealing with a designed object, adopting the physical stance would hardly be worth the effort. As Dennett states, "Design-stance prediction, when applicable, is a low-cost, low-risk shortcut, enabling me to finesse the tedious application of my limited knowledge of physics." (Dennett 1996)
 
The sorts of entities so far discussed in relation to design-stance predictions have been artifacts, but the design stance also works well when it comes to living things and their parts. For example, even without any understanding of the biology and chemistry underlying anatomy, we can predict that a heart will pump blood throughout the body of a living thing. The adoption of the design stance supports this prediction: pumping blood is what hearts are supposed to do (i.e., what nature has "designed" them to do).
 
 
The Intentional Stance
 
As already noted, we often gain predictive power when moving from the physical stance to the design stance. Often, we can improve our predictions yet further by adopting the intentional stance. When making predictions from this stance, we interpret the behavior of the entity in question by treating it as a rational agent whose behavior is governed by intentional states. (Intentional states are mental states such as beliefs and desires which have the property of "aboutness," that is, they are about, or directed at, objects or states of affairs in the world. See intentionality.) We can view the adoption of the intentional stance as a four-step process:
  1. Decide to treat a certain object X as a rational agent.
  2. Determine what beliefs X ought to have, given its place and purpose in the world. For example, if X is standing with his eyes open facing a red barn, he ought to believe something like, "There is a red barn in front of me." This suggests that we can determine at least some of the beliefs that X ought to have on the basis of its sensory apparatus and the sensory exposure that it has had. Dennett (1981) suggests the following general rule as a starting point: "attribute as beliefs all the truths relevant to the system’s interests (or desires) that the system’s experience to date has made available."
  3. Using similar considerations, determine what desires X ought to have. Again, some basic rules function as starting points: "attribute desires for those things a system believes to be good for it," and "attribute desires for those things a system believes to be best means to other ends it desires." (Dennett 1981)
  4. Finally, on the assumption that X will act to satisfy some of its desires in light of its beliefs, predict what X will do.
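To make these steps concrete, here is a minimal sketch in Python of the procedure, under toy assumptions: beliefs and desires are plain strings, and the caller supplies a satisfies predicate saying which actions would fulfill which desires. Every name and representation here is illustrative, not Dennett's own formalism.

```python
# A toy rendering of the four-step procedure; all names here are
# illustrative assumptions, not Dennett's formalism.
def intentional_stance_predict(available_truths, interests, actions, satisfies):
    """Predict what a system will do by treating it as a rational agent.

    Step 1 is implicit in calling this function at all: we have decided
    to treat the system as rational.
    """
    # Step 2: attribute as beliefs the truths the system's experience
    # has made available.
    beliefs = set(available_truths)
    # Step 3: attribute desires for the things that are good for the system.
    desires = set(interests)
    # Step 4: predict an action that would satisfy some desire, given
    # those beliefs.
    for action in actions:
        if any(satisfies(action, desire, beliefs) for desire in desires):
            return action
    return None

# A toy agent facing a red barn that it believes contains food.
prediction = intentional_stance_predict(
    available_truths={"red barn ahead", "barn contains food"},
    interests={"eat"},
    actions=["walk to barn", "walk away"],
    satisfies=lambda a, d, b: (a == "walk to barn" and d == "eat"
                               and "barn contains food" in b),
)
print(prediction)  # -> "walk to barn"
```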
 
Just as the design stance is riskier than the physical stance, the intentional stance is riskier than the design stance. (In some respects, the intentional stance is a subspecies of the design stance, one in which we view the designed object as a rational agent. Rational agents, we might say, are those designed to act rationally.) Despite the risks, however, the intentional stance provides us with useful gains in predictive power. When it comes to certain complicated artifacts and living things, in fact, the predictive success afforded to us by the intentional stance makes it practically indispensable. Dennett likes to use the example of a chess-playing computer to make this point. We can view such a machine in several different ways:
  • as a physical system operating according to the laws of physics;
  • as a designed mechanism consisting of parts with specific functions that interact to produce certain characteristic behavior; or
  • as an intentional system acting rationally relative to a certain set of beliefs and goals.
Given that our goal is to predict and explain a given entity’s behavior, we should adopt the stance that will best allow us to do so. With this in mind, it becomes clear that adopting the intentional stance is for most purposes the most efficient and powerful way (if not the only way) to predict and explain what a well-designed chess-playing computer will do. There are probably hundreds of different computer programs that can be run on a PC in order to convert it into a chess player. Though the computers capable of running these programs have different physical constitutions, and though the programs themselves may be designed in very different ways, the behavior of a computer running such a program can be successfully explained if we think of it as a rational agent who knows how to play chess and who wants to checkmate its opponent’s king. When we take the intentional stance towards the chess-playing computer, we do not have to worry about the details of its physical constitution or the details of its program (i.e., its design). Rather, all we have to do is determine the best legal move that can be made given the current state of the game board. Once we treat the computer as a rational agent with beliefs about the rules and strategies of chess and the locations of the pieces on the game board, plus the desire to win, it follows that the computer will make the best move available to it.
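The sketch below illustrates the point. It uses the third-party python-chess library (pip install chess), and the crude material count standing in for "best move" is purely our assumption; the point is only that the prediction never consults the machine's program or hardware, just its "beliefs" about the board and its "desire" to win.

```python
# A hedged sketch of intentional-stance prediction for a chess computer:
# pick the legal move a rational player would make, ignoring the machine's
# internals entirely. The material count is a crude illustrative stand-in
# for "best move," not Dennett's account.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board, color):
    # Net material balance from `color`'s point of view.
    total = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        total += value if piece.color == color else -value
    return total

def predict_move(board):
    """Treat the machine as wanting to win and believing the board state:
    predict the legal move that best serves that desire."""
    mover = board.turn
    def score(move):
        board.push(move)
        result = 1000 if board.is_checkmate() else material(board, mover)
        board.pop()
        return result
    return max(board.legal_moves, key=score)

# From the opening position all moves tie on material, so the first
# generated legal move is returned.
print(predict_move(chess.Board()))
```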
 
Of course, the intentional stance will not always be useful in explaining the behavior of the chess-playing computer. If the computer suddenly started behaving in a manner inconsistent with what a reasonable chess player would do, we might have to adopt the design stance. In other words, we might have to look at the particular chess-playing algorithm implemented by the computer in order to predict what it will subsequently do. And in cases of more extreme malfunction—for example, if the computer screen were suddenly to go blank and the system were to freeze up—we would have to revert to thinking of it as a physical object to explain its behavior adequately. Usually, however, we can best predict what move the computer is going to make by adopting the intentional stance towards it. We do not come up with our prediction by considering the laws of physics or the design of the computer, but rather, by considering the reasons there are in favor of the various available moves. Making an idealized assumption of optimal rationality, we predict that the computer will do what it rationally ought to do.
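The fallback ordering just described could be caricatured as follows; this is a purely illustrative sketch, and both predicates are hypothetical placeholders for judgments that Dennett leaves informal.

```python
def choose_stance(behaving_rationally, design_known):
    # Prefer the cheapest stance whose assumptions still hold.
    if behaving_rationally:
        return "intentional stance"  # assume rationality; cheapest, riskiest
    if design_known:
        return "design stance"       # consult the implemented algorithm
    return "physical stance"         # extreme malfunction: physics only
```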
 
 
The Intentional Stance, Realism, and Instrumentalism
 
In his writings on the intentional stance, Dennett has often made the controversial further claim that the intentionality of a creature wholly consists in its behavior being well-predicted by our adoption of the intentional stance towards it: "all there is to being a true believer is being a system whose behaviour is reliably predictable via the intentional strategy, and hence all there is to really and truly believing that p (for any proposition p) is being an intentional system for which p occurs as a belief in the best (most predictive) interpretation." (Dennett 1981) Interestingly, however, Dennett claims that his view should be considered a sort of realism about the mind. As he himself notes, this requires a "delicate balancing act on the matter of the observer-relativity of attributions of belief and other intentional states." (Dennett 1987)
 
Typically, a realist about the mental treats beliefs and desires as inner states of a system that cause that system’s behavior. In contrast, an instrumentalist treats beliefs and desires as theoretical posits which we ascribe to various systems when doing so is instrumental to understanding that system’s behavior. These posits, however useful they might be to us, are nonetheless fictions, and thus our ascriptions of beliefs and desires are strictly speaking false according to the instrumentalist.
 
Given Dennett’s suggestion that we should understand beliefs on the model of abstract objects like centers of gravity, he has often been classified as an instrumentalist. But Dennett, who rejects the usual either-or dichotomy of realism and instrumentalism, prefers to classify his view as an in-between position that he calls interpretationism. According to interpretationism, whether a system has a certain belief or desire depends on our imposing a certain interpretation on the system. A statement ascribing a certain belief or desire to an organism is true when the best overall interpretation of that system’s behavior says that the organism has that belief or desire. From the intentional stance, we detect certain patterns that, although partly constituted by our own reactions to them, are objective. But because these real patterns are not wholly determinate, the possibility of interpretive gaps will always remain. Due to such gaps, "there could be two different systems of belief attribution to an individual that differed substantially in what they attributed—even in yielding substantially different predictions of the individual’s future behavior—and yet where no deeper fact of the matter could establish that one was a description of the individual’s real beliefs and the other not." (Dennett 1991)
 
Though interpretationism clearly rejects the "inner-state" view of intentional states that is usually associated with realism, it also rejects the instrumentalist characterization of such states as mere fictions. The patterns detectable by our adoption of the intentional stance are, according to Dennett, real patterns. Beliefs, though they can only be detected once we take the intentional stance towards the believer, are nonetheless objective phenomena. Thus, he considers his view to be a form of realism, albeit a "soft" or "intermediate" one.
 
 
Amy Kind