Trust in Game Theory

Daniel M. Hausman

No doubt men are capable even now of much more unselfish service than they generally render; and the supreme aim of the economist is to discover how this latent social asset can be developed more quickly and turned to account more wisely. (Alfred Marshall, Principles of Economics, p. 8)

The truster sees in his own vulnerability the instrument whereby a trust relationship may be created (Luhmann 1979, p. 43).

People are not always motivated by material self-interest. Although there are sanctions attached to not paying one's taxes, riding public transportation without paying the fare, or ignoring the crying infant one is supposed to be watching, and there are rewards attached to good performance, people don't cheat nearly as much as they would if they were acting purely in pursuit of selfish material interests. To explain this fact (and it is a fact), one has to recognize that other things besides material self-interest motivate people. Some of these other motivations are altruistic. For example, there are babysitters who care for the children they are entrusted with. Some of the other motivations are not altruistic. People might pay their fares on the Tube because of the shame they would feel if they were caught not doing so.[1]

In this talk I shall be concerned with a particular mechanism that motivates people. In a recent essay, Philip Pettit discusses the fact that manifesting one's trust in someone can motivate that person to do what one is trusting them to do. Call this "the trust mechanism." P's making manifest to Q and to others that P expects Q to do X gives Q a reason to do X. Pettit speaks of "the motivating efficacy of manifest reliance" (1995, p. 208). Sometimes this reason appeals to Q's material self-interest. The announcement "I'm counting on you to solve this problem" made by a mafia leader to a flunky provides the flunky with a materially self-interested reason to solve the problem. Some philosophers and sociologists, including Pettit himself, would question whether this is an instance of trust, but I shall not join that argument, because however such cases are classified, they are not the cases I am interested in here. I am concerned with those cases in which by virtue of P's making manifest that P trusts Q to do X, P provides Q with a reason to do X that is additional to or independent of any material sanctions. The fact that my wife expects me to pick up some groceries on the way home provides me with a reason to pick up the groceries even if I am unlikely to be beaten, starved, or berated if I fail to do so.

If Q is loyal or virtuous and the trust that is placed in Q by P is not unreasonable or presumptuous, then Q will have a motive to do what P trusts Q to do, which may complement or compete with Q's material self-interest. For virtuous people like us, knowing that someone is trusting us to act in a particular way can give us a reason to act in that way. Why is this? It certainly seems peculiar. There is much to be said in answer. Sometimes accepting a trust is like making a promise and the reasons for fulfilling the trust are the same as the reasons for keeping a promise. Sometimes fulfilling a trust is demanded by benevolence or loyalty to those who are so related to us that they had reason to place their trust in us. Fulfilling a trust may also be prompted by gratitude, for the person who trusts us sometimes makes us a gift of their good opinion of us, even as they burden us. There is a great deal to be said here, but the question of why trustworthiness is a virtue is not my concern today.

Making known one's trust can also motivate people who are not virtuous. Failing to do what one is trusted to do can damage one's reputation, or, stated less negatively, one can acquire a reputation for loyalty or virtue by fulfilling a trust and thereby mimicking loyalty and virtue.[2] Not only can people who are not virtuous acquire good reputations by doing what they have been trusted to do, but, self-deceivers that people are, they can purchase a good opinion of themselves this way, too. The fact that one is mimicking the acts of those who are virtuous and loyal is essential, because there is nothing particularly flattering in being known to seek the esteem of others. Since mimicry can only succeed if there is something to mimic, this mechanism is parasitic on the existence of some loyal and virtuous people -- or, more accurately, on the shared belief in their existence. Also presupposed is shared admiration of trustworthy behavior. This mechanism will not work in a society in which people find trustworthiness contemptible compared to ruthless self-promotion. Even though making manifest one's trust in others gives them in this way self-interested reason to do what they are trusted to do, the self-interest here is different from the material self-interest that dominates economic models.[3]

During the past two decades quite a lot of work has been done on "trust", and philosophical interest seems recently to have surged. One thing that becomes obvious when one goes through this literature is that the word "trust" means many different things. Depending on whom one reads, trust is an emotion, an environment, a set of beliefs, an encapsulation of self-interest or a counterweight to self-interest. Obviously the operation of the trust mechanism sketched above requires and contributes to an environment of trust, and it affects people's emotions and beliefs. My focus is, however, on the mechanism as a source of motivation that is not materially self-interested.

There are many important questions about the trust mechanism that Pettit does not answer. In particular, one wonders how strong the motivations resulting from the trust mechanism are. Although Pettit has some extremely perceptive things to say about the circumstances in which the trust mechanism will flourish, there is still more to be investigated. To what extent can public policy employ this mechanism or facilitate its private employment by individuals? How dependent is this mechanism on a general concern with one's reputation?

These questions do not arise from mere curiosity. A society in which the trust mechanism is strong is one in which individuals are trusting and trustworthy. Such a society has great normative attractions. We want to feel that our security is guaranteed by the benevolence, conscientiousness or reciprocity of others, not merely by their fear of legal sanctions (Becker 1996, p. 54), and given the bluntness and cost of legal sanctions, it will be easier to achieve cooperation if we are confident of each other's good will and trustworthiness. The normative attractions of a society that relies heavily on the trust mechanism are both intrinsic, since trustworthy behavior at least mimics virtuous behavior, and instrumental, since trust is a wonderful social lubricant and its absence is costly. Consider the costs attached to the collapse of the relatively trivial institution of hitch-hiking.

An environment in which it is easy to trust and in which people are strongly motivated to do what they are trusted to do is not always an unmixed blessing. Trust has a dark side, since high levels of trust within a social group are often purchased at the cost of fear and hatred of outsiders. Material self-interest looks pretty good when compared to rabid racism or nationalism. But fellow feeling, trustworthiness, and the possibility of being trusting are in themselves good things, even when they arise from xenophobia and have bad consequences; and a life governed by the pursuit of material self-interest is mean and unlovely. No doubt sanctions addressed to narrow self-interest will always be very powerful, but both private interactions and public policy can sometimes make use of other motivations (including the trust mechanism of concern to me here). When this is possible, there may be gains not only in short-run efficiency but also in nurturing attractive forms of life.[4]

A couple of paragraphs ago, I asked some difficult questions concerning the importance of the trust mechanism, the circumstances in which it will function well, and the possibilities of nurturing it. These questions demand hard thinking about institutional design. As Pettit has put it, "If we are not clear about the good reasons why people might trust one another, we are in danger of designing institutions that will reduce trust or even drive it out" (1995, p. 202). An environment in which people can readily trust one another is a public good, and rational individuals may underinvest in trust formation. My being trustworthy and placing trust in others brings benefits not only to me and to those who have placed their trust in me, but also to others (Putnam 1993, p. 170). The uncoordinated endeavors of individuals -- even individuals with some measure of altruism -- cannot be counted on to bring about the optimal circumstances for the placing and rewarding of trust. Furthermore, the circumstances that facilitate trusting are peculiar. Rather than being depleted through use, they improve as they are employed (Hirschman 1985, Gambetta 1988a), and they can be damaged by factors that in other ways increase social welfare (Coleman 1990). The less individuals have to call on one another for aid -- that is, the more they are able to take care of themselves or to pay for the assistance they need -- the fewer the opportunities to display trustworthiness. But trust rarely thrives in circumstances of destitution either. Policies to create an environment in which it is easy to trust others are difficult to devise and implement. Such an environment is often a byproduct of activities that call for little trust (Gambetta 1988a, p. 225). It is also not clear whether the trust mechanism is robust to large-scale attempts to make use of it.

How can these questions be tackled? In this talk I want to explore how game theory might help.

1 Rational choice theory and material self-interest

Rational choice theory and game theory, like folk psychology, explain actions in terms of beliefs and preferences. Preferences are assumed to have at least the structure of a weak ordering, and preferences determine choices in the sense that individuals choose whatever feasible alternative they most prefer.
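As a minimal illustration of this formal core, here is a sketch of my own (in Python, with hypothetical alternatives): a preference ordering represented by a numerical index, and a choice function that selects a most-preferred feasible alternative.

```python
# A minimal sketch of the formal core of rational choice theory: a weak
# ordering over alternatives (represented here by a numerical index) and
# a choice function that picks a most-preferred feasible option.

def choose(feasible, rank):
    """Return a most-preferred alternative in the feasible set."""
    return max(feasible, key=rank)

# Hypothetical agent: prefers more money to less.
rank = {"$100": 2, "$10": 1, "nothing": 0}
print(choose(["$10", "nothing"], rank.get))          # '$10'
print(choose(["$100", "$10", "nothing"], rank.get))  # '$100'
```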

By themselves the axioms of ordinal or even expected utility theory place no limits on how people can choose. If beliefs and preferences change radically from second to second and there are no limits on what aspects of a choice are relevant, no behavior could violate the axioms. If I prefer $100 to $10 yet prefer a bet that pays off $10 if nobody falls asleep during this seminar to an otherwise identical bet that pays off $100 if nobody falls asleep, I may at that moment prefer less money to more, or I may falsely believe that ten is greater than one hundred, or I may have an aversion to winning money from gambling.

The theory begins to have some content if one supposes that beliefs and preferences are stable, so that my belief that one hundred is larger than ten, and my general preference for more money rather than less persist when I am asked my preferences between the two gambles, and if one supposes that my preference among the gambles does not depend on things such as the number of letters in the last name of the person asking me. Rational choice theory will still have little content without information about the beliefs and preferences of the agents whose behavior we seek to explain, predict, or evaluate. It would, however, be arduous and problematic to gather detailed information about the preferences of agents, and economists often simplify by assuming that individuals want more commodities and more money and that they don't care about anything else. I call game theory and rational choice theory that includes this assumption "vulgar."

Vulgarity is not the worst of sins, and sometimes it is called for. To assume that individuals want nothing besides more wealth or more commodities is a reasonable simplification for many purposes. It would, however, be absurd to be attached to it and to condemn on methodological grounds attempts to model human behavior that admit other motivations. Money and commodities are, after all, prototypical means: the whole point is to consume the goods and services we acquire.

Economists are often critical of sociological models that portray individuals as motivated by concerns for status, love, the welfare of others, a striving for meaning, and so forth on the ground that these motivational claims are sentimental falsehoods -- that in reality individuals are motivated by their material self-interest. Particular sociological models may, of course, romanticize human motivation, but it is perverse to base a general critique of sociological models on the premise that individuals are materially self-interested. Rational individuals are only materially self-interested insofar as more wealth and commodities serve their ultimate ends. Unless the (unlikely) case can be made that material self-interest only rarely fails to serve people's ultimate ends, it will sometimes be the case that people will sacrifice wealth for more fundamental goods such as enjoyment, emotional security, the happiness of their children, respect of others, self-respect, a sense of accomplishment, or whatever. The most one can defend is the view that given the range of ultimate goods that people value, it turns out in significant domains that they typically pursue their material self-interest. Such a view could not possibly ground a general critique of models that portray people as pursuing things other than material self-interest and sometimes sacrificing their material self-interest to those other things.

All this is obvious enough, but it is equally obvious that many economists are uncomfortable with models in which agents pursue self-respect, reputation, or altruistic ends. One good reason for this discomfort is that if one relaxes the constraints and allows agents in a rational choice or game theoretic model to care about many things, then the model may become empty. Consider, for example, a well-known anomaly for any rational choice model that supposes material self-interest, such as leaving a tip in a restaurant one never expects to return to. By supposing that individuals prefer to leave tips, one can make the anomaly disappear. But is the result a better theory or a worse one? One has eliminated the anomaly, but one has also lessened the content. It is not unreasonable to complain that such an "explanation" of tipping may be worse than no explanation at all. This is, however, an argument for constraints on how additional motives are admitted into one's models, not an argument against admitting any.

2 Game theory and individual motivation

The question of how to constrain individual motivations arises in a distinctive way in game theory. Suppose we bring subjects into the laboratory and ask them to play a simple one-shot prisoner's dilemma such as:

                    B
                C         D
     A    C   (2,2)     (0,3)
          D   (3,0)     (1,1)

(The first number in each cell is the payoff to A, the second the payoff to B.)

(I will adopt the convention of treating player A as feminine and player B as masculine, and I shall say that a player who chooses C "cooperates" and that one who chooses D "defects." Strategies are, of course, complete specifications of how to act given what the opponent does, not single choices, but it would be pedantic to draw out the distinctions in detail.) If individuals care only about their own monetary payoffs, they both have a dominant strategy, and both should defect (a small sketch verifying this claim follows the list below). Yet experimental subjects often cooperate. Why? There are three kinds of explanation.

  1. Blunder. People may misread, misspeak, etc.
  2. Odd beliefs. Subjects who believe in magic or who accept fallacious arguments may believe that their cooperating makes it more likely that their opponent will cooperate too. Subjects might also envision future interactions with other subjects or with the experimenters and thus believe that they are playing a game with a more complicated structure than is represented here.
  3. Other motives. Subjects may care about other things in addition to their own monetary payoffs. They may be altruists. They may be governed by concerns about fairness. They may be concerned about what the experimenters will think of them.[5]
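First, the dominance claim itself. Here is a minimal sketch in Python, assuming the payoff matrix above, which checks that D strictly dominates C for a player who cares only about her own monetary payoff:

```python
# The one-shot prisoner's dilemma above: (A's move, B's move) -> (A's $, B's $).
PAYOFFS = {
    ("C", "C"): (2, 2), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}

def strictly_dominates(player, s1, s2):
    """True if s1 gives `player` a strictly higher monetary payoff than s2
    against every possible move by the opponent."""
    idx = 0 if player == "A" else 1
    def pay(own, other):
        profile = (own, other) if player == "A" else (other, own)
        return PAYOFFS[profile][idx]
    return all(pay(s1, t) > pay(s2, t) for t in ("C", "D"))

print(strictly_dominates("A", "D", "C"))  # True: defection dominates for A
print(strictly_dominates("B", "D", "C"))  # True: and likewise for B
```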

All of these explanations are sometimes correct, though, like most commentators, I suspect that the last explanation is the most important. It highlights the fact that one hasn't specified what game people are playing until one specifies the players' preferences and perspective. There is nothing in game theory itself that leads one to be vulgar about people's preferences. To the contrary, the attempt to explain what people actually do leads one to recognize a wide variation in people's motives.

Some would deny that economists need to think about motives and indeed that doing so is a scientific mistake. Motives are constructed out of choices and do not explain choices. Ken Binmore, for example, writes,

...let me repeat that it is regarded as a fallacy within modern utility theory to argue that action b is chosen instead of action a because the former yields a higher payoff....

Modern utility theory makes a tautology of the fact that action b will be chosen rather than a when the former yields a higher payoff by defining the payoff of b to be larger than the payoff of a if b is chosen when a is available. (1994, p. 169).

It seems that Binmore is contentiously identifying "modern utility theory" with revealed-preference theory. This is not the occasion to revisit the controversies concerning revealed-preference theory, but I shall comment briefly on its application to game theory. Suppose in the above game that A cooperates. Binmore would have us conclude that she is not playing a prisoner's dilemma game, because, if she were, she wouldn't cooperate. It is, in Binmore's view, a logical contradiction to say that someone played a strictly dominated strategy. The first point to notice is that by methodological fiat blunder has become impossible. It is consequently impossible to know what game people are playing until after they have made their choices.

A second and more devastating problem is that it is impossible even then to know what game has been played, because choice in fact fails to reveal preference. To make inferences about preferences from choices, one must hold some view concerning the agent's beliefs. B may cooperate because he prefers less wealth to more wealth, or he might have the opposite preferences and cooperate because he thinks A will cooperate if and only if he cooperates. How can one tell what B prefers and what game B is playing? To determine from the choice of C whether B prefers more wealth to less or less to more, one needs to know what B believes, and to determine from the choice what B believes, one needs to know what B prefers. A revealed-preference theorist could suppose that beliefs are somehow just "given", but it would be a peculiar empiricism that admits subjective states of belief, which explain choices, and sticks at subjective states of preference. If the individual's preference ranking of monetary payoffs were not a mental state, how could it translate into different actions given different beliefs?

The only consistent move for the revealed-preference theorist is to eschew the explanation of choice altogether. One cannot ask why people play as they do without attributing to them subjective states of belief and preference. Of course not everybody is or should be interested in why people play as they do, but such questions should not be ruled out of economics. I don't think that Binmore himself wants to rule them out. Nor, by the way, is Binmore consistent in insisting that preference is constructed from choice. He is too sensible. When commenting on experimental results involving high levels of cooperation, he writes, "But how much attention should we pay to experiments that tell us how inexperienced people behave when placed in situations with which they are unfamiliar, and in which the incentives for thinking things through carefully are negligible or absent altogether?" (1994, p. 184). These remarks suggest that Binmore believes that choice sometimes reveals faulty thinking about unfamiliar circumstances rather than preference.

It is possible to know what game people are playing before knowing how they play it. But knowing the permissible strategies and monetary payoffs is obviously not enough. Game theorists, like rational choice theorists generally, also need to know what the players believe and what they prefer.

3 Disentangling motives to cooperate

Sometimes there is a great deal of cooperation when there is more wealth to be had by defecting, sometimes there is very little. There is also a domain of behavior where what people do has a much more significant effect on the well-being of others than on their own wealth. At the end of the day I'm unlikely to be richer or poorer for giving good directions to a lost traveller, surrendering my seat on the bus to an invalid, or giving a student good advice about what to do after graduation. Shouldn't economists be concerned with the great variety of motives besides material self-interest?

One might reasonably object that there is no reason why the same wide variety of motives should be important in all realms of society and that progress in explaining and predicting behavior will come from slicing off domains in which only a few motives are significant. A general theory of human motivation that will succeed in all applications is too much to hope for. Why then can't economists limit themselves to those fields in which the pursuit of selfish material interest predominates?

Sometimes economists can, but they do not want to be confined to such a small domain, and even when the pursuit of material self-interest predominates, other motives may be significant. As many have noted, economies would collapse if individuals told the truth, kept their promises, and fulfilled the implicit terms of incomplete contracts only when it was in their material self-interest to do so. Economists should also be concerned about what happens at the boundaries of the domains in which the pursuit of material self-interest predominates.[6]

To what extent, then, can one make use of the apparatus of game theory to understand the many reasons why people are bloody-minded pigs in some circumstances and kind and trustworthy cooperators in others, and, in particular, to what extent can game theory help one to identify the role of the trust mechanism in this complicated story? One answer is that game theory cannot help with these problems at all, because game theory takes for granted people's beliefs and preferences and the extent to which people are rational; and so game theory cannot help one to understand the blunders, odd beliefs, or particular preferences that lead to cooperation in a one-shot prisoner's dilemma. But this answer is too pessimistic. At the very least game theory can be a powerful diagnostic tool to help disentangle the different motives that may encourage cooperation. The apparatus of game theory might also be extended to model the dependence of preference on strategies and beliefs.

It might appear impossible ever to disentangle the factors, even in very simple games, that influence how people choose. In particular it might seem impossible to determine how significant a role the trust mechanism is playing. So let me sketch a program of experimental research that might determine how important trust is. What follows does not answer the questions I raised about the trust mechanism. It only sketches a program of research whose results might contribute to answering them. Some relevant work has been done (Good 1988), especially by psychologists, but that work does not always attend to the strategic complexities to which game theory makes one sensitive. Here is how I propose to proceed.

First one needs to distinguish those cases in which "blunder" and "odd beliefs" about the structure of the game explain cooperation in a prisoner's dilemma or similar game. This is obviously not easy to do, but patient testing of how well subjects understand the game and how they support their choices will fallibly accomplish this. The cooperation that remains must be explained in terms of the preferences of the subjects. I do not believe that one can explain how people choose if one supposes that their preferences depend only on the material payoffs. If asked whether to accept a gift of $3 for themselves and $7 for someone else or to get nothing, far fewer subjects will turn down the gift than will refuse such an offer in an ultimatum game. Whenever different preferences attach to the same set of material payoffs, something besides the payoff must matter. The framework of game theory enables one to categorize the features that may influence preferences:

  1. The "material" payoffs. Even if, as I believe, people care about other things, too, most presumably care about the outcome for themselves. Altruistic, fair-minded, or malevolent players also care about the pay-offs to the other players.
  2. The strategies and rules. Players may care not only about how much money they wind up with, but about how they got it. Preferences may depend on both the set of permissible strategies and on what strategies are actually chosen. Some of these preferences may be bizarre: faced with a choice between playing "left" or "right" an individual may choose "right" out of a horror of anything associated with communism. But there may be system and rationale governing preferences for strategies, too. For example, as Peter Diamond pointed out, people may prefer to flip a coin to determine which of two individuals receives a benefit rather than directly choosing whom to benefit.
  3. The other players. Players may care about who their opponents are, what the other players prefer, what strategies the other players adopt, and what the other players believe about oneself. I may play very differently against MT if MT is Mother Theresa than if she is Margaret Thatcher. I have different expectations about how Mother Theresa and Margaret Thatcher will play, and I may care differently about what payoffs they receive, and faced with a choice between larger and smaller payoffs for both me and my opponent, I might choose the smaller to spite the other or to ensure that my performance relative to theirs is better. Recently Geanakoplos, Pearce and Stacchetti (1989) and Rabin (1993) have explored ways of making payoffs depend on beliefs about the other players.
  4. Concepts and initial expectations. Players may make surmises about what others will do on the basis of general social norms and expectations rather than on the basis of beliefs about the particular individuals they are playing, and they may respond to the moves of others very differently depending on what their initial expectations are. In a prisoner's dilemma (unlike the one above) in which the payoff from mutual cooperation is only slightly less than the maximum payoff from defecting but in which it is very costly to be a sucker, rates of cooperation might be highly sensitive to expectations about what others will do.

In the one-shot prisoner's dilemma above, for example, the utility of the mutual cooperation outcome for B may depend on the dollar amounts, what strategies A and B played and could have played, what B believes about A's identity, motives, and beliefs, and on B's concepts and initial expectations. To isolate the influence of the trust mechanism, one is (obviously) going to have to consider the effects of other factors.
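As a toy illustration of this dependence (my own construction, not a model from the text), here is a sketch in which B's weight on A's payoff rises when A is observed to cooperate. With these purely illustrative numbers, reciprocal altruism alone reverses B's ranking of defection over cooperation after A has cooperated:

```python
# A toy utility for B: own money plus a weight on A's money, where the
# weight grows if A is observed to cooperate (reciprocal altruism).
# All numbers are illustrative assumptions, not estimates.
def utility_B(own, other, a_move=None, base_altruism=0.1, reciprocal_bonus=0.6):
    weight = base_altruism + (reciprocal_bonus if a_move == "C" else 0.0)
    return own + weight * other

# After A cooperates, B compares cooperating, (2,2), with defecting, (0,3):
print(utility_B(2, 2, a_move="C"))  # 3.4 -- mutual cooperation now ranks higher
print(utility_B(3, 0, a_move="C"))  # 3.0 -- than defecting on a cooperator
```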

The general strategy for identifying what motives are relevant and for determining their strength is in principle very simple. To investigate why people cooperate, study how cooperation is influenced by variations in the games. For example, suppose one hypothesized that people cooperate in the above one-shot prisoner's dilemma because they care only about the payoff for the other player. In that case cooperation is, of course, a dominant strategy. This hypothesis could be tested by offering subjects in the role of A a choice between the top and bottom rows in just one column of the payoff matrix of the prisoner's dilemma. B is perfectly passive. If one finds that the rates of altruistic choice here match the rates of cooperation in the one-shot prisoner's dilemma, then one has confirmed this altruism hypothesis. If (as I would predict) one finds lower rates of altruistic choice here, then one has powerful reason to believe that something other than or additional to altruism is driving cooperation in the one-shot prisoner's dilemma.
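One way to make such a comparison precise is a simple two-proportion test between the cooperation rate in the one-shot prisoner's dilemma and the rate of the generous row choice in the single-column control. The counts below are hypothetical placeholders, not data:

```python
# A sketch of the statistical comparison: two-proportion z-test, hand-rolled.
from math import sqrt, erf

def two_proportion_z(c1, n1, c2, n2):
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical counts: 40/100 cooperate in the PD; 15/100 choose generously
# when B is passive. A significant gap tells against the pure-altruism hypothesis.
print(two_proportion_z(40, 100, 15, 100))
```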

There are many models that use altruism to explain apparent cooperation, especially in more complicated games. I think that game theorists overemphasize altruism, both because it is relatively simple to model, and because economists are happier taking preferences as given than they are tackling the problems of explaining preferences. Moreover in more complicated games like the centipede game or an iterated prisoner's dilemma, A's uncertainty about whether B is an altruist (or whether B believes A is an altruist and so forth) can lead non-altruists to play quite cooperative strategies. Although I admire the ingenuity of this work, I don't think that the explanations offered are entirely satisfactory.

Consider, for example, the fine paper by McKelvey and Palfrey (1992), which shows that the pattern of play in a simplified version of the centipede game is consistent with the strategic exploitation of the existence of a small percentage of pure altruists who always make the generous move. Here is one specific game they study:

    A --pass--> B --pass--> A --pass--> B --pass--> (6.40, 1.60)
    |           |           |           |
   take        take        take        take
    |           |           |           |
 (0.40, 0.10) (0.20, 0.80) (1.60, 0.40) (0.80, 3.20)

On each move a player can take the larger of two piles of money, or he or she can pass. If the player passes, both piles are doubled and the other player gets to choose. Each player gets to choose twice. The first number represents the payoff to A, and the second represents the payoff to B. Since A does not know whether B is an "altruist" who will pass both times, low rates of altruism are consistent with high rates of cooperative play by rational players. If about 5% of the players are "altruists" and the varying estimates of this percentage by the players themselves are on average correct, then one can explain why more than 90% of the A players pass on their first move, nearly three-quarters of the B players pass on their first move, and so forth.
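The following simplified sketch conveys the flavor of the calculation, though it is emphatically not McKelvey and Palfrey's model: it assumes that a fraction q of players are pure altruists who always pass, that selfish players take at every later node, and asks whether a selfish mover should pass now. In the full analysis selfish players may also pass early to mimic altruists, which is why 5% altruism can sustain far higher pass rates than this naive cutoff suggests.

```python
# Payoffs reconstructed from the text: (mover, A's $ if taken, B's $ if taken).
NODES = [
    ("A", 0.40, 0.10), ("B", 0.20, 0.80),
    ("A", 1.60, 0.40), ("B", 0.80, 3.20),
]
FINAL = (6.40, 1.60)  # payoffs (A, B) if B passes at the last node

def selfish_pass(i, q):
    """Should a selfish mover pass at node i, assuming altruists (fraction q)
    always pass and selfish players take at every later node?"""
    own = 1 if NODES[i][0] == "A" else 2
    take_now = NODES[i][own]
    if i == 3:                      # B's last move: passing ends the game
        return FINAL[1] > take_now  # 1.60 > 3.20 is False
    # If the opponent is an altruist, she passes and I take at node i+2
    # (or the game ends at FINAL); if selfish, she takes at node i+1.
    if_altruist = FINAL[own - 1] if i + 2 > 3 else NODES[i + 2][own]
    if_selfish = NODES[i + 1][own]
    return q * if_altruist + (1 - q) * if_selfish > take_now

for q in (0.05, 0.20):
    print(q, [selfish_pass(i, q) for i in range(4)])
# 0.05 -> all False: under these naive assumptions 5% altruism is too little.
# 0.20 -> pass at the first three nodes. The mimicry by selfish players that
# this sketch omits is what lets 5% do the work in the full analysis.
```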

McKelvey and Palfrey's analysis is elegant, and given the inequality in the payoffs, it seems to me that factors that may explain cooperation in other circumstances, such as a concern with fairness, probably play a relatively small role here. So the claim that the high rates of passing on early moves are due to strategic exploitation of uncertainty about whether players will pass later is certainly plausible. One should, however, be cautious about extrapolating this explanation to other games where other factors might be involved.[7] There are also questions about how to interpret their results. Should B's passing a second time be explained in terms of exogenous altruistic preferences, or is the rate of passing influenced by features of the game? Given a single isolated choice between $3.20 for oneself and $.80 for the other or $1.60 for oneself and $6.40 for the other (who gets to make no choices), what percentage of subjects would sacrifice $1.60 in order to confer a $5.60 benefit on a stranger? If it is about 5%, then the centipede game shows how cooperative play can arise from strategic exploitation of uncertainty concerning exogenously given preferences. If it is larger or smaller than 5%, then the extent of "altruism" is not exogenous. B's last move depends not only on the monetary outcomes for A and B, but also on the fact that B got to make this choice in this game.

4 Measuring the influence of the trust mechanism

Although the trust mechanism might plausibly have a role in the centipede game, where passing could be construed as an announcement of trust, the trust mechanism can play no role in a one-shot prisoner's dilemma.[8] So to investigate the trust mechanism, one needs to look at some other games. Consider what happens if one modifies the one-shot prisoner's dilemma above by having A move first with the knowledge that what she chooses will be common knowledge before B moves. If the players are rational and materially self-interested or if the motivation to cooperate is entirely altruistic, the rate of cooperation by A and B should be unaffected. There is evidence, however, that the rate of cooperation increases on the part of both players, with a more significant increase in the rate of cooperation of the second player. The only study I know of (Wrightsman 1966) does not directly compare sequential and simultaneous versions. In a prisoner's dilemma similar to the one presented above, Wrightsman finds that more than one-third of those who move first cooperate,[9] that two-thirds of the players respond to cooperation with cooperation, and that less than one-seventh of the players cooperate in response to defection.[10] Many second movers seem to be playing a strategy that is unavailable in an ordinary prisoner's dilemma: cooperate in response to cooperation and defect in response to defection. In a simultaneous version of Wrightsman's game probably about one-quarter of the moves would be cooperative.
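For orientation, here is a back-of-the-envelope computation (rates rounded from the figures just cited) of the overall second-mover cooperation rate those conditional rates imply:

```python
# Wrightsman's approximate conditional rates for second movers.
p_first_coop = 0.35     # "more than one-third" of first movers cooperate
p_coop_after_C = 2 / 3  # cooperation in response to cooperation
p_coop_after_D = 1 / 7  # cooperation in response to defection

p_second_coop = (p_first_coop * p_coop_after_C
                 + (1 - p_first_coop) * p_coop_after_D)
print(round(p_second_coop, 2))  # ~0.33, versus roughly 0.25 conjectured
                                # for a simultaneous version of the game
```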

Why might A cooperate more frequently in the non-simultaneous game? I am, of course, interested in the explanation in terms of the trust mechanism. By cooperating A is making manifest to B that she trusts him, and she expects her doing so to motivate him to reciprocate. But this trust mechanism is, of course, not the only possible explanation. A might have different preferences for the strategies of cooperating versus defecting when she goes first (which show up as different preferences for the monetary payoffs), or her beliefs about what B believes and how he will choose might be different. One could investigate how significant a factor the desire not to appear selfish might be by comparing rates of "cooperation" by agents in degenerate prisoner's dilemma games where players choose between the rows of single columns of the payoff matrix of the prisoner's dilemma game above.

I conjecture that the more important explanation for why the first player cooperates more often in the sequential one-shot prisoner's dilemma than in the ordinary prisoner's dilemma is that A thinks that B is more likely to cooperate in response. One thing speaking in favor of this conjecture is that such a belief is true. When A cooperates, then B cooperates much more frequently than in the regular prisoner's dilemma and much more frequently than if A defects. Let us then shift our attention to B. Why would he be more likely to cooperate after A has cooperated and less likely to cooperate after she has defected?

I can think of four motivations that explain these results:

  1. Sucker avoidance. If A plays "C", B no longer faces a risk of getting the 0 payoff, of losing to A, or of suffering the shame of being a sucker.
  2. Fairness and reciprocal altruism. B wants to behave fairly and to reward kindness with kindness. After A plays C, it is unfair to play D; after she plays D, it is fair. A's "cooperating" could lead B to think that she is a nice person and to feel more altruistic toward her (but, as Matthew Rabin pointed out to me, B might worry that A is in fact the sort of person who would defect if she were in his place and that she is playing "C" strategically in order to elicit cooperation from him). In the ordinary prisoner's dilemma, in which B does not know how A plays, none of these issues arise, and defection does not seem unfair.
  3. Shame avoidance. B does not want to behave worse than A behaves. By defecting after she cooperates, he would be embarrassed, but there is nothing embarrassing about repaying defection with defection. Defecting in an ordinary prisoner's dilemma in contrast can be justified by fear that A would defect.
  4. Trustworthiness. B takes A's cooperation as an announcement "I trust you" and is responding to this overture.

How can we determine the importance of the last explanation in terms of the trust mechanism? The significance of each of these possible explanations might be investigated as follows:

1. If sucker avoidance is part of the explanation for the increased cooperation, then one should find that when faced with a single choice between the rows of only the first column of the prisoner's dilemma payoff matrix, people choose (2,2) more frequently than they cooperate when playing an ordinary prisoner's dilemma and that when faced with a choice between the rows of the second column, they choose (0,3) less frequently than they choose to cooperate in the prisoner's dilemma. I would hypothesize that only the second of these two consequences will be found in the laboratory. But there is still the issue of feeling like a sucker. One might get some evidence on the strength of this factor by exploring the consequences of labelling the (0,3) payoff as the sucker's payoff.

2. I've already commented on altruism as an explanation. If what explained cooperation were unconditional altruistic concern for the other, then the rates of cooperation in the simultaneous and sequential one-shot prisoner's dilemma would be just the same. But people can feel more kindly toward those who they believe to be kind, and despite suspicions that initial cooperators might be cynical tacticians rather than kind souls, altruism could be enhanced.

This version of the second explanation, like explanation 3 in terms of avoiding embarrassment, can be tested by considering a 3-person game in which B's payoffs depend on how A plays, A's payoffs depend on how F plays, and F's payoffs depend on how B plays.

The three payoffs are, respectively, to F, A, and B (see the sketch following this paragraph). Defect regardless of what others do is still a dominant strategy for all of the players, but it doesn't matter to B's payoff how F plays, to F's payoff how A plays, or to A's payoff how B plays. Suppose F moves first and plays "C" and then A and B play simultaneously. If what is driving B's greater rate of cooperation in the two-person game when A opens by cooperating is a desire to be fair or not to look bad compared to the person who has cooperated, then one should find a comparable increase in the rate of cooperation here. As far as I know, nobody has done the experiment.
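Since the payoff table for this game is not reproduced here, the following sketch is a hedged reconstruction under one natural assumption: each player receives the two-person payoff evaluated at his or her own move and the move of the single player on whom his or her payoff depends (B's on A, A's on F, F's on B).

```python
# Own payoff in the two-person dilemma as a function of (own move, other's move).
PD = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

def payoffs(f, a, b):
    """Payoffs (F, A, B): F's depends on B's move, A's on F's, B's on A's."""
    return (PD[(f, b)], PD[(a, f)], PD[(b, a)])

# D remains strictly dominant for each player, and defection harms only the
# third party: B's defection lowers F's payoff while leaving A's unchanged.
print(payoffs("C", "C", "C"))  # (2, 2, 2)
print(payoffs("C", "C", "D"))  # (0, 2, 3)
```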

4. How can one determine how important the trust mechanism is in explaining the increased cooperation in the sequential prisoner's dilemma? The fact that many subjects in Wrightsman's experiment mentioned "trust" provides some prima facie reason to think that this mechanism might be significant. One thing that can be done easily is to consider the effect of making considerations of trust more or less salient. For example, the game could be presented in extensive form with A's choices labelled "trust" or "play it safe," and B's choices (in the circumstances in which A "trusts") as "fulfill the trust" or "betray the trust."

More interestingly, one could contrast the sequential game described above, in which A knows that her move will be common knowledge before B moves, with a different sequential game in which A is not told that B will know how she moved before he gets to move. After A moves, B is told her move and is told that A was not informed that her move would be announced to him before he moved. Reciprocal altruism should have a more pronounced effect than in the sequential game with common knowledge of A's move, because A's initial cooperation cannot be strategic. Sucker avoidance should have exactly the same effect on B. The effect of shame avoidance can be held constant if B is told that A will learn later that B made his choice after knowing what A chose. But without it being common knowledge that B knows what A chose before he chooses, there is no way for A to make a trusting overture to him. In this way I think one might be able to learn something about the comparative importance of the trust mechanism.
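To summarize the design (my own tabulation of the reasoning above), one can difference the two sequential conditions to isolate the trust mechanism. Since reciprocal altruism is predicted to be stronger without common knowledge, the difference is if anything a conservative estimate of the trust mechanism's effect:

```python
# Motives that can push B toward cooperation after A cooperates, by condition.
SEQUENTIAL_NO_CK = {"sucker avoidance", "reciprocal altruism", "shame avoidance"}
SEQUENTIAL_CK = SEQUENTIAL_NO_CK | {"trust mechanism"}

print(SEQUENTIAL_CK - SEQUENTIAL_NO_CK)  # {'trust mechanism'}
```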

5 Conclusions

I have some research funds and some friends with experience in economic experimentation, and I plan on carrying out the experiments sketched above. I expect to confirm the results suggested by Wrightsman's study and to find that the level of cooperation of B in the sequential game without common knowledge in response to cooperation by A will be between the level of cooperation in the ordinary prisoner's dilemma and the level of cooperation in the sequential prisoner's dilemma with common knowledge. I expect that making it possible for the trust mechanism to operate will make a significant difference to the results. Needless to say I may find out that these predictions are all wrong.

However it turns out, such an investigation is likely to underestimate the importance of the trust mechanism, because of the controls that make it impossible for the actions of individuals to influence their reputations. However necessary these controls are to make definite what game individuals are playing, they are problematic if one is investigating trust, for concerns about reputation are (I conjecture) among the most important motivations for trustworthy behavior. When Oliver Twist is kidnapped by Nancy and Sikes, he is more distressed by the thought that his benefactor Mr. Brownlow will think he has stolen his books and his money than by the loss of his opportunity for a better life.

"Oh, pray send them back; send him back the books and money. Keep me here all my life long; but pray, pray send them back. He'll think I stole them; the old lady--all of them who were so kind to me will think I stole them. Oh, do have mercy upon me, and send them back!" (1838, p. 152)

To incorporate reputation, one will have to consider much more complicated games.

Many more investigations can be carried out. How is the influence of trust affected by the size of the payoffs? What would happen if the encounters (although still among anonymous strangers) were face-to-face? What effect do the culture and upbringing of the subject have? How powerful might examples of trust and distrust be? And so forth. But the results of even these primitive experiments could have implications for policy. If the experiments show that manifesting trust matters significantly, they imply that one can facilitate cooperation by making it possible for it to be common knowledge that people are counting on one another. Citizens who know that others are counting on them and that the others acted as they did in part because they knew that their fellow citizens would know that they were counting on them can be motivated to reward this trust. If opportunities for placing and rewarding trust are widely available, especially in contexts where the costs of misplaced trust are initially low, one can build a fabric of good will and mutual confidence that facilitates cooperation and that is a good thing in itself.

References

  • Bacharach, Michael and Susan Hurley, eds. 1991. Foundations of Decision Theory: Issues and Advances. Oxford: Blackwell.
  • Becker, Lawrence. 1996. "Trust as Noncognitive Security about Motives." Ethics 107: 43-61.
  • Binmore, Ken. 1994. Playing Fair. Cambridge, MA: MIT Press.
  • Coleman, James. 1990. Foundations of Social Theory. Cambridge: Harvard University Press.
  • Dickens, Charles. 1838. Oliver Twist. rpt. New York: New American Library, 1961.
  • Deutsch, Morton. 1958. "Trust and Suspicion." Journal of Conflict Resolution 2: 265-79.
  • Gambetta, Diego. 1988a. "Can We Trust Trust?" pp. 213-37 of Gambetta, ed. (1988b).
  • Gambetta, Diego, ed. 1988b. Trust: Making and Breaking Cooperative Relations. Oxford: Blackwell.
  • Geanakoplos, John, David Pearce, and Ennio Stacchetti. 1989. "Psychological Games and Sequential Rationality." Games and Economic Behavior 1: 60-79.
  • Gilbert, Margaret. 1990. "Walking Together: A Paradigmatic Social Phenomenon," in Midwest Studies in Philosophy, vol. 15 The Philosophy of the Human Sciences. Eds.: Peter French, Theodore Uehling and Howard Wettstein. Notre Dame, IN: University of Notre Dame Press, pp. 1-14.
  • Good, David. 1988. "Individuals, Interpersonal Relations, and Trust," pp. 31-48 of Gambetta, ed. (1988b).
  • Hardin, Russell. 1991. "Trusting Persons, Trusting Institutions," pp. 185-209 of R. Zeckhauser, ed. Strategy and Choice. Cambridge, MA: MIT Press.
  • _____. 1993. "The Street-Level Epistemology of Trust." Politics and Society 21: 505-29.
  • Hirschman, Albert. 1985. "Against Parsimony: Three Easy Ways of Complicating Some Categories of Economic Discourse." Economics and Philosophy 1: 7-21.
  • Holton, Richard. 1994. "Deciding to Trust, Coming to Believe." Australasian Journal of Philosophy 72: 63-76.
  • Jones, Karen. 1996. "Trust as an Affective Attitude." Ethics 107: 4-25.
  • Luhmann, Niklas. 1979. Trust and Power. Chichester: Wiley.
  • McKean, R. 1975. "Economics of Trust, Altruism and Corporate Responsibility," in E. S. Phelps (ed.) Altruism, Morality and Economic Theory. New York: Russell Sage Foundation.
  • McKelvey, Richard and Thomas Palfrey. 1992. "An Experimental Study of the Centipede Game." Econometrica 60: 803-36.
  • Marshall, Alfred. 1930. Principles of Economics. London: Macmillan.
  • Pettit, Philip. 1995. "The Cunning of Trust." Philosophy and Public Affairs 24: 202-25.
  • Putnam, Robert D. 1993. Making Democracy Work: Civic Traditions in Modern Italy. Princeton: Princeton University Press.
  • Rabin, Matthew. 1993. "Incorporating Fairness into Game Theory and Economics," American Economic Review 83: 1281-1302.
  • Rhoads, Steven. 1985. "Do Economists Overemphasize Monetary Benefits?" Public Administration Review 45: 815-20.
  • Rowthorn, Bob. 1996.
  • Tuomela, Raimo. 1995. The Importance of Us: A Philosophical Study of Basic Social Notions. Stanford: Stanford University Press.
  • Wrightsman, L. 1966. "Personality and Attitudinal Correlates of Trusting and Trustworthiness in a Two-Person Game." Journal of Personality and Social Psychology 4: 328-32.

[1]Consider these comments from a volunteer worker in the Charlottesville 'meals on wheels' program: "you get so attached to the people on your route. As soon as I started it, I got hooked. Now I really look forward to it." "I have 18 stops to make and everybody wants to talk. Often, I'm the only person who stops by. If you could just see how grateful they are, you'd know why I've been doing this for two years" (quoted in Rhoads 1985, p. 818). This is not the place to settle the definition of altruism, but it is clear that there are powerful non-altruistic motivations at work here.

[2]To fulfill a trust out of a prudential regard for one's reputation is arguably to show virtue, not to mimic it, and Pettit takes prudence as a third admirable reason to fulfill a trust (besides loyalty and virtue). There is a line between a prudent concern for reputation and a concern for the regard of others that goes beyond prudence, but I shall not attempt to draw that line here and shall in this way vulgarize Pettit's account.

[3]Russell Hardin's view that trust is "encapsulated interest" (1991, 1993) is extremely implausible unless one takes an expansive view of "interest." Most people's concern about their reputation and status cannot be explained entirely by the instrumental importance of reputation and status to wealth, and their concern with self-respect appears to be intrinsic. (People seek wealth and commodities in part because they seek self-respect. Very few seek self-respect because it will make them wealthier.)

[4]As Rowthorn (1996, p. 20) points out, assuming that only self-interested material sanctions will motivate people "... fails to utilise the moral capacities of people, and worse still, undermines these capacities by denying them social recognition and denigrating them as irrational and abnormal." The "ultimate effect...is to legitimize behaviour which should be morally unacceptable."

[5]As philosophers like Margaret Gilbert (1990), Raimo Tuomela (1995) and Julius Sensat (unpublished) have pointed out, individuals may also be pursuing collective rather than individual aims. I shall not, however, comment further on this interesting possibility.

[6]"We know tragically little about how to produce some of the most important goods in life--mutual respect, friendliness, cohesiveness, a sense of belong, peace of mind. With either private or public property rights we are apparently unable to perceive how to manufacture such valuable commodities" (McKean 1975, p. 30).

[7]Suppose the game were modified so that B's passing at the second move led to an equal payoff for A and B. One might find much higher rates of "altruism."

[8]This is of course not to say that trust in some other sense might not play a role. Furthermore, players who are allowed to see one another may interpret all sorts of cues as indicating that others are placing trust in them.

[9]Wrightsman carries out two experiments. In the first he finds that about one-third are "trusting" and in the second about 40% are trusting. A subject is "trusting" if "the subject chose C, expected the other person to choose C (i.e. to cooperate), and gave as his reasons for this a concept of trust, fairness or cooperation. Any choice of D, or the choice of C with the expectation that the other would pick D was classified as distrusting, when the person gave as his reason distrust or fear of the other's response" (1966, p. 329) [notation changed]. It is impossible to tell what overall percentage of first movers cooperated, and it is not clear how players who cooperated and expected the second player to cooperate but did not give the right reasons were classified.

[10]It is interesting to notice that the rate of cooperation in response to defection (about 12%) is more than twice as large as the percentage of "B" players in McKelvey and Palfrey's study who pass on their second move. The subjects in Wrightsman's study are much nicer.