A NEW ANCHORING PARADIGM: OR IS THERE ONE?
John L. Tan, Fresno Pacific University
Wilson, Houston, Etling, and Brekke (1996) reported a new anchoring phenomenon. In the more typical (one might say "classical") anchoring paradigm, an anchor is either compared to the answer to a target question or is at least informative with respect to that answer, and the target answer usually ends up biased toward the anchor. The "new" observation claimed that the same effect could be produced without the anchor being compared to the target or being informative. The present study explored this reported phenomenon further. The results, however, call the "new" observation into question, while further confirming the robustness of the "classical" anchoring phenomenon.
The anchoring effect has consistently been shown to be a robust phenomenon in many situations. Tversky and Kahneman (1974), for example, asked participants whether the percentage of African nations in the United Nations was higher or lower than a given "random" number obtained by a spin of a "wheel of fortune". Thereafter, participants were asked to estimate the actual percentage. Results showed that participants who were given a high anchor gave significantly higher answers than those who were given a low anchor.
Variations of anchoring have been observed to influence consumers’ buying decisions (Wansink, Kent, & Hoch, 1998), effort or task motivation (Switzer III & Sniezek, 1991), estimates of the likelihood of a nuclear war (Plous, 1989), judges’ perceptions of whether someone is lying (Zuckerman, Koestner, Colella, & Alton, 1984), participants’ judgments of self-efficacy (Cervone & Peake, 1986), decisions in gambling (Schkade & Johnson, 1989), students’ predictions of the number of their colleagues who would contract cancer (Wilson, Houston, Etling, & Brekke, 1996), and even experts’ opinions on the fair market value of a piece of real estate (Northcraft & Neale, 1987).
While the anchoring phenomenon is easily observable, explaining it has proven more elusive. Tversky and Kahneman (1974) postulated the heuristics and biases model, which assumes that one first considers the anchor as a possible standard or starting point and then adjusts from that initial value in either direction until a more plausible answer is found. Such adjustments are usually insufficient (Slovic & Lichtenstein, 1971, as cited in Tversky & Kahneman, 1974), resulting in biases toward the initial value, or the phenomenon we call anchoring.
An alternative, or perhaps complementary, explanation, the selective accessibility model, was proposed by Strack and Mussweiler (1997). This model borrows two important notions from social cognition research, namely, hypothesis-consistent testing and semantic priming (Mussweiler & Strack, 1999). The selectivity portion assumes that participants adopt a hypothesis-consistent test strategy, or positive test strategy (Klayman & Ha, 1987), to compare the anchor to the question at hand. This process selectively generates or retrieves knowledge consistent with the task, increasing the accessibility of that knowledge. Accordingly, the accessibility portion of the model assumes that participants subsequently apply this now easily accessible knowledge to the question that requires an absolute answer.
Wilson et al. (1996) noted that in most studies the anchoring process was achieved either by asking participants to compare the anchor to the target question, or by providing an anchor that was informative rather than arbitrary. The former technique is indeed typical, in fact classical; an example is the Tversky and Kahneman (1974) study described above. An example of the latter can be found in Northcraft and Neale (1987), where real estate agents were provided with, among other things, the listing price for a local house before being asked to appraise its fair market value. Wilson et al. then proceeded to examine the possibility of an anchoring effect when participants were neither asked to compare the anchor to the target value nor provided with an anchor that was informative. In their study, for example, some participants were asked to note whether their "random" identification number (i.e., the anchor) was written in red or blue ink before answering the target question. Results showed that anchoring effects took place even though the participants did not have to process the anchor as a numerical value, since they only had to note the color in which it was written. In other words, anchoring occurs when people are merely exposed to a number before doing a task that calls for a numerical estimate.
Wilson et al. (1996) speculated that, among other possibilities, the backward priming hypothesis may account for the anchoring phenomenon. According to Kahneman and Knetsch (1993, as cited in Wilson et al.), the need to answer a question triggers a search for possible answers, during which any plausible value in short-term memory may be considered as a possible answer.
The Mailbox Group
In this study, I explored this notion. If, for want of an answer, a person initiates a search and finds in her short-term memory a numerical value that has been associated with a totally unrelated item, category, or event, would she nevertheless be influenced by it? For example, in searching for an answer to the number of cities with a population of more than a million inhabitants, a person may come across someone’s mailbox number: not an ambiguous or arbitrary number, but clearly one that has already been assigned to something, yet is irrelevant to the target of the search. Would the mailbox number influence the person’s answer despite its obvious irrelevance? One group in this study (the "mailbox" group) was designed to test this question. The atypical studies in Wilson et al. (1996) showed that anchoring effects are more widely applicable than previously thought, and also suggested that people are generally not cognizant of them. I therefore hypothesized that even though the anchor is associated with something irrelevant, it would still produce an anchoring effect.
The Distracted Group
Another question is whether the anchoring effect can be disrupted by a filler task inserted between exposure to the anchor and presentation of the target question. The "distracted" group in this study was designed to answer this question. It must be noted that in Study 3 of Wilson et al. (1996), participants copied a page of numbers (the anchors) and words as part of a handwriting exercise before being presented with the target question. This procedure did not produce an anchoring effect. Wilson et al. speculated that participants may have regarded the "handwriting study" and the target question as two different studies, and held this separation in mind so that the numbers used in the former task did not affect the latter.
In the current study, however, participants were not told that the two tasks (viz., the filler task and the target question) were separate studies. Furthermore, the filler task did not involve any numerical value so that if participants did indeed search for a number in their memory, the filler task should not have stood in the way. Therefore, I hypothesized that anchoring would take place in spite of the filler task.
The Control, Classical, and Attention Groups
Besides the typical control group, in which participants were not exposed to a number, I added a "classical" group in which participants were asked to compare the anchor to the target, as is done in most studies within the anchoring paradigm. Finally, an "attention" group was included to replicate one of the studies from Wilson et al. (1996) in which participants merely gave attention to the anchor without being asked to make any comparison or being given any informative cue.
Method
Participants
Two hundred and sixty-five students from Fresno Pacific University participated in the study. The study was conducted in several classes during actual class time. Participants from two of the classes were offered extra credit as an encouragement by their respective instructors while the rest were not.
Apparatus
Slips of paper, each bearing either the number 491 or a three-letter code, were folded to conceal their contents. The folded slips with the alpha code were then marked with either the word "blue" or the word "yellow"; those with the numeric code were marked with the word "red", "green", or "purple". An equal number of slips of each color were then placed in an envelope for random drawing, whereupon folders of the corresponding colors, containing the appropriate manipulations, were distributed accordingly.
Procedure
Participants were randomly assigned to one of the five groups described above. After receiving their respective folders, participants were instructed to unfold their drawn slips of paper to find their "random" code printed inside. Although participants were told that each of them had a random code on their slip, in reality everyone had the number 491, except for the blue and yellow groups, who had the alpha code instead.
To begin, participants were asked to write their random code on the answer sheet contained in the folder. Those with the numeric code were further asked to remember their number, on the pretext that I wanted to collect the folders afterwards in the sequence of their numbers. This was simply a ploy to get participants to attend to the anchor (Wilson et al., 1996).
For the blue group (i.e., the control) and the red group (i.e., the "attention" group), participants were asked to answer the target question, "How many cities in the world have populations of more than a million inhabitants?" (Note that the only difference between these two groups is that participants in the control group were exposed to the alpha code while those in the "attention" group were exposed to the numeric value.)
For the yellow group (i.e., the "mailbox" group), participants were informed that my campus mailbox number is 491 and were asked to write it down on the "peel-off" note provided. Furthermore, they were told not to forget the number because "later, you will be asked to send a section of the completed content back to me via campus mail."
For the green group (i.e., the "distracted" group), participants were given a simple filler task under the heading "Section 1" before proceeding to "Section 2", where the target question was presented. The filler task consisted of six questions presented in two columns, such that participants matched words like "cat" and "dog" in one column with words like "claw" and "paw" in the other.
For the purple group (i.e., the "classical" group), as in many typical studies on anchoring, participants were asked to compare the anchor to the answer to the target question and decide whether the former was greater than, less than, or equal to the latter. The target question was then presented.
Finally, all participants were asked to rate, on a nine-point Likert scale (from "Just a wild guess" to "I know so!"), how confident they felt about the answer they had given. Although knowledge of the target answer is not a factor in the current design, previous studies (Wilson et al., 1996; Chapman & Johnson, 1994, as cited in Wilson et al.) have found knowledge to moderate the effects of anchoring. Given the target question, it is unlikely that many participants would be very certain about the answer. Nevertheless, the purpose of the confidence scale was to allow the elimination of participants with a high level of certainty, that is, those selecting the seventh, eighth, or ninth point on the scale.
Results
Of the 265 observations, 20 from a particular class were discarded because the instruction to unfold the slips of paper containing the "random" codes was not given before the instruction to begin working on the questions; consequently, the intended manipulations could not be ensured. Of the remaining 245 observations, five were discarded because the participants did not answer the target question with a specific numerical value. As expected, only six participants claimed a certainty level of seven or above; their data were eliminated.
At a glance, a few answers such as 1,000,000,000 and 300,000,000 were obvious outliers. Using the guideline suggested in the SPSS® Base 9.0 Applications Guide (1999), whereby a value is deemed an outlier when it falls below the lower hinge minus 1.5 times the hspread or above the upper hinge plus 1.5 times the hspread, I eliminated seven cases from the control group, six from the "attention" group, eight from the "mailbox" group, seven from the "distracted" group, and four from the "classical" group. The final sample consisted of 202 observations: 42 in the control group (mean answer to the target question, M = 45.5, SD = 45.1), 38 in the "attention" group (M = 92.9, SD = 136.7), 42 in the "mailbox" group (M = 42.3, SD = 56.3), 42 in the "distracted" group (M = 160.8, SD = 298.9), and 38 in the "classical" group (M = 291.0, SD = 287.5; see Figure 1).
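For readers who wish to see the exclusion rule in action, the short sketch below applies the hinge/hspread criterion to hypothetical estimates in Python. It is an illustration only: the original screening followed the SPSS guideline, and the quartiles used here merely approximate Tukey's hinges.

```python
# Illustrative sketch (not the original analysis): applying the
# hinge/hspread outlier rule described above to hypothetical estimates.
# Tukey's hinges are approximated by the 25th and 75th percentiles.
import numpy as np

def flag_outliers(values):
    """Return a boolean mask marking values outside the 1.5 * hspread fences."""
    values = np.asarray(values, dtype=float)
    lower_hinge, upper_hinge = np.percentile(values, [25, 75])
    hspread = upper_hinge - lower_hinge          # analogous to the interquartile range
    low_fence = lower_hinge - 1.5 * hspread
    high_fence = upper_hinge + 1.5 * hspread
    return (values < low_fence) | (values > high_fence)

# Hypothetical answers to the target question for one group
answers = np.array([20, 35, 50, 60, 75, 100, 150, 300_000_000])
print(answers[~flag_outliers(answers)])          # the extreme value is dropped
```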
An alpha level of .05 was used for all statistical tests. A one-way between-groups analysis of variance (ANOVA) showed a significant main effect of the anchoring manipulations, F(4, 197) = 10.93, p < .001. However, a Tukey HSD test found a significant difference only between the means of the control group and the "classical" group (p < .001); all other differences of means relative to the control were nonsignificant. Interestingly, all differences of means relative to the "classical" group were significant (between "classical" and "distracted", p < .05; all others, p < .001; see Figure 1). It is perhaps worth noting that the difference between the "distracted" group and the control was in fact very close to significance (p = .056).
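As an illustration of the kind of analysis reported above, the following Python sketch runs a one-way ANOVA followed by Tukey HSD comparisons on simulated data whose group means and standard deviations loosely echo those reported here. The simulated data, the equal group sizes, and the use of scipy and statsmodels are assumptions for demonstration; the sketch does not reproduce the study's actual results.

```python
# Illustrative sketch of a one-way ANOVA with Tukey HSD follow-up,
# run on simulated data rather than the original responses.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
group_params = {            # hypothetical means/SDs, loosely echoing the reported values
    "control": (45, 45), "attention": (93, 137), "mailbox": (42, 56),
    "distracted": (161, 299), "classical": (291, 288),
}
frames = [
    pd.DataFrame({"group": name,
                  "answer": rng.normal(mean, sd, size=40).clip(min=1)})
    for name, (mean, sd) in group_params.items()
]
data = pd.concat(frames, ignore_index=True)

# One-way between-groups ANOVA on the simulated target-question answers
samples = [g["answer"].to_numpy() for _, g in data.groupby("group")]
f_stat, p_value = stats.f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD pairwise comparisons at alpha = .05
print(pairwise_tukeyhsd(data["answer"], data["group"], alpha=0.05))
```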
Discussion
It is clear that the "classical" anchoring phenomenon is as robust as ever. However, the "new look at anchoring", or "basic anchoring", effects suggested by Wilson et al. (1996) deserve further exploration. Although Wilson et al. were able to produce an anchoring effect simply by exposing their participants to a number, the present attempt to replicate this effect was not successful.
In one of their explanations of the "new" anchoring phenomenon, Wilson et al. (1996) invoked the backward priming hypothesis (Kahneman & Knetsch, 1993, as cited in Wilson et al.) to account for their observation. On this account, the need to answer a question triggers a search for possible answers, during which any plausible value in short-term memory may be considered as a possible answer. The results of the current study do not support this hypothesis. Otherwise, the anchor would have achieved its purpose in the "attention" group (p = .82), since that group was presented with the target question immediately after attending to the anchor. Furthermore, the "distracted" group should have been the least affected, because the filler task would have displaced whatever number may have been in short-term memory. This was not the case: although its difference from the control was not statistically significant at the .05 alpha level, the "distracted" group (p = .056) was at least marginally more influenced by the anchor than the "attention" group or the "mailbox" group (p = 1.0).
Two other theoretical models discussed earlier, the heuristics and biases model (Tversky & Kahneman, 1974) and the selective accessibility model (Strack & Mussweiler, 1997), assume that participants need a starting point from which to adjust. Such a starting point is naturally provided by the "classical" anchor, since participants were procedurally forced not only to give attention to the anchor but also to consider it as a possible answer to the target question. Therein, I think, lies the reason for the robustness of "classical" anchoring. In all the other conditions manipulated in this study, however, no such procedural requirement was imposed on participants. Thus, whether a person seizes the number she has most recently attended to is a matter of conjecture. Although there exists a possibility that one might do just that, there might also be a variety of other numbers that one might prefer to start from. For example, one might initiate the adjustment from one’s birthday, favorite number, home address, a recent grade, today’s date, the current time, or a friend’s phone number. The possibilities are endless. Any of these numbers may be easily retrieved from memory, and once retrieved, is just as heuristically viable as the uninformative anchor. In other words, there appears to be no heuristic incentive for a person to use the uninformative anchor over any other number as the initial value.
Then again, there might be other cognitive routes or mechanisms at play. While the current study was not designed to identify them, it is clear at this juncture that more investigation is needed to determine whether there is indeed an anchoring phenomenon other than the "classical" one.
References
Cervone, D., & Peake, P. (1986). Anchoring, efficacy, and action: The influence of judgmental heuristics on self-efficacy judgments and behavior. Journal of Personality and Social Psychology, 50, 492-501.
Chapman, G. B., & Johnson, E. J. (1994). The limits of anchoring. Journal of Behavioral Decision Making, 7, 223-242.
Kahneman, D., & Knetsch, J. (1993). Strong influences and shallow inferences: An analysis of some anchoring effects. Unpublished manuscript, University of California, Berkeley.
Klayman, J., & Ha, Y. W. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94, 211-228.
Mussweiler, T., & Strack, F. (1999). Hypothesis-consistent testing and semantic priming in the anchoring paradigm: A selective accessibility model. Journal of Experimental Social Psychology, 35, 136-164.
Northcraft, G. B., & Neale, M. A. (1987). Experts, amateurs, and real estate: An anchoring-and-adjustment perspective on property pricing decisions. Organizational Behavior and Human Decision Processes, 39, 84-97.
Plous, S. (1989). Thinking the unthinkable: The effect of anchoring on likelihood estimates of nuclear war. Journal of Applied Social Psychology, 19, 67-91.
Schkade, D. A., & Johnson, E. J. (1989). Cognitive processes in preference reversals. Organizational Behavior and Human Decision Processes, 44, 203-231.
Slovic, P., & Lichtenstein, S. (1971). Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Human Performance, 6, 649-744.
SPSS® Base 9.0 Applications Guide (1999). Chicago, IL: SPSS Inc.
Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73, 437-446.
Switzer, F. S., III, & Sniezek, J. A. (1991). Judgment processes in motivation: Anchoring and adjustment effects on judgment and behavior. Organizational Behavior and Human Decision Processes, 49, 208-229.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1130.
Wansink, B., Kent, R. J., & Hoch, S. J. (1998). An anchoring and adjustment model of purchase quantity decisions. Journal of Marketing Research, 35, 71-81.
Wilson, T. D., Houston, C. E., Etling, K. M., & Brekke, N. (1996). A new look at anchoring effects: Basic anchoring and its antecedents. Journal of Experimental Psychology: General, 125, 387-402.
Zuckerman, M., Koestner, R., Colella, M. J., & Alton, A. O. (1984). Anchoring in the detection of deception and leakage. Journal of Personality and Social Psychology, 47, 301-311.
Figure 1. All differences of the means relative to "classical" were significant: between "classical" and "distracted", p < .05; all others, p < .001.