Psychologists draw conclusions about individuals and their behavior through scientific research, but in what instances are the results of that research skewed by research errors? How do researchers avoid these errors?
The Error
It turns out that the mind is so powerful that it can cause you to experience what you expect to, rather than what's really happening. This can be a problem for research if you are testing whether some manipulation has an effect on people... how do we know that it is really the manipulation that caused the effect and not simply the mind's expectation?
For example, many insist that there are "energy vortexes" -- places where, if you visit, you will feel the presence of "spiraling spiritual energy" coming from the earth. If you ask those who have traveled to one, they might describe feeling warm, tingly, energized, lightheaded, or strong. These testimonials can be convincing, and if you have been to a labeled vortex and experienced the effects, there might be no convincing you that it was merely a placebo effect. However, we cannot rule that out unless we conduct a careful study of these locations in which we eliminate the possibility that a placebo effect is biasing our results.
WATCH: The Power of the Placebo Effect
Still not convinced that simply thinking something will happen can change your physical experience? The following video shows that creating a belief in someone can actually make them feel healed. Although the examples in the video below focus on pharmaceutical treatments, keep in mind that this could apply to anything that we give someone or do to them.
Avoiding the Placebo Effect
If we were testing the effect of caffeine on exam performance, we would be making a research error if we gave nothing to half of the students and gave coffee to the other half. Even if we found a difference, we wouldn't know if the caffeine actually increased performance or if those students did better because they were expecting to feel more alert and focused. The results could be biased by the placebo effect.
Instead, the control group would be given a placebo (a cup of decaf coffee), and participants would be blind as to whether the coffee they drank was caffeinated or the placebo until after the study was complete. Note that the control group has to be given a placebo -- something that is exactly like what the other group gets, only without the key ingredient being tested.
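For readers comfortable with code, here is a minimal Python sketch of how blind random assignment to a placebo-controlled design might be organized. The participant IDs, condition names, cup codes, and seed are all hypothetical illustrations, not a standard procedure.

```python
# Minimal sketch of blind random assignment in a placebo-controlled design.
# All names (participant IDs, conditions, cup codes) are hypothetical.
import random

participants = [f"P{i:03d}" for i in range(1, 41)]   # 40 hypothetical participants
conditions = ["caffeinated", "decaf_placebo"]

random.seed(42)            # fixed seed only so the example is reproducible
random.shuffle(participants)

assignment_key = {}   # kept sealed by the researcher until data collection ends
blind_labels = {}     # what participants (and data sheets) actually see

for index, pid in enumerate(participants):
    condition = conditions[index % 2]          # alternate to keep groups equal
    cup_code = f"CUP-{index:02d}"              # neutral label, no hint of caffeine
    assignment_key[pid] = {"condition": condition, "cup_code": cup_code}
    blind_labels[pid] = cup_code

print(blind_labels["P001"])   # prints a neutral cup code, not the condition
```

The design choice here is simply that the participant-facing label carries no information about the condition, so expectations cannot differ between groups.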
The Bias
Similar to the placebo effect, a researcher’s own expectations about the results could subconsciously influence the way participants are treated (e.g., acting slightly more friendly toward one group) or the way things are measured (e.g., grading essays more leniently). This is also known as the Pygmalion Effect or researcher bias… see how the same concept applies to students:
WATCH: The Pygmalion Effect and the Power of Positive Expectations
Can the Pygmalion Effect apply to other situations? How about in the classroom?
Avoiding the Rosenthal Effect
In a blind study, the participants do not know whether they have been assigned to the experimental or control group, but the researcher does. This prevents the placebo effect, but it does not prevent the researcher from accidentally influencing participants or biasing the interpretation of observations (the Rosenthal Effect).
In a double-blind study, neither the participants nor the researchers interacting with them know which group each participant is in. One researcher assigns participants to conditions and delivers the manipulation, but other naïve (unaware) researchers are the ones who interact with participants and record observations. Thus, any differences observed cannot be caused by the Rosenthal Effect, because the researchers doing the observing did not know which participants to expect differences from.
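Continuing the sketch above, the double-blind part can be thought of as a separation of roles: naive observers record outcomes keyed only by a neutral code, and the link to conditions is made only at analysis time. The function and variable names below are hypothetical illustrations, not a standard API.

```python
# Minimal sketch of keeping observers naive in a double-blind design.
# Observers see only cup codes; the condition key is opened at analysis time.

def record_observation(cup_code: str, outcome: float, log: dict) -> None:
    """Naive observers log outcomes by cup code only; they never see conditions."""
    log.setdefault(cup_code, []).append(outcome)

def unblind_and_merge(observations: dict, assignment_key: dict) -> dict:
    """Only after data collection does the analyst join outcomes to conditions."""
    merged = {}
    for pid, info in assignment_key.items():
        merged[pid] = {
            "condition": info["condition"],
            "outcomes": observations.get(info["cup_code"], []),
        }
    return merged

# Hypothetical sealed key, created by the researcher who did the assignment.
assignment_key = {
    "P001": {"condition": "caffeinated", "cup_code": "CUP-17"},
    "P002": {"condition": "decaf_placebo", "cup_code": "CUP-04"},
}

# Observers interact with participants knowing only the cup codes...
observations = {}
record_observation("CUP-17", 88.0, observations)   # hypothetical exam score
record_observation("CUP-04", 73.5, observations)

# ...and conditions are attached to the data only once observations are done.
merged = unblind_and_merge(observations, assignment_key)
print(merged["P001"]["condition"], merged["P001"]["outcomes"])
```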
The Error
If participants are aware of the purpose of the study, their behavior may change. For example, some may try to guess what the researcher wants to have happen and behave in a way that confirms those expectations. In other cases, antisocial participants might act the opposite way, intentionally altering their responses or behavior just to prove the researcher incorrect. It is important to note how this is different from the placebo effect… in this case the participants are not experiencing things based on their own expectations but are going along with what they think the researcher expects.
OPTIONAL: Malicious Focus Group Convinces Marketers Cinnamon Mountain Dew Is The Next Big Thing
This video is an amusing example of research participants intentionally changing their behavior to influence the outcome of a study.
Avoiding Demand Characteristics
In order to avoid demand characteristics, in some studies the participants are kept naïve as to the real purpose of the study, which group they are assigned to, or at least what the expected results would be. Those interacting with participants also need to follow a research script and treat everyone in exactly the same way. Researchers may also go a step further, using deception to mask the true purpose of the study.
For example, if we wanted to determine whether a hotter room causes more aggressive outbursts while playing a frustrating video game, we might tell participants that they are testing out a new video game and will answer some questions afterwards. We would not, on the other hand, say anything about the fact that we are manipulating the temperature of the room or counting verbal and physical outbursts while they play. We would debrief them afterwards and explain why we did not tell them all the details, so as to protect the integrity of the experiment.
It's always possible that participants might try to GUESS the purpose of the study, and they might become suspicious if they notice something unusual, like a particularly warm or cold room. In the debriefing process after their participation, researchers typically ask some open-ended questions first to assess whether the participants might have guessed correctly and whether that might have biased the way they behaved.
The Error
Some things are difficult to measure, like prejudice, sexual behavior, and addiction, because people are often reluctant to talk honestly for fear of negative social judgment. Instead, they may offer what they believe are socially desirable responses or behaviors. It is important to note how this is distinct from demand characteristics… in this case the concern is more about fitting in than about influencing the results of a study. People are reluctant to admit that they think, feel, and do things that are considered abnormal, taboo, gross, offensive, or illegal.
Avoiding Social Desirability
If we're collecting data via self-report, the best we can do is stress the importance of honest responses and design the study so that an individual participant’s data is either collected anonymously (we don’t know who they are) or kept highly confidential (no one else will find out). Another approach to avoiding this bias is to collect data using methods that do not rely on thoughtful self-report responses (e.g., psychophysiological measures).
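As one concrete illustration of anonymous collection, here is a minimal Python sketch of stripping identifying information from self-report data before it is stored. The field names, survey content, and ID scheme are hypothetical, and a real study would follow its ethics board's data-handling requirements.

```python
# Minimal sketch of anonymizing self-report responses before storage.
# Field names and survey content are hypothetical.
import uuid

raw_responses = [
    {"name": "Jordan", "email": "jordan@example.com", "honest_rating": 4},
    {"name": "Riley", "email": "riley@example.com", "honest_rating": 2},
]

anonymized = []
for response in raw_responses:
    anonymized.append({
        "participant_id": uuid.uuid4().hex[:8],   # random ID; no link kept to identity
        "honest_rating": response["honest_rating"],
    })

# Identifying fields (name, email) are never written to the analysis file.
print(anonymized)
```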