Cognitive Biases

What are Cognitive Biases?

Cognitive Biases are systematic errors that predispose one's thinking in favor of a certain viewpoint over other viewpoints. The scientific method was developed, among other reasons, to counteract these biases in order to derive objective knowledge.

Biases often result from the use of mental shortcuts called heuristics. Such shortcuts allow us to make quick decisions about otherwise complex problems by following instinctive rules of thumb. Although useful in many situations (such as rapidly determining the mood of someone you just met), heuristics can lead us astray. Many problems should be thought through with intention and deliberation. When we settle for quick heuristics in our thinking, we often end up acting according to our biases.

The National Institutes of Health's website states that biases are "prejudiced or partial viewpoints that affect someone's interpretation of a problem."

"In clinical investigations, a bias is any systematic factor other than the intervention of interest that affects the magnitude of (i.e., tends to increase or decrease) an observed difference in the outcomes of a treatment group and a control group."

"Bias diminishes the accuracy (though not necessarily the precision) of an observation."

http://www.nlm.nih.gov/nichsr/hta101/ta101014.html


We do not judge new information rationally, at least not as rationally as we think. The way we envision the world is different from how the world actually is. We develop mental models of the world that cohere with our worldview and self-image. Information about the world that conflicts with our internal world model and self-image creates the distressing feeling of cognitive dissonance. Our biases prevent us from seeing new information objectively; they help us preserve our worldview and self-image.

Psychologists have identified many cognitive biases, and they are at work in our everyday thinking. Skeptical doctors would benefit from familiarizing themselves with them. The list below is not complete, but it offers important examples for understanding the sources of error in our thinking. A more exhaustive list can be found on the Wikipedia Cognitive Bias page.

Three important biases for skeptical doctors are Confirmation Bias, Selection Bias and Publication Bias.

Confirmation Bias

Confirmation bias is the tendency to search for or interpret information in a way that confirms one's preconceptions, while ignoring information that contradicts them. We discussed this in the What is Science section. It is natural to seek only data that support our ideas. People become emotionally attached to ideas, and evidence suggesting that those ideas are wrong produces the unpleasant experience of cognitive dissonance. Many avoid the possibility of being wrong by searching only for evidence that they are correct. In other words, they forget Popper's important concept of falsification.

Confirmation bias fuels countless dubious claims. A single example is the claim that emergency rooms are busier on nights with a full moon. Many hold this as common knowledge and swear to its truth from personal experience. However, the claim is not supported by evidence; in fact, the evidence contradicts it. Similarly, there is no measurable increase in suicides during a full moon.

So, the claim turns out to be a myth (let's call it the 'Full Moon Myth'). But why is it so prevalent in pop culture?

Once an idea such as the Full Moon Myth becomes known, many who work in emergency rooms will tend to take special notice of events occurring during nights with full moons. Emergency rooms are busy places in which "crazy" things happen all of the time. Proponents of the myth may go to work with the thought, "Oh no! There's a full moon tonight. Things are going to be crazy." Such a thought would prime the believer to remember the events occurring on the full moon nights, while forgetting the events on other 'common' nights.

Doctors can get into trouble by relying on certain treatments with which they had a few prior 'successes'. A practitioner may have once recommended a controversial (and potentially dangerous) drug to a patient with certain symptoms. Perhaps that particular patient's symptoms subsided for other reasons, and the patient gave the physician high praise for a job well done. The doctor may continue to recommend this treatment to other patients with similar symptoms. Perhaps a few of them begin to feel better and give the doctor similar praise. The doctor is likely to remember these cases with pride, all the while forgetting the patients that reported no benefit or significant side-effects.

The phrase "remembering the hits and forgetting the misses" describes the essence of the confirmation bias. Scientific researchers should employ 'blinding' to reduce the confirmation bias. If data is collected by researchers without knowledge of the presence or absence of the variable in question, the researchers would have no reason to place more significance of one piece of data over another.

Skeptical doctors should be cautious not to rely on memorable anecdotes over science.

A related concept is Selective Exposure Theory, which consists of:

Selective perception - the tendency for expectations to affect perception; when confronted with unsympathetic material, people either do not perceive it or make it fit their pre-existing opinions.

Selective exposure - people keep away from communication of the opposite hue (they avoid information that conflicts with their views).

Selective retention - people simply forget the unsympathetic material.


Selection Bias

Selection bias is a distortion of evidence or data that arises from the way the data are collected or the way samples are selected for study. This is particularly important in medical studies.

For instance, suppose someone wants to study the effect of a dietary 'supplement' on fitness levels and advertises for study participants in a fitness journal. Perhaps 100 volunteers willingly sign up, enthusiastic about trying the new supplement. The study may even match them to 100 controls consisting of students at the researcher's university. Such a scenario automatically biases the study toward positive results, because the two groups are not comparable. One could predict, before the supplement is ever given, that the study group of self-selected fitness enthusiasts would show higher fitness levels than the control group. Results from such a design would be useless.

This sounds rather obvious; however, numerous practices have claimed support from studies that are plagued with selection bias.

Scientific researchers should employ 'randomization' to reduce selection bias. By randomly assigning subjects from the same pool to study and control groups, researchers are less likely to conduct biased studies.
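
As a rough illustration (hypothetical volunteer IDs only), randomization can be as simple as shuffling a single recruited pool and splitting it, so that chance rather than self-selection decides who receives the supplement:

```python
import random

# Hypothetical pool of recruited volunteers, all drawn from the same source
# (rather than enthusiasts from a fitness journal vs. unrelated students).
pool = [f"volunteer_{i}" for i in range(1, 201)]

random.shuffle(pool)                 # chance, not self-selection, decides
midpoint = len(pool) // 2
study_group = pool[:midpoint]        # receives the supplement
control_group = pool[midpoint:]      # receives a placebo

# Because both groups come from the same shuffled pool, traits such as
# baseline fitness or enthusiasm should be roughly balanced between them.
print(len(study_group), len(control_group))
```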


Publication Bias

Publication Bias is a bias on the part of scientific journal editors and publishers, who are more likely to publish studies with positive results than studies with negative results. Positive results are thought to be more likely to attract readership and sell journals, and positive studies are more likely to catch the eye of the mainstream media. Studies that show no effect or negative effects of the idea in question are not as interesting, or so it is thought.

Interestingly, John P. A. Ioannidis pointed out in Why Most Published Research Findings Are False, "in most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias."

So, not only do factors such as confirmation and selection bias contribute to potentially false results, but those false positive results may also be more likely to be published than true negative results. A 2000 BMJ study looked at 48 review studies from Cochrane and states, "26 (54%) of reviews had missing studies and in 10 the number missing was significant." Thus, the problem is very prevalent in medical science. A very helpful tool for detecting it is the "funnel plot" (see Detecting Publication Bias).
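
As a sketch of how a funnel plot reveals this, the following Python example uses simulated (entirely hypothetical) studies of a treatment with no true effect and "publishes" small studies mostly when they look positive; the resulting plot of effect size against standard error loses its symmetric funnel shape.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# 200 hypothetical studies of a treatment that truly has no effect.
n_per_arm = rng.integers(10, 500, size=200)    # study sizes vary widely
std_err = 1.0 / np.sqrt(n_per_arm)             # smaller study -> larger standard error
effect = rng.normal(loc=0.0, scale=std_err)    # observed effects scatter around zero

# Crude stand-in for publication bias: precise (large) studies are always
# published, but imprecise (small) studies are published mostly when positive.
published = (std_err < 0.1) | (effect > 0)

plt.scatter(effect[published], std_err[published], s=12)
plt.gca().invert_yaxis()                 # most precise studies appear at the top
plt.axvline(0.0, linestyle="--")
plt.xlabel("Observed effect size")
plt.ylabel("Standard error")
plt.title("Funnel plot of 'published' studies (simulated)")
plt.show()
```

In a real meta-analysis the points come from the published studies themselves; a gap in one lower corner of the funnel suggests that small negative studies are missing.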

Publication bias can be thought of as a form of selection bias on the part of publishers. It falsely presents data to the scientific community that is biased toward an effect, rather than presenting all of the available data that, when taken as a whole, may show little or no effect at all. Recently, some responsible journal editors have called for researchers to publicly register studies and their designs before conducting the studies. This would allow the scientific community access to all of the relevant research in a particular area.

This is related to the 'File Drawer Problem', in which researchers tend to let negative studies sit in the 'file drawer' while actively pursuing a positive study that will confirm their idea and have a greater chance of being published.

Other Cognitive Biases

Below are items selected from the Wikipedia Cognitive Bias page.

Anchoring -- the common human tendency to rely too heavily on the first piece of information offered (the "anchor") when making decisions.

We are heavily influenced by the first consideration offered to a question. For instance, in an advertisement, we may be asked if we would consider paying $50 for a particular item. We are then told that the item is actually $35, and we are made to feel that the price is a bargain because all we have to compare it to is the $50 anchor. Studies have shown that anchoring can occur even when a completely unrelated concept is presented prior to considering the problem. In his book Predictably Irrational: The Hidden Forces That Shape Our Decisions, Dan Ariely describes a classic experiment. Students were asked to write down the last two digits of their social security numbers. They were then asked to bid on certain items as if in an auction. The last two digits of their social security numbers acted as anchors in their bidding: those with lower numbers tended to bid lower, and those with higher numbers tended to bid higher. Yet when asked, the students denied that their social security numbers had anything to do with their bids.

Anchoring bias influences medical decisions far more often than medical providers would like to admit. Consider a patient who is referred to a specialist for suspicion of a certain disease (let's call it 'Disease X'). The specialist may fail to consider other (perhaps more likely) diagnoses because the patient has already been labeled with 'Disease X'. We may become anchored to a diagnosis because of other biases, such as the Availability heuristic or the Recency effect (see below).

Attribution bias -- systematic errors made when people evaluate or try to find reasons for their own and others' behaviors.

People tend to attribute their own successes to their innate abilities rather than to circumstances. On the other hand, people tend to attribute the success of others to the circumstances surrounding the success. One may think, "I did it because of my skills," but "he did it because he got lucky."

The flip-side of this is that people tend to blame their own failures on bad luck or circumstances, and others' failures on their incompetence. "I failed because the teacher has it out for me! They failed because they were lazy and didn't study."

Availability heuristic — estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples. This leads to using Hasty Generalizations and anecdotal evidence in arguments.

For instance, consider the 2014 Ebola outbreak in West Africa, which received widespread attention on the news. Ebola is a frightening condition with no real treatment, and it was a genuine crisis for African nations with few resources to identify and quarantine cases in a timely and effective manner. The news reported on the thousands of Africans who died of the disease, and a few people were diagnosed in the United States after traveling from Africa. People, including doctors, heard about it constantly. As a result, when patients presented to their doctors at that time with a fever, some perceived Ebola as a more likely cause than illnesses (such as influenza) that were actually orders of magnitude more likely. The availability of the idea of Ebola via the heavy news exposure caused many patients and health care providers to weigh Ebola's likelihood as higher than it actually was.
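
A rough Bayes' rule calculation, using entirely hypothetical numbers, shows why the base rate matters more than the news coverage:

```python
# A rough Bayes' rule sketch with entirely hypothetical numbers, illustrating
# why a rare but highly publicized disease remains a far less likely cause of
# a fever than a common one.

# Hypothetical prior probabilities that a U.S. patient presenting that season
# has each condition (illustrative only, not real surveillance figures).
prior = {"ebola": 1e-7, "influenza": 5e-2, "other": 0.95 - 1e-7}

# Hypothetical probability of presenting with fever given each condition.
p_fever_given = {"ebola": 0.9, "influenza": 0.8, "other": 0.05}

# Bayes' rule: P(condition | fever) = P(fever | condition) * P(condition) / P(fever)
p_fever = sum(p_fever_given[c] * prior[c] for c in prior)
posterior = {c: p_fever_given[c] * prior[c] / p_fever for c in prior}

print(posterior["influenza"] / posterior["ebola"])
# With these made-up numbers, influenza is hundreds of thousands of times more
# likely than Ebola as the explanation for the fever, despite the news coverage.
```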

Availability cascade — a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or "repeat something long enough and it will become true").

The antivaccine movement is an example of this. Fear of vaccines was stoked by a fraudulent paper that was published in, and later retracted by, The Lancet. Soon, prominent media figures were talking about the dangers of vaccines. Groups were organized to spread the word, which then received further media attention. Even though study after study refuted the notion of vaccine danger, antivaccine hysteria grew and was highly available to new parents, causing many to refuse vaccines.

Bandwagon effect — the tendency to do (or believe) things because many other people do (or believe) the same. Related to group-think and herd behaviour. See the Solomon Asch Conformity Study.

In the 1990s, it was suspected that Vitamin E might decrease the risk of heart attack due to its antioxidant effect. The idea was plausible, and investigative studies were begun to see if it were true. It became common for doctors to recommend Vitamin E to their patients because it seemed reasonable. The practice became widespread and was promoted on TV. To this day, many patients come to their doctor's office with Vitamin E on their medication list. However, the studies did not support the hype. Two decades and tens of thousands of patients studied resulted in no demonstrable benefit of Vitamin E. There was potential for harm, however, as it was associated with an increased risk of prostate cancer. Nevertheless, many practitioners still seem to recommend Vitamin E, and it is still heavily promoted by vitamin companies.

Bandwagons tend to pick up speed and may be difficult to jump off without getting hurt.

Dunning-Kruger effect -- the tendency for unskilled individuals to overestimate their abilities, and paradoxically the tendency for skilled individuals to underestimate their abilities. Incompetence prevents one from recognizing incompetence; as skill level rises, one becomes more aware of his/her own limitations. Dunning and Kruger demonstrated this effect in their 1999 paper.

In medicine, we often see this when individuals with little experience in scientific research become convinced of a proposition (such as the dangers of vaccines and genetically modified organisms, or the virtues of 'organic foods') because they "did their research" by typing these terms into an internet search engine or reading about them on social media.

It is difficult, even for experienced researchers, to sift through the noise of information and critically interpret the available data when researching a particular proposition. Those with the least experience may have no idea how wrong they are and lack the skills to understand why. On the other hand, the most experienced researchers have learned that evidence is always tentative, and they may take for granted the difficult skill set of evaluating data. The most knowledgeable and skilled researchers may underestimate their abilities.

Expectation bias (experimenter's bias) — the tendency for experimenters to believe, certify, and publish data that agree with their expectations for the outcome of an experiment, and to disbelieve, discard, or downgrade the corresponding weightings for data that appear to conflict with those expectations.

This is closely intertwined with the biases discussed above (Confirmation Bias, Selection Bias and Publication Bias).

False consensus effect — the tendency for people to overestimate the degree to which others agree with them.

People may become overconfident in their opinions if they think that "everyone knows it". In medicine, a doctor may get into trouble by assuming that everyone else involved with a case agrees with his/her diagnosis and plan.


Hawthorne effect — the tendency of people to perform or perceive differently when they know that they are being observed.

A potential systematic error in medical research comes from the Hawthorne effect: people behave differently when they are aware that they are being studied. If one wishes to study the effects of a drug on smoking cessation, participants would be more likely to quit than non-participants simply because they know they are being watched, whether the drug works or not. To compensate for this, participants must be randomized in a double-blind fashion to a treatment group and a placebo group. Only the net difference, if any, between the outcomes of the two groups can be attributed to the drug.
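
A small arithmetic sketch with hypothetical quit rates shows why only the between-group difference can be credited to the drug; the "being watched" boost appears in both arms and cancels out:

```python
# Hypothetical quit rates illustrating why only the between-group difference
# can be attributed to the drug: being observed boosts both arms equally.

baseline_quit_rate = 0.05      # hypothetical rate outside any study
hawthorne_boost = 0.10         # hypothetical "being watched" effect, both arms
drug_effect = 0.08             # hypothetical true pharmacologic effect

placebo_arm = baseline_quit_rate + hawthorne_boost                  # 0.15
treatment_arm = baseline_quit_rate + hawthorne_boost + drug_effect  # 0.23

# Comparing the treatment arm to the general population (0.23 vs 0.05)
# exaggerates the drug's benefit; the valid comparison is arm vs. arm.
net_difference = treatment_arm - placebo_arm
print(f"Apparent effect vs. population: {treatment_arm - baseline_quit_rate:.2f}")
print(f"Effect attributable to the drug: {net_difference:.2f}")
```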

Herd instinct — the common tendency to adopt the opinions and follow the behaviors of the majority to feel safer and to avoid conflict.

A possible explanation for the proliferation of Integrative Medicine departments within academic medical institutions is the Herd instinct. We tend not to scrutinize propositions when we perceive that many of our peers are participating. Many medical institutions have created departments that allow implausible, "alternative" practices to thrive under their otherwise science-based roofs. There may be other reasons, such as maintaining competition with other institutions, but it may be the Herd instinct that prevents our institutions from questioning this apparent double standard.

In-group bias — the tendency for people to give preferential treatment to others they perceive to be members of their own groups.

When one identifies emotionally with a group, one tends to take it personally if a member of the group is accused or threatened. In medicine, if a doctor is accused of an error or negligence, it would be expected that his/her close colleagues would dismiss the charges as false without properly considering the evidence. Conversely, if a competitor is accused of the same, one may be a bit more accepting of the claim.

— "the disutility of giving up an object is greater than the utility associated with acquiring it". (see also sunk cost effects and Endowment effect).

Many of us would rather "keep playing" than "cut our losses" when gambling. We cannot stand losing things, even if it means continuing a risky behavior to try to win back our losses. This is easily seen at any casino: many simply won't walk away when losing, because of loss aversion.

In medicine, one may continue with a given treatment plan even though it is not going well, rather than admit that the plan is not working and try something else. The loss of "being right" is probably one of the hardest losses to accept (see Cognitive Dissonance).

Omission bias — the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).

Which is worse: actively doing something that results in harm, or passively letting harm happen even though one could have intervened? Many would judge that it is worse to actively cause harm than to passively allow it, even though the end result may be the same.

Omission bias is often demonstrated with the famous "Trolley Problem".

Projection bias — the tendency to unconsciously assume that others share the same or similar thoughts, beliefs, values, or positions.

This is related to the False Consensus bias (see above).

Recency effect — the tendency to weigh recent events more than earlier events (see also the peak-end rule).

This is very similar to the availability heuristic. A doctor may have had success recently with a particular treatment in a particular patient. Even if this strategy has not been the best the majority of the time, either in past experience or in research, the doctor may assign this treatment greater value than warranted simply because of the recent success.

Rosy retrospection — the tendency for people to rate past events more positively than they had actually rated them when the event occurred.

Self-fulfilling prophecy — the tendency to engage in behaviors that elicit results which will (consciously or not) confirm our beliefs.

Self-serving bias — the tendency to claim more responsibility for successes than for failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias).

Semmelweis reflex — the tendency to reject new evidence that contradicts an established paradigm.

Recent randomized controlled trials have shown that arthroscopic surgery for osteoarthritis of the knee is no more effective than good attentive care. Yet the practice continues. The established paradigm holds that arthroscopic surgery works, and the new evidence has been rejected by many outspoken orthopedic surgeons, none of whom have produced randomized trials to the contrary.

In a way, this may be prudent at first. Extraordinary claims require extraordinary evidence. If a claim contradicts an established paradigm, then the evidence should be impressive; it should be scrutinized in peer review and replicated. However, if the evidence for such a claim meets its burden of proof, then we should let the established paradigm go. Many have a hard time doing this (for various reasons, including the other biases described here and the concept of Cognitive Dissonance).

Stereotyping — expecting a member of a group to have certain characteristics without having actual information about that individual.


Survivorship bias -- our natural tendency to over-weigh the virtues and qualities of survivors, while discounting the non-survivors.

Consider a conversation at a local barber shop between a customer and his barber. The customer was telling the old barber about his grandfather. Apparently, the man’s grandfather worked in a mine most of his life. He apparently smoked and drank throughout his life as well. His grandfather lived to be 98 years old. “It just goes to show ya’ that none of that stuff about smokin’ and drinkin’ matters. It’s all about the hard work that keeps you healthy”, said the man proudly about his grandfather’s longevity.

We tend to remember the exceptions to rules and falsely cite their qualities as evidence that the rule is not true. We also fail to see the typical examples, such as the people who died from smoking-related diseases like lung cancer, COPD and heart attacks, because those examples are gone.


Trait ascription bias — the tendency for people to view themselves as relatively variable in terms of personality, behavior and mood while viewing others as much more predictable.


White Hat Bias -- bias leading to the distortion of research-based information in the service of what may be perceived as righteous ends.

The term "White Hat Bias" was coined by Cope and Allison in 2010 after examining research papers published on the topic of obesity risk factors. They found discrepancies in the reporting of studies that favored practices perceived to have an air of righteousness over other practices. For example, they found publication bias (see above) in studies comparing breast feeding to formula feeding: studies favoring breast feeding tended to be published more often than studies with less favorable results, presumably because breast feeding seems more righteous than formula feeding.


Wishful thinking — the formation of beliefs and the making of decisions according to what is pleasing to imagine instead of by appeal to evidence or rationality.

Zero-risk bias — the preference for reducing a small risk to zero over a greater reduction in a larger risk.

Many parents prefer not to vaccinate their children because of the perceived risk of vaccine injury. Although such risks are extremely small (and virtually zero in the case of autism), many would rather avoid them entirely. In reality, infectious diseases pose a much larger risk to the public, and vaccines reduce that risk significantly, though not to zero. Zero-risk bias leads many to prefer eliminating the tiny risk of vaccine injury over accepting the much larger risk reduction that vaccines offer.
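
A back-of-the-envelope comparison with purely hypothetical risk figures (chosen only to illustrate the orders of magnitude involved, not actual vaccine statistics) shows how zero-risk bias can favor the option with greater total expected harm:

```python
# Hypothetical per-child risks (illustrative orders of magnitude only).
risk_disease_unvaccinated = 1e-3    # serious harm from the disease if unvaccinated
risk_disease_vaccinated = 1e-5      # residual disease risk after vaccination
risk_vaccine_injury = 1e-6          # serious vaccine injury

per_million = 1_000_000

# Choice A: vaccinate -- the small injury risk is accepted, disease risk drops sharply.
harm_vaccinate = (risk_disease_vaccinated + risk_vaccine_injury) * per_million

# Choice B: do not vaccinate -- the injury risk is driven to zero,
# but the much larger disease risk remains untouched.
harm_decline = risk_disease_unvaccinated * per_million

print(f"Expected serious harms per million children, vaccinating: {harm_vaccinate:.0f}")
print(f"Expected serious harms per million children, declining:   {harm_decline:.0f}")
# Zero-risk bias favors choice B because one risk reaches zero,
# even though the total expected harm is roughly 90 times higher here.
```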

See also the Nirvana fallacy.

Conclusion

Cognitive Biases distort our view of the world. Like cognitive dissonance, they are an integral part of our psychology. If we do not recognize them in our thinking, we will be hopelessly prone to error. Our patients cannot afford doctors who are unaware of their own potential biases.

John Byrne, M.D.