Cognitive Biases


Cognitive Biases are systematic errors that predispose one's thinking in favor of a certain viewpoint over others. The scientific method developed, among other reasons, to counteract these biases in order to derive objective knowledge.

Biases often result from using shortcuts in thinking called heuristics. Such shortcuts allow us to make quick decisions about otherwise complex problems by following instinctive rules of thumb. Although useful in many situations (such as rapidly determining the mood of someone you just met), heuristics can lead us astray. Many problems should be thought through with intention and deliberation. When we settle for quick heuristics in our thinking, we often end up with biases.

The National Institutes of Health's website states that biases are "prejudiced or partial viewpoints that affect someone's interpretation of a problem." It goes on:

"In clinical investigations, a bias is any systematic factor other than the intervention of interest that affects the magnitude of (i.e., tends to increase or decrease) an observed difference in the outcomes of a treatment group and a control group."

"Bias diminishes the accuracy (though not necessarily the precision) of an observation."

http://www.nlm.nih.gov/nichsr/hta101/ta101014.html
 
Psychologists have identified many cognitive biases, and they are at work in our everyday thinking. Skeptical doctors would benefit from familiarizing themselves with them. The list below is not complete, but it offers important examples for understanding the sources of error in our thinking. A more exhaustive list can be found on the Wikipedia Cognitive Bias page.

Three important biases for skeptical doctors are Confirmation Bias, Selection Bias and Publication Bias.


Confirmation Bias


Confirmation bias is the tendency to search for or interpret information in a way that confirms one's preconceptions, while ignoring information that does not support them. We discussed this in the What is Science section. It is natural to seek only data that support our ideas. People become emotionally attached to ideas, and facing evidence that suggests those ideas are wrong produces cognitive dissonance, an unpleasant experience. Many avoid the possibility of being wrong by searching only for evidence that they are correct. In other words, they forget Popper's important concept of falsification.

Confirmation bias fuels volumes of claims. A single example is the claim that emergency rooms are busier on nights with a full moon. Many hold this claim as common knowledge and swear to its truth from personal experience. However, the claim is not supported by evidence; in fact, the evidence contradicts it. Similarly, there is no measurable increase in suicides during a full moon. So the claim turns out to be a myth (let's call it the 'Full Moon Myth'). But why is it so prevalent in pop culture?

Once an idea such as the Full Moon Myth becomes known, many who work in emergency rooms will tend to take special notice of events occurring during nights with full moons. Emergency rooms are busy places in which "crazy" things happen all of the time. Proponents of the myth may go to work with the thought, "Oh no! There's a full moon tonight. Things are going to be crazy." Such a thought would prime the believer to remember the events occurring on the full moon nights, while forgetting the events on other 'common' nights. 

Doctors can get into trouble by relying on certain treatments with which they had a few prior 'successes'. A practitioner may have once recommended a controversial (and potentially dangerous) drug to a patient with certain symptoms. Perhaps that particular patient's symptoms subsided for other reasons, and the patient gave the physician high praise for a job well done. The doctor may continue to recommend this treatment to other patients with similar symptoms. Perhaps a few of them begin to feel better and give the doctor similar praise. The doctor is likely to remember these cases with pride, all the while forgetting the patients who reported no benefit or who suffered significant side effects.

The phrase "remembering the hits and forgetting the misses" describes the essence of confirmation bias. Scientific researchers should employ 'blinding' to reduce confirmation bias. If data are collected by researchers without knowledge of the presence or absence of the variable in question, the researchers have no reason to place more significance on one piece of data than on another.
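To see how "remembering the hits and forgetting the misses" can manufacture an apparent effect out of pure noise, consider the following minimal sketch in Python. All of the numbers (nightly visit counts, recall probabilities, a 29-day moon cycle) are hypothetical and chosen only for illustration; the point is that a memory that preferentially retains busy full-moon nights will 'observe' a full-moon effect even though none was built into the data.

import random

random.seed(1)

# Simulate a year of ER nights. Visit counts come from the same
# distribution on every night, so there is no real full-moon effect.
nights = []
for day in range(365):
    full_moon = (day % 29 == 0)       # roughly one full moon per month
    visits = random.gauss(50, 10)     # same average busyness every night
    nights.append((full_moon, visits))

def mean(values):
    return sum(values) / len(values)

# Unbiased record keeping: averages by moon phase are essentially equal.
full_all  = [v for fm, v in nights if fm]
other_all = [v for fm, v in nights if not fm]
print("All nights        - full moon: %.1f, other: %.1f" % (mean(full_all), mean(other_all)))

# Biased 'memory': busy full-moon nights are almost always remembered,
# ordinary nights are mostly forgotten unless they were truly extreme.
remembered = []
for fm, v in nights:
    if fm and v > 50:
        recall = 0.9      # "I knew it -- full moon tonight!"
    elif v > 70:
        recall = 0.5
    else:
        recall = 0.1
    if random.random() < recall:
        remembered.append((fm, v))

full_mem  = [v for fm, v in remembered if fm]
other_mem = [v for fm, v in remembered if not fm]
print("Remembered nights - full moon: %.1f, other: %.1f" % (mean(full_mem), mean(other_mem)))

The remembered sample shows full-moon nights as distinctly busier, not because they were, but because of which nights were kept. Blinding works on the same principle: if the data collector does not know which nights are full-moon nights, there is no way to weigh them preferentially.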

Skeptical doctors should be cautious not to rely on memorable anecdotes over science. 

A related concept is Selective Exposure Theory, which consists of:

Selective perception - the tendency for expectations to affect perception; when confronted with unsympathetic material, people either fail to perceive it or make it fit their pre-existing opinions.

Selective exposure - the tendency to keep away from communication of the opposite hue.

Selective retention - the tendency to simply forget the unsympathetic material.



Selection Bias


Selection bias is a distortion of evidence or data that arises from the way the data are collected or the way samples are selected for study. This is particularly important in medical studies.

For instance, if someone wants to study the effect of a diet 'supplement' on fitness levels, they may choose to advertise for study participants in a fitness journal. Perhaps 100 volunteers willingly sign up for the study, enthusiastic about trying the new supplement. The study may even match them to 100 controls consisting of students at the researcher's university. Such a scenario automatically biases the study toward positive results, because the two groups are not comparable. One could have predicted that the study group, consisting of self-selected fitness enthusiasts, would show higher fitness levels than the control group. Even if the study were carried out, its results would be useless.

This sounds rather obvious; however, numerous practices have claimed support from studies that are plagued with selection bias.

Scientific researchers should employ 'randomization' to reduce selection bias. By randomly selecting the study group and the control group from the same pool of subjects, as in the sketch below, researchers are less likely to conduct biased studies.
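Here is a minimal sketch of the scenario above, with made-up numbers: the supplement has no effect at all, self-selected enthusiasts simply start out fitter than the student controls, and drawing both groups at random from one shared pool is what removes the spurious difference.

import random

random.seed(2)

# Hypothetical baseline fitness scores: enthusiasts who answer an ad in a
# fitness journal start out fitter than students recruited as controls.
enthusiasts = [random.gauss(75, 8) for _ in range(100)]
students    = [random.gauss(60, 8) for _ in range(100)]

SUPPLEMENT_EFFECT = 0.0   # the supplement truly does nothing

def mean(values):
    return sum(values) / len(values)

# Biased design: self-selected enthusiasts take the supplement,
# university students serve as the controls.
treated  = [score + SUPPLEMENT_EFFECT for score in enthusiasts]
controls = students
print("Self-selected design: %.1f vs %.1f" % (mean(treated), mean(controls)))

# Randomized design: both groups are drawn from the same combined pool.
pool = enthusiasts + students
random.shuffle(pool)
treated  = [score + SUPPLEMENT_EFFECT for score in pool[:100]]
controls = pool[100:]
print("Randomized design:    %.1f vs %.1f" % (mean(treated), mean(controls)))

The first comparison shows a large 'benefit' that is entirely due to who signed up; the randomized comparison shows none, because any difference between the groups can now only come from the supplement or from chance.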


Publication Bias


Publication bias is a bias on the part of scientific journal editors and publishers, who are more likely to publish studies with positive results than studies with negative results. Positive results are thought to be more likely to attract readership and sell journals, and positive studies are more likely to catch the eye of the mainstream media. Studies that show no effect, or a negative effect, of the idea in question are not as interesting, or so it is thought.

Interestingly, John P. A. Ioannidis pointed out in Why Most Published Research Findings Are False that "in most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias."

So, not only do factors such as confirmation and selection bias contribute to potentially false results, but those false positive results may be more likely to be published than true negative results. A 2000 BMJ study looked at 48 review studies from Cochrane and states, "26 (54%) of reviews had missing studies and in 10 the number missing was significant." Thus, the problem is very prevalent in medical science. A helpful tool for detecting it is the "funnel plot" (see Detecting Publication Bias).
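The effect is easy to demonstrate with a minimal simulation (all numbers hypothetical): many small studies of a treatment with no true effect are run, only those that happen to reach a nominally significant positive result are 'published', and the published literature then points to an effect that does not exist.

import random
import statistics

random.seed(3)

TRUE_EFFECT = 0.0     # the treatment actually does nothing
N_PER_ARM = 20        # small studies, so chance findings are common
N_STUDIES = 500

def run_study():
    """Return the observed treatment-minus-control difference and a rough z score."""
    treatment = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    control   = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (statistics.stdev(treatment) ** 2 / N_PER_ARM +
          statistics.stdev(control) ** 2 / N_PER_ARM) ** 0.5
    return diff, diff / se

all_effects = []
published = []
for _ in range(N_STUDIES):
    diff, z = run_study()
    all_effects.append(diff)
    if z > 1.96:      # only 'significant positive' results get published
        published.append(diff)

print("Mean effect across all %d studies:      %+.2f" % (N_STUDIES, statistics.mean(all_effects)))
print("Mean effect across %d published studies: %+.2f" % (len(published), statistics.mean(published)))

The complete set of studies averages out to roughly zero, while the published subset suggests a real benefit. A funnel plot of the published studies would show the tell-tale asymmetry: small, imprecise studies with positive effects are present, and their negative counterparts are missing.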

Publication bias can be thought of as a form of selection bias on the part of publishers. It falsely presents data to the scientific community that is biased toward an effect, rather than presenting all of the available data that, when taken as a whole, may show little or no effect at all. Recently, some responsible journal editors have called for researchers to publicly register studies and their designs before conducting the studies. This would allow the scientific community access to all of the relevant research in a particular area. 

This is related to the 'File Drawer Problem', in which researchers tend to let negative studies sit in the 'file drawer' while actively pursuing a positive study that will confirm their idea and have a greater chance of being published.

 


Other Cognitive Biases


Below are items selected from the Wikipedia Cognitive Bias page.

Availability heuristic 

— estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples. This leads to using Hasty Generalizations and anecdotal evidence in arguments.
 

Availability cascade 

— a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or "repeat something long enough and it will become true").  

Bandwagon effect 

— the tendency to do (or believe) things because many other people do (or believe) the same. Related to group-think and herd behaviour. See the Solomon Asch Conformity Study.


Dunning-Kruger effect

-- the tendency for unskilled individuals to overestimate their abilities and, paradoxically, for skilled individuals to underestimate their abilities. Incompetence prevents one from recognizing incompetence; as skill level rises, one becomes more aware of one's own incompetence.
 
 

Experimenter's or Expectation bias 

— the tendency for experimenters to believe, certify, and publish data that agree with their expectations for the outcome of an experiment, and to disbelieve, discard, or downgrade the corresponding weightings for data that appear to conflict with those expectations.
  

False consensus effect 

— the tendency for people to overestimate the degree to which others agree with them.
 

Hawthorne effect 

— the tendency of people to perform or perceive differently when they know that they are being observed.
 

Herd instinct 

— the common tendency to adopt the opinions and follow the behaviors of the majority in order to feel safer and to avoid conflict.
 

In-group bias 

— the tendency for people to give preferential treatment to others they perceive to be members of their own groups.
 

Loss aversion 

— "the disutility of giving up an object is greater than the utility associated with acquiring it".   (see also sunk cost effects and Endowment effect).
  

Omission bias 

— the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).
 

Projection bias 

— the tendency to unconsciously assume that others share the same or similar thoughts, beliefs, values, or positions.
  

Recency effect 

— the tendency to weigh recent events more than earlier events (see also peak-end rule). 
 

Rosy retrospection 

— the tendency to rate past events more positively than one actually rated them when they occurred.

Self-fulfilling prophecy 

— the tendency to engage in behaviors that elicit results which will (consciously or not) confirm our beliefs.
  

Self-serving bias 

— the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias). 

Semmelweis reflex 

— the tendency to reject new evidence that contradicts an established paradigm.
 

Stereotyping 

— expecting a member of a group to have certain characteristics without having actual information about that individual.

Survivorship bias

-- our natural tendency to over-weigh the virtues and qualities of survivors, while discounting the non-survivors.
 

Trait ascription bias

— the tendency for people to view themselves as relatively variable in terms of personality, behavior and mood while viewing others as much more predictable.

White Hat Bias

-- bias leading to distortion of research-based information in the service of what may be perceived as righteous ends.
 

Wishful thinking 

— the formation of beliefs and the making of decisions according to what is pleasing to imagine instead of by appeal to evidence or rationality.
 

Zero-risk bias 

— preference for reducing a small risk to zero over a greater reduction in a larger risk.


Conclusion


Cognitive Biases distort our view of the world. As with cognitive dissonance, they are an integral part of our psychology. If we do not recognize them in our own thinking, we will be hopelessly prone to error. Our patients cannot afford to have doctors who are unaware of their own potential biases.


John Byrne,  M.D.
 
 
 
 
 
 
