Weird Beliefs
In this course, we have developed a disciplined method for evaluating causal claims. We know how to test whether something works. We know how to establish correlation, control for placebo effects, eliminate third causes, and proportion belief to evidence. Yet despite the existence of reliable methods for evaluating causation, large numbers of people believe in treatments, theories, and forces that fail those tests.
Alternative medical remedies, pseudoscientific systems, and paranormal beliefs are widespread and economically significant. Billions of dollars are spent annually on supplements, detox regimens, energy therapies, psychic consultations, ghost investigations, and products that lack convincing scientific support. The costs are not merely financial. They include wasted time, delayed medical treatment, misplaced fear, and in some cases genuine physical harm. Time, money, and energy spent on these beliefs have an opportunity cost; we could have put those resources toward effective pursuits. And there is an epistemic cost: sloppy, unscientific thinking in one area of our lives encourages irrationality elsewhere.
These beliefs have powerful emotional appeal. But the causal arguments offered for them fail. We know how to determine whether ginger reduces pain, whether zinc improves complexion, whether cupping speeds recovery, whether astrology predicts personality, or whether spirits interact with physical objects. In many cases, extensive investigation has been conducted and convincing evidence has not been found. “No convincing evidence in favor” is not the same as logical impossibility. It means the burden of proof has not been met. The rational attitude in such cases is skepticism or suspension of judgment—not belief.
These domains differ in important ways. Alternative medicine typically makes unsupported causal claims within biology. Pseudoscience imitates the appearance of science while evading its methodological discipline. Paranormal beliefs go further by positing entities or causal powers that operate outside or in violation of established natural laws. The categories overlap, but they represent distinct patterns of reasoning failure.
Alternative Medicine
Alternative medical remedies are treatments that claim to prevent or cure disease without credible scientific evidence or biological plausibility. They include homeopathy, Reiki, detox cleanses, magnet therapy, cupping, crystal healing, “energy frequency” devices, and a wide range of supplements marketed without rigorous testing.
These remedies make causal claims about the body. They assert that some intervention causes measurable physiological improvement. That makes them testable. If a treatment works, we should observe consistent correlations under controlled conditions. We should be able to distinguish treatment effects from placebo effects, natural recovery, regression to the mean, and random variation.
The persistence of alternative medicine is driven by predictable reasoning errors. Post hoc reasoning is common: “I took this supplement and felt better; therefore, it worked.” The improvement may reflect natural healing or unrelated lifestyle changes. The placebo effect is powerful, especially in conditions involving pain, mood, or fatigue. Illusory correlation leads people to remember successes and forget failures. Confirmation bias reinforces prior commitments. Motivated reasoning protects comforting narratives about “natural healing” or distrust of pharmaceutical companies.
The diagnostic principle is straightforward: if a treatment causes an effect, there should be a measurable difference between treatment and control groups. When well-designed studies repeatedly show no reliable difference, the causal claim lacks support. Retreating to “science can’t measure it” abandons the standards of empirical inquiry.
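To make the diagnostic principle concrete, here is a minimal sketch in Python (standard library only, with made-up illustrative scores, not data from any real study) of one standard way to check whether a treatment group differs from a control group by more than chance alone would produce: a permutation test.

```python
import random

# Hypothetical pain-reduction scores (illustrative numbers, not real data).
treatment = [2.1, 1.8, 2.5, 1.9, 2.2, 2.0, 1.7, 2.3]
control = [1.9, 2.2, 1.6, 2.4, 2.0, 1.8, 2.1, 1.5]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(treatment) - mean(control)

# Permutation test: if the treatment does nothing, the group labels are
# arbitrary, so shuffling the labels should frequently produce a
# difference at least as large as the one actually observed.
pooled = treatment + control
n_treat = len(treatment)
random.seed(0)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n_treat]) - mean(pooled[n_treat:])
    if abs(diff) >= abs(observed_diff):
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed_diff:.2f}, p = {p_value:.3f}")
# A large p-value means the apparent "effect" is indistinguishable
# from chance: the causal claim lacks support.
```

A real trial would also randomize assignment and blind both patients and evaluators; the point here is only that "it works" is a statistical claim we know how to test.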
Let's consider a famous, debunked example. In the 1990s, practitioners of "therapeutic touch," including some 40,000 nurses and caregivers in the United States, claimed to be able to hold their hands a few inches above a patient's body and detect and heal the patient's "energy field." Nursing organizations, hospitals, and other groups offered the practice as a treatment for illness. A skeptical nine-year-old girl, Emily Rosa, devised a simple scientific test of the energy-field-detection claim and got her analysis published in the prestigious Journal of the American Medical Association. Emily had practitioners sit at a table and try to detect whether the "energy field" of her hand was present or absent, with their view blocked by a cardboard barrier to prevent cheating. The practitioners placed both arms, hands up, through holes in the cardboard, and Emily flipped a coin to determine whether she would hover one of her own hands a few inches above their right or their left hand. Each practitioner got 10 to 20 randomized tries to detect her "energy field." When the results were tabulated, the practitioners did no better than random chance at guessing whether there was an energy field near their right or their left hand. We can put Emily's argument in a form similar to our astrology argument.
If therapeutic touch works, then practitioners will be able to detect energy fields in blind, experimental conditions at a rate significantly better than guessing. (IP)
Practitioners cannot detect energy fields in blind, experimental conditions at a rate significantly better than guessing. (EP)
_____________________________________________
Therefore, therapeutic touch doesn't work. (1, 2, MT)
(Rosa L, Rosa E, Sarner L, Barrett S. "A Close Look at Therapeutic Touch," JAMA. 1998;279(13):1005–1010.)
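Emily's second premise (EP) is a statistical claim, so it helps to see how such a premise gets tested. Here is a minimal sketch in Python using figures close to the study's reported aggregate (treat the exact numbers as illustrative): across all trials, practitioners picked the correct hand well under half the time, and an exact binomial test shows that result is fully consistent with blind guessing.

```python
from math import comb

# Illustrative figures close to the study's reported aggregate:
# the correct hand chosen on 123 of 280 coin-flip trials (about 44%).
n, k, p = 280, 123, 0.5

# One-sided exact binomial test: probability of getting k or more
# correct if every trial were a pure 50/50 guess.
p_value = sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

print(f"{k}/{n} correct ({k/n:.0%}); P(at least this many by chance) = {p_value:.2f}")
# The hit rate is actually *below* the 50% chance line, so the p-value
# is near 1: no evidence whatsoever of energy-field detection, which
# is exactly what premise (EP) asserts.
```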
The therapeutic touch case isn't isolated. When the scientific method, as we've studied it in the last several chapters, is applied carefully to a wide range of alternative medical therapy claims, they fall apart. Scientifically structured investigations of acupuncture, aromatherapy, chelation therapy, chiropractic, herbalism, homeopathy, iridology, massage therapy, reflexology, spiritual healing, and remote, blinded intercessory prayer have undermined their claims to real medical efficacy. (Ernst, E. "The role of complementary and alternative medicine," BMJ. 2000 Nov 4;321(7269):1133–1135.) Nevertheless, Americans still spend $30 billion a year on alternative medical therapies.
What does it show when an attempt to test an alternative medical claim fails to find evidence in support of the claim? Does it prove that the alternative medical claim is false? Or that all alternative medical remedies are worthless? The short answer is no. Strictly speaking, a failure to find evidence in support of a claim doesn't prove that the claim is false; absence of evidence isn't evidence of absence. But the repeated failure of a whole class of these medical claims suggests a pattern and a policy. At the heart of these claims there typically lurks some entity, force, or mechanism that lies outside of, or runs contrary to, the general medical picture of reality. That's why they are labeled "alternative" medical therapies. Take the "energy fields" debunked by Emily's study. Not only could none of the practitioners detect them, they don't fit within our scientific consensus worldview about physics, chemistry, biology, and medicine. Those "energy fields," if they were real, would upend modern science's picture of the world. Is science always right? No. Science, as we've seen, is defeasible, and when it gets something wrong, one of its virtues is that it changes. But extraordinary claims require extraordinary evidence.
The failure of a wide range of these alternative medical therapy claims suggests a policy: the rational attitude toward unproven alternative medical claims should be calibrated skepticism from the outset. If strong, controlled, replicable evidence emerges, belief should shift. Until then, anecdote and marketing shouldn't be enough to win our money or our assent.
Pseudoscience
Pseudoscience refers to systems of belief that adopt the appearance of scientific authority. They typically posit entities or forces said to inhabit the real world, but they insulate themselves from genuine empirical testing, replication, and disconfirmation. They use the language of science (data, charts, technical jargon) but resist the methodological discipline that defines science. Pseudoscience looks and sounds like science, but it does not accept evidence that would prove it wrong.
Examples include astrology, Chi, chakras, quantum healing, spiritual energy, flat earth theories, certain conspiracy theories, and various fringe scientific claims that mimic science's approach while avoiding peer review and falsifiability.
We developed a powerful argument against believing in astrology in earlier chapters. Astrology claims that celestial positions influence personality and life outcomes. If this were true, we would expect measurable correlations between zodiac signs and behavioral traits. Repeated investigation has failed to uncover such patterns. So, as we've seen, we should doubt that astrology works.
Furthermore, when confronted with this absence of evidence, believers often retreat to unfalsifiable explanations: "Science can't measure spiritual energy." That move insulates the claim from testing rather than strengthening it.
Conspiracy theories operate similarly. They interpret complex events as the product of coordinated hidden control despite weak evidence. They rely heavily on confirmation bias, selective data mining, motivated reasoning, and single-cause explanations. Lack of proof becomes proof of concealment. Illusory correlations are treated as meaningful design. Texas Sharpshooter reasoning highlights coincidental patterns after the fact while ignoring disconfirming data.
The defining feature of pseudoscience is methodological resistance. Real science welcomes disconfirmation and revises in light of new evidence. Pseudoscience explains away failed predictions, reframes contrary evidence as persecution, and protects the theory at all costs.
The Paranormal
Paranormal beliefs are claims that posit entities, forces, or causal mechanisms that operate outside, transcend, or directly violate established natural laws. They assert supernatural or contra-natural causation—ghosts interacting with matter, telepathy transmitting thoughts without physical mediation, psychokinesis moving objects without force, prophetic dreams revealing future events, or spirits communicating across physical boundaries.
Unlike alternative medicine, which makes unsupported biological claims within natural systems, or pseudoscience, which imitates science while avoiding its discipline, paranormal beliefs typically require revising or suspending well-established physical principles. They conflict with conservation laws, known mechanisms of causation, and our best-supported models of how matter and energy behave.
Because these claims contradict deeply confirmed background knowledge, their prior probability is extremely low. That does not make them logically impossible, but it means that extraordinarily strong and replicable evidence would be required to rationally believe them.
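Bayes' theorem makes "extraordinarily strong evidence" precise. Here is a minimal sketch with deliberately made-up numbers: even if an experiment would almost never produce a striking result unless the paranormal hypothesis were true, a sufficiently low prior leaves the posterior probability tiny.

```python
# Bayes' theorem with illustrative, made-up numbers.
# H = "telepathy is real"; E = a striking positive experimental result.
prior = 1e-6            # very low: H conflicts with well-confirmed physics
p_e_given_h = 0.99      # the result is near-certain if H is true
p_e_given_not_h = 0.01  # rare false positives: chance, leakage, fraud

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(f"P(H | E) = {posterior:.5f}")  # about 0.0001
# Even a 99%-reliable positive result leaves telepathy very probably
# false; only replicated evidence with far lower false-positive rates
# could rationally raise a prior this low to the level of belief.
```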
The reasoning errors that sustain paranormal belief are familiar. Hyperactive agency detection leads people to interpret ambiguous stimuli as intentional presence. Illusory correlation connects unrelated events. Post hoc reasoning treats sequence as causation. Confirmation bias filters ambiguous data in favor of the desired interpretation. Outcome bias and hindsight bias reinforce dramatic narratives. When controlled testing fails to produce reliable correlations—such as consistent electromagnetic anomalies in haunted locations—the absence of evidence is dismissed rather than treated as disconfirmation.
Consider the debunked example of crop circles. In the 1970s and 1980s, farmers around Southampton, England, began to find enormous, complicated patterns stamped down in their wheat fields. The patterns were remarkably regular and striking to the eye, particularly from the air. The wheat was bent over in neat, even waves to form nearly perfect circles, lines, and other shapes. Local news stations, citizens, amateur paranormal investigators, and many others became very excited about the phenomenon. People argued that the patterns had been formed by previously unknown weather vortices, landing alien spaceships, gravity field fluctuations, unusual tornadoes, and a host of other extraordinary phenomena. Over the years, patterns of increasing complexity and beauty continued to appear in fields in the region.
In 1991, two men from Southampton, Doug Bower and Dave Chorley, publicly announced that they were in fact responsible for the crop circles that had been appearing for 15 years. While drinking glasses of stout in a local pub and discussing UFO reports, which they thought were fabrications and mistakes, they dreamed up a method for making crop circles using ropes and a board with a loop of rope for a handle. Their goal was to illustrate just how gullible people are and how eager they are to believe in paranormal phenomena.
To stamp out a circle, one of them would hold a rope at a center point while the other held the far end and rotated in a circle. By stepping carefully and working outward from the center, they were able to create swirling patterns that hid their tracks and seemed to be beyond any human ability. They attached a small wire sighting gauge, like a gun sight, to the brims of their baseball caps; by spotting a distant landmark such as a barn or a tree, they could stamp out remarkably straight lines to complement their circles. As the years progressed, their skills improved and their patterns grew more complicated. Doug and Dave were delighted when numerous paranormal researchers insisted that the patterns were far too regular, large, and elaborate to have been created by any humans. The craze caught on, and people all over the world began imitating Doug and Dave's nocturnal art projects. There is now even an annual competition in England to see who can construct the best crop circle pattern. Despite Doug and Dave's confession, enthusiasts have insisted that there are too many crop circles, in too many places, and that many of them are beyond human ability. Many people still insist that aliens are responsible, but it would appear that crop circles are a hoax.
The persistence of the paranormal explanation is significant. Even after Bower and Chorley confessed and publicly demonstrated how they made crop circles, many believers invested a great deal of time and effort in arguing that the crop circles must still have a paranormal explanation. Going to such lengths to salvage the paranormal explanation over the simpler, natural, scientific one indicates that the desire to believe in spooky, supernatural, unusual, or extreme causes is often more powerful than our ability to reason clearly.
Extraordinary claims require proportionate evidence. In paranormal domains, that evidential burden is rarely met.
Why These Matter
These belief systems are not marginal curiosities. They influence health decisions, political attitudes, financial behavior, and cultural norms. They flourish because human cognition is prone to seeing patterns, inferring causes quickly, defending comforting narratives, and mistaking anecdote for data.
We have spent this semester developing tools for evaluating evidence, testing causal claims, detecting fallacies, and identifying bias. The domains of alternative medicine, pseudoscience, and the paranormal are real-world applications of those tools.
The rational stance is not reflexive dismissal. It is disciplined evaluation. If there is no correlation, there is no causation. If evidence repeatedly fails under controlled testing, belief is unwarranted. If a system resists falsification, it does not meet the standards of critical inquiry.
Beliefs should track evidence. When the evidence is weak, belief should be weak. When the evidence is absent, belief should be withheld. When stronger evidence emerges, belief should update. That principle applies just as much to extraordinary claims as it does to ordinary ones.
Applications to Cases
Here are a number of scenarios in which someone holds a weird belief and commits one (or more) of the biases, fallacies, or errors we've studied. You will need to master these mistakes and their definitions so that you can correctly identify them in instances like these. Be prepared to analyze such cases and label the fallacy:
After reading online claims that contrails are government “chemtrails,” an airport engineer spends hours finding photos of jet contrails that look suspiciously thick. He ignores thousands of ordinary flight patterns showing nothing unusual. Every cloudy photo becomes “confirmation.” Confirmation Bias
A skeptic hears that two vaccine recipients suffered blood clots and claims “the shots are killing people.” He ignores that the clot rate among the unvaccinated population is higher and the events are statistically expected. Ignoring Base Rates
A talk show host claims that 5G towers cause COVID-19 because both spread around the same time. He never checks whether infection rates correlate with tower density or whether any mechanism could link them. No Causation Without Correlation
After a government agency mishandles a disaster, a whistleblower concludes the officials must have intended harm. Because the outcome was bad, she assumes the decision-making itself was corrupt or malicious. Outcome Bias
Looking back on a terrorist attack, a historian says, “It was obvious the government let it happen—they ignored the warnings.” In reality, dozens of similar warnings were investigated and came to nothing; the “obvious” signs were visible only in retrospect. Hindsight Bias
After a damaging scandal hits his favored political party, a supporter refuses to accept mainstream reporting. He dismisses all journalists as corrupt and instead searches for fringe websites claiming the scandal was “a deep-state setup.” His need to protect his political identity outweighs his willingness to assess evidence objectively. Motivated Reasoning
Hospital staff swear the ER gets busier during full moons. When a researcher shows years of data disproving any increase, they shrug it off: “You just don’t see what we see.” The absence of correlation is dismissed, and the imagined cause survives. No Causation Without Correlation
A television host points to similarities between pyramids in Egypt and temples in Central America, claiming “they must share alien architects.” He paints a bullseye around a few coincidental resemblances while ignoring the vast cultural and architectural differences. The “pattern” is chosen after the fact to fit a predetermined story. Texas Sharpshooter Fallacy
Learn the concepts in this chapter, then go to this custom ChatGPT agent to practice:
You have access to a custom biases-and-fallacies practice tool designed specifically for this course. Think of it as a critical-thinking gym: you diagnose the reasoning error, commit to an answer, and then get targeted feedback on why it fits (and why nearby options don’t).
This tool is designed to help you identify reasoning mistakes in alternative medicine, pseudoscience, and the paranormal. All of the biases, fallacies, and errors we've studied previously apply.
When you ask the tool to quiz you (for example: “Quiz me on biases in pseudoscience,” “Quiz me on biases in the paranormal,” or “Give me a mixed quiz on alternative medicine, pseudoscience, and the paranormal”), it will do one of two things:
Scenario Identification
Present a short scenario modeled on course examples.
Give you four options (A–D), all drawn from the course list.
Ask: Which bias or fallacy is being demonstrated?
Definition Identification
Present a definition (verbatim or closely paraphrased from the course material).
Give you four options (A–D), all drawn from the course list.
Ask: Which term best matches this definition?
Important: The tool is built to make you commit to a classification first, then learn from the feedback—just like an exam situation.
The feedback tells you:
whether you matched the diagnostic features correctly
why your chosen option does or does not fit the scenario/definition
what the correct answer is, and what feature makes it the best match.
How to use the tool effectively
To get the full benefit:
Practice precision over vibes. Don’t just label—identify the mechanism:
Confirmation bias = selectively seeking/remembering confirming evidence.
Train contrasts. Many wrong answers are “near misses.” Use the feedback to learn exactly what separates similar entries (for example: hindsight bias vs. outcome bias; confirmation bias vs. motivated reasoning vs. backfire effect).
In causal units, force yourself to ask:
“Do we have correlation?” “Could the direction be reversed?” “Is there a third cause?” “Is this just post hoc timing?”
Why this matters for your grade
The in-person quizzes and exams use the same structure as the practice tool:
the same course definitions,
the same named biases/fallacies/causal mistakes,
the same expectation that you can identify the reasoning pattern and distinguish it from close alternatives.
The only difference is that the AI will not be there.
Students who use the tool seriously should expect:
higher quiz scores,
more confidence diagnosing reasoning errors,
fewer “I recognized it but couldn’t explain why” moments.
This tool enforces the definitions used in this course. In other contexts, people may define or subdivide these patterns differently. For this class, you are being graded on whether you can apply these definitions correctly and consistently.