We have accumulated a long list of biases, fallacies, and mistakes in reasoning in this course. We’ve seen how people make mistakes in reasoning at very different levels: in the search for evidence, the assessment of information, belief formation, the structuring of arguments, reactions to arguments, and so on. Now we can give some focused effort to thinking about these failures of critical thinking, giving them labels, and identifying them in real cases.
A formal fallacy and a cognitive bias are different kinds of failure, and they operate at different levels of analysis. Students often run them together because both involve reasoning going wrong, but the distinction matters if they are going to diagnose problems precisely rather than gesture at them.
A formal fallacy is a defect in the logical structure of an argument. The error is entirely a matter of form: even if the premises are true, the conclusion does not follow. Affirming the consequent is a familiar example for us: An argument of the form “If P, then Q; Q; therefore P” is invalid, regardless of what P and Q stand for, who is making the argument, or whether the conclusion happens to be true by accident. Formal fallacies are content-independent and normatively decisive. Once the form is shown to be invalid, the argument fails as an argument. The argument might be weak for other reasons, but a serious mistake in the logical structure like this disqualifies it from convincing a rational person of anything.
A cognitive bias, by contrast, is a systematic tendency in human reasoning. Biases describe how people actually form beliefs, search for evidence, interpret information, and update their views. Confirmation bias, for example, is the tendency to seek out evidence that supports what one already believes, to discount disconfirming evidence, and to read ambiguous cases in a favorable way. This is not an error in logical form. It is a pattern in human cognition discovered by empirical psychology. Biases come in degrees, vary across individuals and contexts, and are evaluated descriptively before they are evaluated normatively.
The key distinction is this: formal fallacies are properties of arguments; biases are properties of reasoners. Logic evaluates whether an inference is valid. Psychology explains why a particular person might find an invalid inference compelling. Confusing these two leads students to make category mistakes—such as saying an argument is invalid because the speaker is biased, or thinking that being “open-minded” guarantees logical correctness.
The relationship between the two is causal, not definitional. Cognitive biases often lead people to commit formal fallacies, but they do not constitute those fallacies. A person might affirm the consequent because confirmation bias pushes them to interpret evidence in a way that supports their prior belief. The bias explains why the mistake occurs. The fallacy explains what is wrong with the reasoning. They answer different questions.
A simple rule helps keep things straight: first evaluate the argument’s form and evidential support. Only after that ask why the argument might be persuasive to someone. The first task belongs to logic; the second to psychology. Keeping these domains separate gives students the conceptual tools they need to explain, clearly and accurately, what has gone wrong when a piece of reasoning fails.
Here is the full list of biases, fallacies, and mistakes in reasoning for our course.
I. Formal Fallacies:
Affirming the Consequent
Denying the Antecedent
II. Cognitive Biases:
Confirmation Bias
Motivated Reasoning
Gambler’s Fallacy
Optimism Bias
Planning Fallacy
Sunk Cost Fallacy
Placebo Effect
Texas Sharpshooter Fallacy
Availability Bias
Backfire Effect (Entrenchment)
Negativity Bias
Hindsight Bias
Outcome Bias
Hyperactive Agency Detection Device (HADD)
III. Causal Mistakes:
Causation Defined
No Causation Without Correlation
Correlation Does Not Imply Causation
Ignoring Regression to the Mean
Confusing Cause and Effect
Missing a Third Cause
Accidental or Meaningless Correlations (Spurious Correlations)
Overactive Causal Theorizing
Single Cause Mistake
Post Hoc Ergo Propter Hoc
I. Formal Fallacies
Affirming the consequent: The mistake of endorsing an argument (as valid) of the form:
1. If P then Q.
2. Q.
__________________
3. Therefore, P.
If a person studies philosophy, then they learn critical thinking.
Jordan has learned critical thinking.
_____________________
Therefore, Jordan studied philosophy.
This argument is invalid: even if both premises are true, Jordan might have learned critical thinking some other way (a logic course, self-study, debate practice).
The Detective Detective Ruiz thinks, “If someone broke in through the window, there would be glass on the floor.” He sees glass scattered by the sill and concludes, “So the burglar must have come in that way.” Later, it turns out the homeowner accidentally broke the pane from the inside while moving furniture.
The Doctor Dr. Patel tells a colleague, “If a patient has strep throat, they’ll have a sore throat.” When his next patient complains of throat pain, he concludes, “That must be strep.” He skips the test—only to learn later it was an allergy, not an infection.
The Teacher Ms. Lee believes, “If a student is interested in the subject, they’ll do well on exams.” After grading, she notices Chris scored high and tells another teacher, “He must love history.” But Chris actually hates the class—he just studies obsessively to keep his GPA up.
The Neighbor Dana looks out the window and sees wet pavement. She says to her partner, “It must have rained overnight.” When asked if she checked the weather report, she shrugs: “The street’s wet, what else could it be?” Later, she notices the sprinklers are still running.
Denying the antecedent: The mistake of endorsing an argument (as valid) of the form:
1. If P then Q.
2. ~P.
_________________
3. Therefore, ~Q.
If a student studies hard, then they will pass the exam.
This student did not study hard.
______________________________
Therefore, they will not pass the exam.
This argument is invalid. Studying hard is one way to pass, but not the only way. They might have cheated, or they might already know the material.
Consider:
If you win the lottery, then you will be rich.
Elon Musk did not win the lottery.
___________________________
Therefore, Elon Musk is not rich.
The Teacher’s Assumption “If students do the extra-credit work, they’ll get an A,” Mr. Lopez says. Later, when Maya skips the extra-credit project, he mutters, “Well, she’s not getting an A then.” He forgets that she already earned perfect scores on every test—there are other paths to the same result.
The Doctor’s Reasoning Dr. Kim tells a nurse, “If someone has diabetes, they’ll show high blood sugar.” When a patient’s test comes back normal, she concludes, “So, they can’t have diabetes.” She ignores that blood sugar fluctuates and the patient could be early-stage or on medication.
The Weather Forecaster “If the sky is cloudy, it will rain,” says a new intern at the weather center. The sky clears, so he confidently predicts, “No rain today.” That afternoon, a surprise storm rolls in from a distant front. The forecast failed because clouds weren’t the only cause of rain.
The Manager’s Logic “If we advertise online, sales will increase,” thinks the manager. When the ad budget is cut, she says, “Then sales are definitely going to drop.” But a viral customer review boosts sales anyway—proof that her premise wasn’t the only route to success.
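Both invalid forms can be checked mechanically: enumerate every assignment of truth values to P and Q, and look for a row where all the premises are true while the conclusion is false. Here is a minimal sketch of such a check in Python (the helper names implies and is_valid are ours, written just for this illustration):

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    # A form is valid when no assignment of truth values makes every
    # premise true while the conclusion comes out false.
    for p, q in product([True, False], repeat=2):
        if all(premise(p, q) for premise in premises) and not conclusion(p, q):
            return False  # counterexample found
    return True

# Affirming the consequent: If P then Q; Q; therefore P.
print(is_valid([implies, lambda p, q: q], lambda p, q: p))          # False

# Denying the antecedent: If P then Q; ~P; therefore ~Q.
print(is_valid([implies, lambda p, q: not p], lambda p, q: not q))  # False

# For contrast, the valid form modus ponens: If P then Q; P; therefore Q.
print(is_valid([implies, lambda p, q: p], lambda p, q: q))          # True
```

The counterexample the check finds for affirming the consequent is P false and Q true: the conditional holds and Jordan learned critical thinking, yet he never studied philosophy. The same assignment refutes denying the antecedent.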
II. Cognitive Biases
Confirmation bias: the mistake of looking for evidence or favoring information that confirms or supports what you already believe, while neglecting or ignoring evidence that would disprove it. We selectively search for arguments and sources that back up our view, and we notice confirming details while missing disconfirming ones.
Astrology Susan reads her horoscope and it tells her that Virgos are outgoing. She believes in astrology, and she searches her memory for times when she’s been outgoing. She thinks of a few cases where it seemed to be accurate and concludes that astrology works.
Prescient Dreams Juan thinks that he has prescient dreams, or dreams that tell the future. Out of the thousands and thousands of dreams he’s had over the years, he easily recalls the one or two times that he dreamt about some event and then the next day it seemed to happen. He fails to notice that the vast majority of his dreams did not work out like this. Those thousands of other dreams are easily forgotten and neglected.
The Diet Blogger Tina runs a wellness blog and swears coffee helps people stay thin. She saves every study showing caffeine boosts metabolism and skips the ones linking it to stress hormones or overeating. When readers cite contradictory evidence, she dismisses it as “poorly designed.” Her next post declares, “Coffee proven to aid weight loss,” citing only her favorite studies.
The Skeptic Dan insists ghosts are impossible. When friends tell stories about strange noises, he recalls every proven hoax and faulty camera photo he’s ever read about. But when multiple witnesses describe the same unexplained sighting, he waves it off as “coincidence.” Later he searches online for more examples of debunked hauntings and feels satisfied that the matter is settled.
Motivated Reasoning: goal-driven reasoning. People subject preference-inconsistent information to excessive skepticism while lowering those critical standards for information that would corroborate favored beliefs. That is, they are more critical and demand a higher level of evidence for conclusions that conflict with things they already believe, and they are less thoughtful or skeptical when evidence supports their favored views. A previously held belief steers the search for and analysis of information, rather than an unbiased gathering and evaluation of evidence leading to the most reasonable conclusion. Motivated reasoning may or may not involve confirmation bias as well: motivated reasoning is reasoning directed at reaching a particular conclusion, no matter what the truth or the evidence indicates, while confirmation bias is a particular kind of filtering of the evidence that leads to a conclusion.
The Sports Fan During playoffs, Alicia insists the referee is biased against her team. When a call goes their way, she says, “Finally, fair officiating.” When it goes against them, she pauses the replay and insists, “Anyone can see that’s wrong!” Her standard for “clear evidence” of bias rises and falls depending on whether her team benefits.
Super Fan: Diana loves her K-pop band and is completely dedicated to them. When one of the members is arrested for drunk driving, she immediately denies that he did it and argues that the police must be out to get him.
The Student Evaluating Grades Maya earns a low grade on a paper and reads her professor’s feedback with suspicion. “He probably just doesn’t like my views,” she says. When a classmate gets the same criticism, Maya calls the grading “fair and constructive.” Her analysis isn’t about evidence of fairness—it’s about preserving the belief that she deserved a better grade.
The Political Commentator Rachel identifies as a staunch environmentalist. When a study finds that nuclear power could reduce emissions, she immediately questions the funding sources and data reliability. But when another study concludes that renewables alone are sufficient, she accepts it uncritically. Her skepticism is not evenly applied; it’s directed at defending a preferred conclusion.
Gambler’s Fallacy: With random processes like a slot machine, a roll of the dice, a roulette wheel, or a card drawn from a shuffled deck, the different trials/spins/rolls/draws are independent. Each pull of the slot machine arm or roll of the dice is causally separate from the others: there is no memory, record, or causal influence carrying over from one trial to the next. The dice don’t remember what they rolled previously, and the slot machine or the lottery card you scratch off today keeps no record of what happened in previous cases. So the odds of winning or of getting a particular outcome don’t increase over time. Your odds of winning the lottery or rolling double sixes don’t improve because those outcomes haven’t come up for a while; the past doesn’t influence the future in these cases. Independent events don’t become more likely simply because they haven’t been happening, and the dice aren’t “due to win” after a losing streak. The gambler’s fallacy is the mistake of believing that a random, independent system has a memory and is due to win because the odds must improve with each subsequent loss. It is thinking that the past influences the future, and forming that expectation, in a case where it doesn’t. (A short simulation at the end of this entry makes the independence point concrete.)
The Slot Machine Regular After losing twenty spins in a row, Carla tells herself the machine “has to pay out soon.” She doubles her bet, convinced the streak of losses means a jackpot is coming. When she loses again, she mutters, “It’s getting close now,” as if the machine were keeping score.
The Dice Player During a board game, Luis notices that no one has rolled a six for several turns. “We’re overdue,” he says, shaking the dice harder. He believes that because six hasn’t shown up recently, it’s now more likely to appear—even though each roll is independent of the last.
The Roulette Gambler At the casino, the roulette wheel lands on black nine times in a row. Jason announces, “Red’s next—it has to be.” He bets big on red, confident that the universe will “even things out.” When black hits again, he insists that just makes red “even more certain” next time.
The Lottery Buyer Maria’s been buying tickets for the same state lottery for ten years without winning. She tells her friend, “After all this time, my number has to come up eventually.” She believes her long streak of losses somehow makes a win more probable, ignoring that each draw is random.
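Because the trials really are independent, the “due to win” intuition can be tested directly. Below is a minimal simulation sketch, assuming a fair coin: it asks how often heads comes up immediately after a run of five tails. If losing streaks made a win more likely, the proportion would sit well above one half; instead it stays at about 0.5.

```python
import random

random.seed(42)

# One million flips of a fair coin; True represents heads.
flips = [random.random() < 0.5 for _ in range(1_000_000)]

# Collect every flip that immediately follows five tails in a row.
after_streak = []
for i in range(5, len(flips)):
    if not any(flips[i - 5:i]):       # the previous five flips were all tails
        after_streak.append(flips[i])

print(f"Flips following five tails in a row: {len(after_streak)}")
print(f"Proportion heads on the next flip:   {sum(after_streak) / len(after_streak):.3f}")
# Prints roughly 0.5 -- the streak gives the next flip no "memory" to work with.
```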
Optimism Bias: the systematic tendency to overestimate the likelihood of positive outcomes and underestimate the likelihood of negative outcomes—especially for ourselves or people close to us. For many outcomes, particularly those that reflect well on us or involve our social standing, we believe things will turn out better than they actually will. We want good things to happen, so we unconsciously distort our predictions in that direction. We tend to think that we are better drivers than average, that our kids are smarter than the other kids, and that outcomes will go better for us than for other people. This bias is probably evolutionarily advantageous.
The Startup Founder Ava launches a new app and projects breaking even within six months. When an advisor points out that most startups fail within two years, she replies, “Yes, but ours is different—we’ve got passion.” She overlooks market saturation, assuming her venture will beat the odds because she’s behind it.
The Commuter James leaves for work five minutes late but insists, “Traffic won’t be bad today—I’ll make it.” He tells himself this even though it’s Monday, it’s raining, and it’s rush hour. He arrives twenty minutes late and blames “unusual conditions,” not his habitual overconfidence.
The Parent When statistics show that most teenagers engage in risky behavior, Victor insists, “Not my son—he’s smarter than that.” He interprets warnings as applying to “other families,” maintaining the belief that his child is unusually safe, careful, and mature.
The College Student Before finals, Marcus calculates that he needs a 95% average to earn an A. He studies half as much as planned, telling himself, “I usually do better under pressure.” He assumes that effort, stress, and luck will somehow align in his favor—just like they supposedly always do.
Planning Fallacy: the mistake of underestimating how much time, energy, money, and other resources a project will take. We tend to think that our plans will go more smoothly, more quickly, and with fewer problems than they actually will. We fail to think about common obstacles, typical performance, or average outcomes, and we assume that things will go better for us. Building bridges, estimating repairs, planning workloads, and managing time are all good examples.
The Student Paper Emma estimates she’ll finish her term paper in two days. She forgets about citation formatting, editing, and her other deadlines. Four days later, she’s still writing and wonders why it took so long. She assumed everything would go perfectly and ignored typical delays.
The Home Renovation Carlos tells his spouse, “We’ll redo the kitchen in a week—it’s just paint and cabinets.” Three weeks later, the walls are half finished and the budget is blown. He planned for the best-case scenario instead of the realistic one.
The Software Team A development manager promises a new app by the end of the quarter, assuming “we’ll code fast once we get started.” They overlook testing, debugging, and review cycles. The project takes twice as long and costs double what they projected.
The Student Group Project A team of undergraduates plans to film and edit a documentary over a weekend. They forget to budget time for equipment issues, file transfers, or reshoots. By Sunday night, they’ve barely recorded half of what they need.
Sunk Cost Fallacy is the tendency to continue investing time, money, or effort into something just because we’ve already invested in it—even when the rational choice is to stop. A sunk cost is any cost that has already been paid and cannot be recovered. Because it’s unrecoverable, it should not influence future decisions—yet people often let it do exactly that.
Movie Ticket Joe buys a $15 ticket to a terrible movie. After 30 minutes he realizes it’s awful and he thinks, “I paid for it, so I should stay and get my money’s worth.” But staying doesn’t recover his $15 — it only wastes more of his time. Rationally, he should leave and do something more enjoyable.
Restaurant: You order a huge meal, get halfway through, and feel full. “I should finish it; I paid for it.” But the money is already spent; you pay for the meal whether you finish it or not, and continuing only makes you uncomfortable.
Relationship Example: Someone stays in an unhappy relationship because they’ve “already put five years into it.” But time already spent isn’t a reason to continue; it’s a reason to reevaluate.
The Home Renovator Carlos has already spent $40,000 fixing up an old house when inspectors warn the foundation is failing. Starting over would cost less than repairs, but he insists, “I can’t stop now — I’ve already sunk too much into it.” He keeps spending to justify earlier expenses.
Texas Sharpshooter Fallacy occurs when someone focuses on similarities or patterns that fit a preferred conclusion while ignoring all the data that don’t, particularly when they find the pattern after the fact because they are looking for it. It’s named after the joke about a Texan who fires bullets randomly at a barn, then paints a bullseye around the tightest cluster of holes, claiming to be a great marksman. In reasoning, it happens when we impose a pattern or meaning on random data after the fact — clustering coincidences, cherry-picking evidence, or highlighting only the parts that seem to fit a narrative. The mistake lies in looking for patterns after seeing the results and then pretending those patterns were predicted or meaningful all along. This time order (data first, pattern afterward) distinguishes it from mere confirmation bias and the other fallacies. We see it in pseudoscience, conspiracy theories, marketing claims, and even scientific studies where researchers notice “significant” correlations only after exploring enough variables. (The simulation at the end of this entry shows how easily such “significant” patterns arise by chance.)
The Disease Cluster A journalist notices five cancer cases on one street and declares the area a “hot zone.” She never checked the dozens of nearby streets with similar numbers before writing the story. The pattern was spotted after the fact and treated as proof of a cause.
The Market “Genius” An investment blogger reviews hundreds of stocks, then features the three that happened to rise sharply and calls them “predictions that came true.” He ignores the many others that fell flat. He found the cluster of successes after seeing the results and drew the target around them.
The Ghost Hunter Reviewing hours of static from a voice recorder, Darren isolates three faint clicks that seem to form words and posts them online as “proof of a spirit voice.” He sifted through thousands of random noises until he could connect a few that fit his narrative.
The Nutrition Researcher A lab tests 60 foods against 100 health outcomes. One correlation — blueberries and lower blood pressure — reaches statistical “significance.” The team publishes it as a discovery, omitting that the other 5,999 comparisons showed nothing. The “pattern” was created by chance and noticed only afterward.
Starbucks Romance: After meeting her in the Starbucks line, dating, and falling in love, Michael tells Diana, "What are the odds that with all of the people on the planet, and all the Starbucks, I'd meet you there on that exact day at that exact time? It's destiny!"
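The nutrition example illustrates a general point about multiple comparisons: run enough tests and some will come out “significant” by chance alone. The sketch below leans on a standard statistical fact: when there is no real effect, p-values are approximately uniformly distributed, so each test has about a 5% chance of falling below the conventional 0.05 cutoff.

```python
import random

random.seed(0)

# Mirror the nutrition example: 60 foods x 100 health outcomes, with NO
# real effects anywhere. Under a true null hypothesis a p-value is
# (approximately) uniform, so each test dips below 0.05 about 5% of the time.
comparisons = 60 * 100
false_hits = sum(1 for _ in range(comparisons) if random.random() < 0.05)

print(f"Comparisons run: {comparisons}")
print(f"Chance 'discoveries' expected: {comparisons * 0.05:.0f}")
print(f"Chance 'discoveries' in this simulated run: {false_hits}")
```

With 6,000 comparisons and no real effects anywhere, roughly 300 chance “discoveries” are expected. Drawing the bullseye around one of them after the fact is exactly the sharpshooter’s move.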
Availability Bias: We make intuitive judgments of frequency and probability by reference to the ease with which instances of the class come to mind, instead of their objective frequencies. We substitute the question “Is it easy to think of examples?” for the question “How frequent is it really?” We make this mistake in cases like: How common are mass shootings? How likely are you to be the victim of gun violence? How common are terrorist attacks? Is violence on the whole on the rise or on the decline? The ease with which an example comes to mind is not a measure of how probable it is. We could think of Algorithm Bias as a special case of availability bias: social media algorithms notice that you pay attention to a particular type of fashion video, so they feed you more. As a result, you end up thinking that the fashion is much more popular than it really is.
The Traveler After seeing several news stories about plane crashes, Dana decides to drive cross-country instead of flying. She tells friends, “Air travel just feels unsafe lately.” She ignores the statistics showing that driving is vastly more dangerous, because crashes are less vivid in memory than televised disasters.
The Homeowner Greg buys expensive flood insurance right after seeing footage of a hurricane on the news, even though he lives far inland. The images of devastation are so vivid that he overestimates his own risk, thinking, “It could happen here next.”
The Parent After hearing about a child abduction case on social media, Angela refuses to let her kids walk to school alone. She can easily recall that story—but not the millions of uneventful days for other children. The salience of one dramatic example shapes her risk perception.
The Investor Raj remembers the one friend who made a fortune trading crypto. Those stories come readily to mind, while the many silent failures don’t. Believing success is common, he empties his savings into digital coins, mistaking memorable anecdotes for reliable evidence.
The Backfire Effect (entrenchment): We often think that if we encounter evidence contrary to something we believe, we will revise our belief accordingly, reducing our conviction in it. But in fact the opposite often happens, particularly when we have made that belief public and evident to the people around us: encountering evidence against a belief we hold often makes our conviction in it stronger. We scrutinize the opposing evidence more harshly, seek counterarguments, and rationalize why the new information must be wrong. This effect is strongest for beliefs tied to identity, ideology, or moral conviction — where changing one’s mind would feel like losing part of oneself. Rather than updating their beliefs, people “double down,” becoming more certain than before.
The Political Supporter When shown video evidence that her preferred candidate lied during a debate, Ava spends the evening watching partisan clips “exposing media bias.” The next day she’s even more certain her candidate is honest — “They’re attacking him because he tells the truth.” The challenge itself deepened her loyalty.
The Anti-Vaccine Activist Liam attends a medical lecture debunking vaccine myths with data and controlled studies. He leaves angry, saying, “Those doctors are all in on it — it’s proof they’re hiding something.” The more evidence he hears against his position, the more elaborate his defense becomes.
The Conspiracy Believer After a trusted friend patiently walks her through how the moon landing was filmed and verified, Erica insists, “That’s exactly what NASA wants you to believe.” The clear refutation makes her feel cornered, so she doubles down: “Now I know they’re covering something up.”
The Religious Debater When confronted with archaeological data that contradicts a literal reading of his scripture, Daniel feels shaken for a moment, then reasserts, “That’s just another test of faith.” The new evidence doesn’t weaken his conviction — it becomes further proof of his righteousness in resisting doubt.
Side note: Distinguishing the Backfire Effect, Confirmation Bias, and Motivated Reasoning
Confirmation Bias: The tendency to seek out, notice, and remember evidence that supports one’s beliefs while ignoring or discounting evidence that contradicts them. The core mechanism is selective exposure and filtering. You simply avoid or dismiss contrary evidence — it rarely changes your belief at all. Example: Reading only news sources that agree with your political views.
Motivated Reasoning: The tendency to analyze and evaluate evidence in a way that serves a desired conclusion, rather than seeking the most accurate one. The core mechanism is uneven scrutiny. People apply high skepticism to disconfirming evidence but low skepticism to confirming evidence. You still “process” the opposing evidence, but you twist or reinterpret it so that your preferred belief comes out ahead. Example: Accepting a weak study that supports your position while dismissing strong studies that don’t.
The Backfire Effect (Entrenchment): When exposure to strong contrary evidence not only fails to weaken a belief but actually strengthens it. The core mechanism is identity defense and cognitive threat. The challenge feels like an attack, triggering counter-arguing and emotional reinforcement. You end up more convinced than before seeing the evidence. Example: A conspiracy theorist who becomes more certain of the plot after reading a thorough debunking.
In short: When we commit confirmation bias, we avoid or ignore opposing evidence and cherry-pick evidence that supports a favored hypothesis. When we commit motivated reasoning, we have our conclusion first and then reason deliberately to support it, rather than letting the evidence guide the conclusion. And when we are guilty of the backfire effect, hearing contrary evidence makes us believe more strongly, in defiance of the evidence.
Negativity bias: confronted with good and bad news, our attention and memory skew toward the bad. We are prone to give biased, disproportionate attention to negative information: bad news gets more weight than equivalent good news. Evolution built our cognitive systems to be more sensitive and reactive to bad news than to good news, because of the damage bad news can do to our survival chances. It’s often summarized with the slogan, “It’s better to mistake a boulder for a bear than a bear for a boulder.”
The Employee Review During her annual evaluation, Clara’s boss praises nine aspects of her performance but notes one area needing improvement. That night, Clara replays only the criticism in her head and forgets the compliments. She leaves feeling like she’s failing.
The News Reader After watching an hour of evening news, Jamal can recall every violent story but none of the positive reports about medical breakthroughs or community projects. The grim items dominate his memory and shape his impression that “the world’s getting worse.”
The Teacher’s Grading Ms. Rivera reviews her students’ essays. Even though most papers show strong improvement, she can’t stop thinking about the three that were poorly written. The few negative examples overshadow the broader progress in her class.
The Romantic Partner During dinner, Ben compliments Emma several times but also teases her once about being late. Hours later, Emma says, “You were criticizing me all night.” The single negative remark lingers longer than all the positive ones combined.
Hindsight Bias: The tendency to believe, after an event has occurred, that we “knew it all along.” Once we know an outcome, it feels obvious or even inevitable, though it wasn’t predictable beforehand. People genuinely come to believe they saw it coming, even though they had no such confidence at the time. The bias is a distortion of memory and foresight, and it produces a false sense of predictive skill and overconfidence in our reasoning.
Stock Market Reaction: After a tech stock crashes, an investor says, “I knew that company was overvalued.” In reality, they neither voiced that supposed insight nor acted on it beforehand.
Election Prediction: A voter claims, “It was clear she was going to win by a landslide,” though before the election, polls and conversations showed uncertainty.
Medical Diagnosis: When a patient is diagnosed with a rare illness, a friend insists, “It was obvious—it had to be that,” forgetting their earlier guesses about several other conditions.
Sports Game: After a team’s upset victory, fans say, “You could just tell they were going to win,” ignoring that they predicted the opposite before the game.
Outcome bias: Judging a decision’s quality by its result rather than by whether the reasoning was sound given the evidence and information available at the time. A good outcome wrongly makes a risky decision seem wise; a bad outcome makes a reasonable decision seem foolish. Good outcomes lead us to overrate risky choices, and bad outcomes lead us to condemn sound reasoning.
Medical Judgment: A surgeon follows best practice in an operation that has a 95% success rate. The patient dies, and the family blames the surgeon for a “terrible choice.”
Business Decision: A company invests in a new product after strong market testing, but an unrelated economic downturn ruins sales. The board later calls the decision “irresponsible.”
Poker Hand: A player makes the statistically correct play but loses when the opponent catches a lucky card. Other players call it “a dumb move.”
Hiring Choice: A manager selects the most qualified applicant, who later underperforms. Coworkers say, “You should have known she wasn’t right for the job.”
III. Causal Mistakes
Causation defined: “C causes E” (in a population P) means that the cause C is an event or state such that, had it not been present, the probability of the effect E would have been lower. An effect depends upon a cause to occur, although other causes can bring it about. A cause raises the probability of its effect: in symbols, P(E given C) > P(E given not-C). Correlation is necessary but not sufficient for a causal relationship. The definition is probabilistic and counterfactual: “C causes E” does not mean that every time C happens E follows, or that E follows most of the time, or even just some of the time, or merely that C is correlated with E.
No Causation Without Correlation: If two variables are not statistically correlated—if changes in one do not systematically vary with changes in the other—then one cannot be said to cause the other. Correlation is a necessary (though not sufficient) condition for causation: without an observed association, any claim of causal connection is groundless. The failure to check for correlation leads people to invent or assume causes where none exist.
Stockbroker: An investor claims that wearing a particular tie on trading days improves his performance, yet his winning and losing days show no consistent pattern.
Astrologer: A friend insists Sagittariuses are less loyal in relationships, but personality data show no relationship between zodiac signs and fidelity.
Teacher: A teacher concludes that students seated on the left side of the room learn faster, though grades show no trend by seating position.
CEO: A CEO believes that starting meetings with a joke boosts profits, though quarterly data show no link between humor and earnings.
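Before accepting a causal story, then, the first question to ask is whether there is any correlation at all. Here is a minimal sketch for the stockbroker’s case, with hypothetical data generated independently so that there is no association to find: the average return comes out about the same with or without the tie.

```python
import random

random.seed(3)

# Hypothetical data: 250 trading days, a flag for whether the "lucky" tie
# was worn, and that day's return (in percent). The two are generated
# independently, so any causal claim would be groundless.
wore_tie = [random.random() < 0.5 for _ in range(250)]
returns = [random.gauss(0.0, 1.0) for _ in range(250)]

tie_days = [r for r, t in zip(returns, wore_tie) if t]
other_days = [r for r, t in zip(returns, wore_tie) if not t]

print(f"Average return with the tie:    {sum(tie_days) / len(tie_days):+.3f}")
print(f"Average return without the tie: {sum(other_days) / len(other_days):+.3f}")
# Both averages hover near zero: no association, hence no basis for causation.
```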
Placebo Effect: A positive or negative reaction that people have to expectations or beliefs about a medical treatment. Sometimes, merely believing they will get better makes people feel better. But it is the belief and the expectations, not the treatment, that are responsible. Lots of bogus, ineffective remedies get credit for working when it’s just the placebo effect. The expectation of healing produces real psychological or physiological responses—such as pain relief or mood elevation—that can be mistaken for medical causation. Recognizing this effect is essential to separating genuine drug efficacy from belief-induced outcomes.
The Vitamin Drink Sofia starts drinking an expensive “immune-boosting” tonic sold online. Within a week, she says she feels “more energetic” and “less stressed.” When tested, the tonic turns out to be nothing but flavored water — but her improvement continues because she genuinely believes it works.
The Sugar Pill During a migraine study, participants are unknowingly given sugar pills. Marcus reports that his headaches are “finally under control.” When told afterward that he received no active drug, he insists, “It must have done something — I felt better right away.” His expectation produced the relief.
The “Healing” Bracelet After buying a magnetic bracelet that claims to relieve joint pain, Evelyn wears it daily and soon says her wrists ache less. She credits the bracelet, unaware that controlled studies show no measurable effect. Her belief in the device creates the improvement she feels.
The Fake Cream In a dermatology trial, one group receives a cream labeled “anti-aging formula” that is actually plain moisturizer. Those users report smoother skin and fewer wrinkles than the control group. The difference comes from their expectations, not the cream’s ingredients.
Ignoring Regression to the Mean: In any process with natural variability, extreme outcomes tend to be followed by more moderate ones purely by chance. When people interpret this statistical tendency as evidence of a causal explanation—“the coach’s speech fixed the slump” or “the curse caused the decline”—they ignore regression to the mean. It is not causation but mathematical inevitability, as the simulation at the end of this entry illustrates.
Baseball Player: After a terrible month, a batter performs better the next month; fans credit his new lucky socks.
Student Grades: A student who aced one test performs closer to average on the next, and her teacher wrongly concludes she “stopped trying.”
Magazine Cover: Athletes who appear on a magazine cover often perform worse the following season, fueling talk of a “Sports Illustrated curse.”
Stock Analyst: A mutual fund that topped the charts one year returns to average performance the next; investors blame new management.
Hospital Administrator: Patients admitted on their worst days improve after any intervention, leading doctors to overestimate treatment success.
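A short simulation shows the pattern arising from chance alone (the numbers here are invented for illustration). Each player has a fixed skill level, and each month’s performance is skill plus random luck. The worst month-one performers improve in month two with no intervention whatsoever: no speech, no lucky socks, no curse lifted.

```python
import random

random.seed(1)

# 10,000 players: true skill is fixed; monthly performance adds fresh luck.
skills = [random.gauss(100, 10) for _ in range(10_000)]
month1 = [skill + random.gauss(0, 15) for skill in skills]
month2 = [skill + random.gauss(0, 15) for skill in skills]

# Select the 500 worst performers from month one.
worst = sorted(range(len(skills)), key=lambda i: month1[i])[:500]

avg1 = sum(month1[i] for i in worst) / len(worst)
avg2 = sum(month2[i] for i in worst) / len(worst)
print(f"Worst performers, month 1 average: {avg1:.1f}")  # far below 100
print(f"Same players,     month 2 average: {avg2:.1f}")  # moves back toward 100
```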
Confusing Cause and Effect: This fallacy reverses the direction of causation, assuming that because A and B are correlated, A must cause B—when in fact B may cause A. Misidentifying the direction of influence leads to false explanations and misguided interventions.
Public Health: Observers note that depressed people use social media more and claim that “social media causes depression,” though loneliness may drive both.
Sociologist: Seeing higher condom use among promiscuous people, a researcher claims “condoms cause promiscuity” rather than the reverse.
Media Critic: A columnist argues that watching violent sports makes people aggressive, though aggressive people may be drawn to such sports.
Economist: A study finds areas with more police have more crime; a pundit concludes police cause crime, ignoring that crime rates drive police presence.
Missing a Third Cause: Two variables can be correlated not because one causes the other, but because both are effects of a hidden third variable. When we ignore this possibility, we mistake coincidental correlation for causation. The true causal story often involves a background factor driving both events.
Athlete’s Ritual: A basketball player credits his pre-game handshake for wins, overlooking that both the handshake and victories result from team confidence.
Ice Cream and Drowning: Ice cream sales and drownings rise together in summer; temperature, not dessert, explains both.
Student Stress: Coffee consumption and poor sleep are correlated, but heavy workloads cause both.
Neighborhood Study: A city finds more playground injuries in wealthy areas and assumes wealth causes carelessness; higher access to playgrounds is the real driver.
Accidental or Meaningless Correlations: Sometimes two patterns line up purely by coincidence, without any causal or explanatory link between them. With large data sets, spurious correlations are statistically inevitable. The error arises when we take these random coincidences as meaningful evidence of causation.
Statistician: A chart shows that margarine consumption in Maine correlates with divorce rates; someone jokes, “Margarine ruins marriages.”
Hospital Worker: Nurses claim more ER visits occur during full moons, though detailed records show no real increase.
Trivia Buff: A blogger notices murders by steam correlate with Miss America’s age and spins a conspiratorial theory.
Sports Fan: A team’s win rate happens to mirror national GDP trends; a fan insists the economy affects morale.
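The “statistically inevitable” part can be demonstrated directly. The sketch below generates 200 short “trend lines” of pure random noise and counts how many pairs happen to correlate strongly; out of nearly 20,000 pairs, typically on the order of a hundred show |r| > 0.8 even though nothing is related to anything.

```python
import random
from itertools import combinations

random.seed(7)

def pearson(x, y):
    # Plain Pearson correlation coefficient, no external libraries needed.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# 200 unrelated random "trend lines," each with 10 yearly data points.
series = [[random.gauss(0, 1) for _ in range(10)] for _ in range(200)]

# Count the pairs that correlate strongly by pure coincidence.
strong = [(i, j, r) for i, j in combinations(range(200), 2)
          if abs(r := pearson(series[i], series[j])) > 0.8]

print(f"Pairs examined: {200 * 199 // 2}")
print(f"Pairs with |r| > 0.8 by pure chance: {len(strong)}")
```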
Overactive Causal Theorizing: Humans are pattern-seeking creatures prone to inferring causes where none exist. This hyperactive agency detection—seeing intention or design in random events—produces superstition, magical thinking, and pseudoscience. The mind prefers a causal story to randomness, even when evidence is absent.
Golfer: Convinced his striped socks bring luck, he wears them to every tournament.
Nurse: A nurse believes touching patients in a certain pattern channels healing energy.
Parent: A mother attributes her child’s recovery from a cold to crystals placed under the pillow.
Hockey Player: The team grows playoff beards “to keep the streak alive,” crediting facial hair for wins.
Traveler: A passenger avoids flights on the 13th of each month to “reduce bad luck,” convinced disasters are causally patterned.
Single Cause Mistake: Complex events rarely have one sufficient cause. The single cause fallacy oversimplifies by identifying one factor as “the” cause, ignoring the interaction of multiple influences. It produces polarized arguments and poor policy by overlooking causal complexity.
Political Analyst: Commentators debate why a party lost an election—“It was wokeness,” “It was inflation”—as if only one factor mattered.
Teacher: A student fails a class, and the instructor blames “laziness,” ignoring illness, family stress, and poor teaching materials.
Public Health: Officials blame obesity solely on personal choice, ignoring genetics, environment, and economic conditions.
Historian: A war is said to have occurred “because of nationalism,” omitting alliances, resource competition, and leadership decisions.
Post Hoc Ergo Propter Hoc: Latin for “after this, therefore because of this,” this fallacy assumes that because one event follows another, the first must have caused the second. It confuses temporal sequence with causation, a staple error in superstition and pseudoscience.
Baseball Fan: After wearing his lucky cap to a victory, a fan insists the hat caused the win.
Homeowner: After hanging a charm, household arguments stop; she credits the charm rather than changing circumstances.
Politician: A mayor claims his economic plan “created jobs” because employment rose soon after he took office, ignoring prior national trends.
Parent: A child gets better after drinking herbal tea, and the parent believes the tea cured the illness that would have resolved naturally.
No Causation Without Correlation: Correlation is necessary but not sufficient for causation. If two variables are truly related by cause and effect, their values will vary together in a measurable way. When no correlation exists—when the supposed cause leaves no statistical trace—it’s implausible to say it brings about the effect. For example, if cell phone users and non-users get cancer at the same rate, cell phone use isn’t causing cancer. A real cause must produce an observable pattern in the data.
Cell Phones and Brain Cancer For years, people claimed radiation from cell phones causes brain tumors. Researchers tracked millions of phone users over two decades, comparing tumor rates among heavy users, light users, and non-users. The rates were statistically identical across all groups. No correlation, no causation—if electromagnetic radiation caused tumors, higher use would predict higher incidence.
Vitamin C and the Common Cold A supplement company insists that taking 1,000 mg of vitamin C daily “prevents colds.” Medical researchers ran double-blind studies on thousands of participants for several winters. Those taking vitamin C caught colds at virtually the same rate as those taking placebos. No measurable difference means no causal prevention effect.
Full Moons and Crime People often claim “crazy things happen during a full moon.” Criminologists analyzed decades of police data comparing full-moon nights to others. Crime rates, arrests, and emergency calls were statistically identical. No correlation → no causation—if lunar phases influenced behavior, rates would fluctuate with the moon, but they don’t.
Vaccines and Autism Anti-vaccine activists claimed vaccines cause autism. Dozens of large-scale studies compared vaccinated and unvaccinated children. Autism rates were identical in both groups—no correlation whatsoever. The absence of correlation directly refutes the causal claim.
Correlation Does Not Imply Causation: A correlation is a consistent statistical association between two variables — they rise and fall together (positive correlation) or in opposite directions (negative correlation). But a mere correlation doesn’t prove one causes the other. Two variables can move together for many reasons that have nothing to do with causation.
Ice Cream and Drowning Researchers notice that ice cream sales and drowning deaths both increase sharply in the summer. Someone claims, “Ice cream causes drowning.” In fact, a third variable — hot weather — causes both: it drives people to buy ice cream and to swim more often. The correlation is real but non-causal.
Police Sirens and Crime Cities with more police sirens have higher crime rates. A politician concludes, “Police presence causes crime.” In reality, the direction of causation is reversed — crime drives the increased police response, not the other way around.
Shoe Size and Reading Ability Data show that children with larger shoe sizes read better. A superficial analysis says, “Big feet cause literacy.” The hidden variable is age — older kids both read better and have bigger feet. The correlation exists but isn’t causal.
Firefighters and Fire Damage A report shows that houses with more firefighters present suffer more damage. One might wrongly infer, “Firefighters make fires worse.” In truth, bigger fires require more firefighters — the size of the fire drives both the number of responders and the damage.
Summary
This chapter brought together many of the main ways reasoning goes wrong. Some errors are formal fallacies, which are defects in the logical structure of an argument. Others are cognitive biases, which are systematic tendencies in human thought that distort how we search for evidence, interpret information, and form beliefs. Still others are causal mistakes, which involve misunderstanding what causation is, how it differs from correlation, and how statistical patterns can mislead us.
These categories should be kept separate. A formal fallacy is a flaw in an argument. A cognitive bias is a pattern in the thinker. A causal mistake is an error in explaining why something happened. Good critical thinking requires identifying exactly what kind of mistake has occurred rather than vaguely saying that a piece of reasoning is “biased” or “illogical.”
Affirming the Consequent: The fallacy of reasoning, “If P, then Q; Q; therefore P.”
Denying the Antecedent: The fallacy of reasoning, “If P, then Q; not-P; therefore not-Q.”
Confirmation Bias: The tendency to seek and favor evidence that supports existing beliefs while neglecting contrary evidence.
Motivated Reasoning: Reasoning directed toward defending a preferred conclusion rather than following the evidence fairly.
Gambler’s Fallacy: The mistaken belief that past outcomes in an independent random process affect future outcomes.
Optimism Bias: The tendency to expect better outcomes for oneself than the evidence warrants.
Planning Fallacy: The tendency to underestimate the time, cost, or difficulty of completing a project.
Sunk Cost Fallacy: The mistake of continuing a failing course of action because of resources already invested.
Placebo Effect: Improvement caused by expectation or belief rather than by the treatment itself.
Texas Sharpshooter Fallacy: Finding a pattern after the fact and treating it as if it were meaningful or predicted in advance.
Availability Bias: Judging frequency or probability by how easily examples come to mind.
Backfire Effect (Entrenchment): The tendency for contrary evidence to strengthen a prior belief rather than weaken it.
Negativity Bias: The tendency to give greater weight to negative information than to equally important positive information.
Hindsight Bias: The tendency to see an outcome as obvious or predictable only after it has occurred.
Outcome Bias: Judging a decision by its result rather than by the quality of the reasoning behind it.
Hyperactive Agency Detection Device (HADD): The tendency to detect intention or agency where none exists.
Causation Defined: A cause is something that raises the probability of an effect.
No Causation Without Correlation: If two variables are not correlated, there is no basis for claiming that one causes the other.
Correlation Does Not Imply Causation: A correlation by itself does not show that one variable causes the other.
Ignoring Regression to the Mean: Mistaking a natural return from an extreme result toward average for a causal effect.
Confusing Cause and Effect: Reversing the direction of causation by treating an effect as a cause.
Missing a Third Cause: Mistaking correlation for causation when both variables are actually caused by a third factor.
Accidental or Meaningless Correlations: Treating a chance correlation as if it showed a real causal connection.
Overactive Causal Theorizing: Inventing causal explanations without sufficient evidence.
Single Cause Mistake: Oversimplifying a complex outcome by attributing it to only one cause.
Post Hoc Ergo Propter Hoc: Assuming that because one event came before another, it caused it.
Learn the concepts in this chapter, then practice with the course’s custom ChatGPT agent, described below.
You have access to a custom biases-and-fallacies practice tool designed specifically for this course. Think of it as a critical-thinking gym: you diagnose the reasoning error, commit to an answer, and then get targeted feedback on why it fits (and why nearby options don’t).
This tool is designed to help you master two core classifications:
I. General Biases and Fallacies (e.g., confirmation bias, motivated reasoning)
II. Causal Mistakes (e.g., correlation vs. causation errors, missing a third cause)
When you ask the tool to quiz you (for example: “Quiz me on biases,” “Quiz me on fallacies,” “Give me a mixed quiz,” or “Test me on causal mistakes”), it will do one of two things:
Scenario Identification
Present a short scenario modeled on course examples.
Give you four options (A–D), all drawn from the course list.
Ask: Which bias or fallacy is being demonstrated?
Definition Identification
Present a definition (verbatim or closely paraphrased from the course material).
Give you four options (A–D), all drawn from the course list.
Ask: Which term best matches this definition?
Important: The tool is built to make you commit to a classification first, then learn from the feedback—just like an exam situation.
The feedback tells you:
whether you matched the diagnostic features correctly
why your chosen option does or does not fit the scenario/definition
what the correct answer is, and what feature makes it the best match.
How to use the tool effectively
To get the full benefit:
Practice precision over vibes. Don’t just label—identify the mechanism:
Confirmation bias = selectively seeking/remembering confirming evidence.
Train contrasts. Many wrong answers are “near misses.” Use the feedback to learn exactly what separates similar entries (for example: hindsight bias vs. outcome bias; confirmation bias vs. motivated reasoning vs. backfire effect).
In causal units, force yourself to ask:
“Do we have correlation?” “Could the direction be reversed?” “Is there a third cause?” “Is this just post hoc timing?”
Why this matters for your grade
The in-person quizzes and exams use the same structure as the practice tool:
the same course definitions,
the same named biases/fallacies/causal mistakes,
the same expectation that you can identify the reasoning pattern and distinguish it from close alternatives.
The only difference is that the AI will not be there.
Students who use the tool seriously should expect:
higher quiz scores,
more confidence diagnosing reasoning errors,
fewer “I recognized it but couldn’t explain why” moments.
This tool enforces the definitions used in this course. In other contexts, people may define or subdivide these patterns differently. For this class, you are being graded on whether you can apply these definitions correctly and consistently.