Here is the full list of biases, fallacies, and mistakes in reasoning for our course.
I. General Biases and Fallacies:
Confirmation Bias
Motivated Reasoning
Gambler’s Fallacy
Optimism Bias
Planning Fallacy
Sunk Cost Fallacy
Placebo Effect
Texas Sharpshooter Fallacy
Availability Bias
Backfire Effect (Entrenchment)
Affirming the Consequent
Denying the Antecedent
Negativity Bias
Hindsight Bias
Outcome Bias
Hyperactive Agency Detection Device (HADD)
II. Widespread Weird Beliefs, Biases, and Fallacies:
Conspiracy Theories
Astrology
Superstitions
Alternative Medical Remedies
Pseudoscience and Paranormal Beliefs
III. Causal Mistakes:
Causation Defined
No Causation Without Correlation
Correlation Does Not Imply Causation
Ignoring Regression to the Mean
Confusing Cause and Effect
Missing a Third Cause
Accidental or Meaningless Correlations (Spurious Correlations)
Illusory Correlation
Overactive Causal Theorizing
Single Cause Mistake
Post Hoc Ergo Propter Hoc
Ignoring Base Rates (Statistical Reasoning Errors)
General Biases and Fallacies:
Confirmation bias is the mistake of looking for evidence that confirms a favored conclusion while neglecting or ignoring evidence that would disprove it. Susan reads her horoscope and it tells her that Virgos are outgoing. She believes in astrology, and she searches her memory for times when she’s been outgoing. She thinks of a few cases where it seemed to be accurate and concludes that astrology works. Juan thinks that he has prescient dreams, or dreams that tell the future. Out of the thousands and thousands of dreams he’s had over the years, he easily recalls the one or two times that he dreamt about some event and then the next day it seemed to happen. He fails to notice that the vast majority of his dreams did not work out like this. Those thousands of other dreams are easily forgotten and neglected.
The Diet Blogger Tina runs a wellness blog and swears coffee helps people stay thin. She saves every study showing caffeine boosts metabolism and skips the ones linking it to stress hormones or overeating. When readers cite contradictory evidence, she dismisses it as “poorly designed.” Her next post declares, “Coffee proven to aid weight loss,” citing only her favorite studies.
The Investor Mark is certain cryptocurrency will surge again. He follows social media accounts predicting a rebound and ignores economists warning of structural decline. When a friend shares a report showing falling market fundamentals, Mark calls it “fearmongering.” He keeps searching until he finds articles saying, “Experts see huge upside ahead,” and takes those as proof he’s right.
The Voter Emma believes her candidate is incorruptible. When news breaks about secret donations, she skips the investigative reports and reads opinion pieces from outlets she already trusts. She reposts headlines like “Baseless Attacks on a Great Leader” without looking at the evidence. “They’re just jealous,” she tells friends who ask if she’s read the original documents.
The Skeptic Dan insists ghosts are impossible. When friends tell stories about strange noises, he recalls every proven hoax and faulty camera photo he’s ever read about. But when multiple witnesses describe the same unexplained sighting, he waves it off as “coincidence.” Later he searches online for more examples of debunked hauntings and feels satisfied that the matter is settled.
The Environmental Activist Lena is convinced that all corporations lie about sustainability. When a company releases a detailed report showing verifiable reductions in emissions, she doesn’t read it—she assumes it’s “greenwashing.” But when an anonymous blogger accuses the same company of dumping waste, she shares it instantly, saying, “Knew it.” She spends hours reading posts that reinforce her anger, never once checking whether any claim holds up.
The Health Researcher Dr. Patel believes that a plant-based diet prevents nearly all chronic illness. When a large meta-analysis shows only modest effects, he calls it “funded by Big Dairy.” But when a small, uncontrolled study reports dramatic benefits, he circulates it to colleagues as “more proof.” His data folder ends up filled with positive findings, while contradictory evidence quietly disappears from view.
Motivated Reasoning: People scrutinize preference-inconsistent information with excessive skepticism while lowering those critical standards for information that would corroborate favored beliefs. That is, they are more critical and demand a higher level of evidence for conclusions that conflict with things they already believe, and they are less thoughtful or skeptical when evidence supports their favored views. A previously held belief steers the search for and analysis of information, rather than an unbiased gathering and evaluation of evidence leading to the most reasonable conclusion. Motivated reasoning may or may not involve confirmation bias as well: motivated reasoning is reasoning directed at reaching a particular conclusion, no matter what the truth or the evidence indicates, while confirmation bias is a particular kind of filtering of the evidence that leads to a conclusion.
The Policy Debater Jacob supports tougher sentencing laws. When shown data suggesting long sentences don’t reduce crime, he scrutinizes every statistic, questioning the methodology, sample size, and political motives of the researchers. But when a think tank publishes a chart showing “crime drops after stricter laws,” he accepts it immediately, saying, “Finally, real evidence.” His evaluation standard shifts depending on whether the conclusion aligns with his position.
The Student Evaluating Grades Maya earns a low grade on a paper and reads her professor’s feedback with suspicion. “He probably just doesn’t like my views,” she says. When a classmate gets the same criticism, Maya calls the grading “fair and constructive.” Her analysis isn’t about evidence of fairness—it’s about preserving the belief that she deserved a better grade.
The Parent and Vaccines Tom believes vaccines cause developmental problems. He reads an article from a major medical journal debunking that claim but dismisses it as “industry propaganda.” Later, he finds a blog post written by a parent describing a supposed “vaccine injury” and shares it as “proof.” His reasoning process is guided by the conclusion he already wants to reach, not by consistent standards of evidence.
The Sports Fan During playoffs, Alicia insists the referee is biased against her team. When a call goes their way, she says, “Finally, fair officiating.” When it goes against them, she pauses the replay and insists, “Anyone can see that’s wrong!” Her standard for “clear evidence” of bias rises and falls depending on whether her team benefits.
The Romantic Partner Evan suspects his girlfriend is losing interest. When she’s slow to reply to a text, he reads it as proof. When she sends an affectionate message later, he tells himself she’s “just feeling guilty.” Every new piece of evidence gets twisted to confirm the outcome he already fears. His reasoning isn’t about discovering the truth—it’s about managing emotional discomfort.
The Political Commentator Rachel identifies as a staunch environmentalist. When a study finds that nuclear power could reduce emissions, she immediately questions the funding sources and data reliability. But when another study concludes that renewables alone are sufficient, she accepts it uncritically. Her skepticism is not evenly applied; it’s directed at defending a preferred conclusion.
Gambler’s Fallacy: With random generators like a slot machine, a rolled die, a roulette wheel, or a card drawn from a shuffled deck, the different trials/spins/rolls/draws are independent. That is, each pull of the slot machine arm or roll of the dice is causally separate from the others. There is no memory, record, or causal influence of one roll on the next. The dice don’t remember what they rolled previously. The slot machine or the lottery card you scratch off today doesn’t keep a record of what happened in previous cases. So the odds of winning or getting a particular outcome don’t increase over time. Your odds of winning the lottery or of rolling double sixes don’t improve because those results haven’t come up for a while. The dice, a slot machine, or other independent events in your life don’t become more likely simply because they haven’t been happening. The dice aren’t “due to win” when they’ve been losing for a while. The gambler’s fallacy is the mistake of believing that a random, independent system has a memory and is due to win because the odds must improve with each subsequent loss.
The Slot Machine Regular After losing twenty spins in a row, Carla tells herself the machine “has to pay out soon.” She doubles her bet, convinced the streak of losses means a jackpot is coming. When she loses again, she mutters, “It’s getting close now,” as if the machine were keeping score.
The Dice Player During a board game, Luis notices that no one has rolled a six for several turns. “We’re overdue,” he says, shaking the dice harder. He believes that because six hasn’t shown up recently, it’s now more likely to appear—even though each roll is independent of the last.
The Roulette Gambler At the casino, the roulette wheel lands on black nine times in a row. Jason announces, “Red’s next—it has to be.” He bets big on red, confident that the universe will “even things out.” When black hits again, he insists that just makes red “even more certain” next time.
The Lottery Buyer Maria’s been buying tickets for the same state lottery for ten years without winning. She tells her friend, “After all this time, my number has to come up eventually.” She believes her long streak of losses somehow makes a win more probable, ignoring that each draw is random.
The Coin Toss Predictor In a classroom demonstration, Professor Lee flips a coin that lands heads five times straight. Several students call “tails” for the next one, insisting “it’s due.” They reason that because heads has come up repeatedly, tails must now be more likely—forgetting that each flip is a separate 50/50 chance.
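Professor Lee’s point can be checked by brute force. Below is a minimal Python sketch (the function and variable names are my own, made up for illustration) that simulates a million fair coin flips and estimates the probability of heads on just those flips that immediately follow five tails in a row. If the students’ “it’s due” reasoning were right, the estimate would come out well above 0.5; independence predicts it stays at about 0.5.

```python
import random

def prob_heads_after_tail_streak(n_flips=1_000_000, streak=5, seed=42):
    """Estimate P(heads) on flips that immediately follow `streak`
    consecutive tails. Independence predicts ~0.5 regardless of the
    streak; the gambler's fallacy predicts something higher."""
    rng = random.Random(seed)
    flips = [rng.random() < 0.5 for _ in range(n_flips)]  # True = heads
    hits = total = 0
    run = 0  # length of the current run of tails
    for f in flips:
        if run >= streak:      # the previous `streak` flips were all tails
            total += 1
            hits += f          # True counts as 1
        run = 0 if f else run + 1
    return hits / total

print(round(prob_heads_after_tail_streak(), 3))  # ~0.5, not higher
```

Changing `streak` to any other value gives the same answer: the coin has no memory, so conditioning on the past does not move the odds.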
The Sports Fan After his team loses six games in a row, Connor says, “We’re bound to win the next one—our luck’s got to turn.” He doesn’t consider the team’s injuries or weak defense; he just feels the losing streak itself guarantees reversal, as though randomness must self-correct.
The Job Applicant After being rejected from ten job applications, Nina says, “I’ve had so much bad luck lately—something’s got to go my way soon.” She believes her odds of landing the next job are higher simply because she’s failed so many times before.
The Weather Watcher It’s rained for seven straight weekends. Paul tells his family, “There’s no way it’ll rain again this Saturday—it’s due to clear up.” He assumes the weather pattern must “even out,” as if the clouds remember the past.
Optimism Bias is the systematic tendency to overestimate the likelihood of positive outcomes and underestimate the likelihood of negative outcomes—especially for ourselves or people close to us. It’s a form of motivated reasoning: we want good things to happen, so we unconsciously distort our predictions in that direction. We tend to think we are better drivers than average, that our kids are smarter than other kids, and that outcomes will go better for us than for other people. This bias is probably evolutionarily advantageous.
The Startup Founder Ava launches a new app and projects breaking even within six months. When an advisor points out that most startups fail within two years, she replies, “Yes, but ours is different—we’ve got passion.” She overlooks market saturation, assuming her venture will beat the odds because she’s behind it.
The Commuter James leaves for work five minutes late but insists, “Traffic won’t be bad today—I’ll make it.” He tells himself this even though it’s Monday, it’s raining, and it’s rush hour. He arrives twenty minutes late and blames “unusual conditions,” not his habitual overconfidence.
The Doctor Dr. Moreno rarely double-checks medication orders, saying, “I never make those kinds of mistakes.” When a colleague suggests using a verification checklist, she shrugs it off, believing that errors happen to “other people.” Her self-assessment ignores normal human error rates.
The Parent When statistics show that most teenagers engage in risky behavior, Victor insists, “Not my son—he’s smarter than that.” He interprets warnings as applying to “other families,” maintaining the belief that his child is unusually safe, careful, and mature.
The Investor After reading about market volatility, Lena says, “Sure, some people lose money—but I’ve done my research.” She invests heavily in a single stock, convinced her outcome will be above average. When it crashes, she calls it “bad luck,” not bad forecasting.
The College Student Before finals, Marcus calculates that he needs a 95% average to earn an A. He studies half as much as planned, telling himself, “I usually do better under pressure.” He assumes that effort, stress, and luck will somehow align in his favor—just like they supposedly always do.
Planning Fallacy is the mistake of underestimating how much time, energy, money, and other resources a project will take. We tend to think our plans will go more smoothly, faster, and with fewer problems than they will. We fail to think about the common obstacles, typical performance, or average outcomes, and assume that things will go better for us. Building bridges, estimating repairs, workload, and time management are all good examples.
The Student Paper Emma estimates she’ll finish her term paper in two days. She forgets about citation formatting, editing, and her other deadlines. Four days later, she’s still writing and wonders why it took so long. She assumed everything would go perfectly and ignored typical delays.
The Home Renovation Carlos tells his spouse, “We’ll redo the kitchen in a week—it’s just paint and cabinets.” Three weeks later, the walls are half finished and the budget is blown. He planned for the best-case scenario instead of the realistic one.
The Software Team A development manager promises a new app by the end of the quarter, assuming “we’ll code fast once we get started.” They overlook testing, debugging, and review cycles. The project takes twice as long and costs double what they projected.
The Student Group Project A team of undergraduates plans to film and edit a documentary over a weekend. They forget to budget time for equipment issues, file transfers, or reshoots. By Sunday night, they’ve barely recorded half of what they need.
The Event Planner Lila organizes a charity fundraiser and schedules setup to begin two hours before guests arrive. She assumes deliveries and volunteers will all show up on time. When traffic delays the caterer and sound equipment, she panics—she’d only planned for everything to go right.
The Bridge Project A city announces that a new bridge will open in 18 months at a cost of $20 million. Three years later, it’s still unfinished and costs have tripled. Officials originally relied on optimistic estimates instead of historical data from similar projects.
Sunk Cost Fallacy is the tendency to continue investing time, money, or effort into something just because we’ve already invested in it—even when the rational choice is to stop. A sunk cost is any cost that has already been paid and cannot be recovered. Because it’s unrecoverable, it should not influence future decisions—yet people often let it do exactly that.
Movie Ticket Joe buys a $15 ticket to a terrible movie. After 30 minutes he realizes it’s awful and thinks, “I paid for it, so I should stay and get my money’s worth.” But staying doesn’t recover his $15 — it only wastes more of his time. Rationally, he should leave and do something more enjoyable.
Restaurant: You order a huge meal, get halfway through, and feel full. “I should finish it; I paid for it.” But the money is already spent — continuing only makes you uncomfortable.
Business Project Example: A company has spent $2 million developing a product, but testing shows it won’t sell. The CEO insists on finishing it: “We’ve come this far — we can’t quit now.” That’s the sunk cost fallacy at scale.
Relationship Example: Someone stays in an unhappy relationship because they’ve “already put five years into it.” But time already spent isn’t a reason to continue; it’s a reason to reevaluate.
The Graduate Student After three years of research, Maya realizes her dissertation topic is unworkable. She could switch projects and finish in a year, but says, “I’ve already spent too much time on this to quit now.” She keeps pouring effort into a dead end, letting past investment dictate future effort.
The Home Renovator Carlos has already spent $40,000 fixing up an old house when inspectors warn the foundation is failing. Starting over would cost less than repairs, but he insists, “I can’t stop now — I’ve already sunk too much into it.” He keeps spending to justify earlier expenses.
Placebo Effect: A positive or negative reaction that people have to expectations or beliefs about a medical treatment. Sometimes believing they will get better makes people feel better. But it is the belief and the expectations, not the treatment, that are responsible. Lots of bogus, ineffective remedies get credit for working when it’s just the placebo effect.
The Vitamin Drink Sofia starts drinking an expensive “immune-boosting” tonic sold online. Within a week, she says she feels “more energetic” and “less stressed.” When tested, the tonic turns out to be nothing but flavored water — but her improvement continues because she genuinely believes it works.
The Sugar Pill During a migraine study, participants are unknowingly given sugar pills. Marcus reports that his headaches are “finally under control.” When told afterward that he received no active drug, he insists, “It must have done something — I felt better right away.” His expectation produced the relief.
The “Healing” Bracelet After buying a magnetic bracelet that claims to relieve joint pain, Evelyn wears it daily and soon says her wrists ache less. She credits the bracelet, unaware that controlled studies show no measurable effect. Her belief in the device creates the improvement she feels.
The Fake Cream In a dermatology trial, one group receives a cream labeled “anti-aging formula” that is actually plain moisturizer. Those users report smoother skin and fewer wrinkles than the control group. The difference comes from their expectations, not the cream’s ingredients.
The “Natural Sleep Aid” Derek starts taking an herbal supplement he saw on TikTok to fix his insomnia. He begins sleeping better almost immediately, telling friends, “Finally, something that works.” Later, he learns the capsules contain only chamomile powder and inert fillers — no active sleep compound at all.
The Negative Version (Nocebo Effect) After hearing a friend complain that a new allergy medication causes nausea, Priya feels sick the first day she takes it. Her reaction fades when she learns she’d been given a placebo in a blind trial. Her expectation of harm created the symptoms.
Texas Sharpshooter Fallacy occurs when someone focuses on similarities or patterns that fit a preferred conclusion while ignoring all the data that don’t, particularly when they find the pattern after the fact because they are looking for it. It’s named after the joke about a Texan who fires bullets randomly at a barn, then paints a bullseye around the tightest cluster of holes, claiming to be a great marksman. In reasoning, it happens when we impose a pattern or meaning on random data after the fact — clustering coincidences, cherry-picking evidence, or highlighting only the parts that seem to fit a narrative. The mistake lies in looking for patterns after seeing the results and then pretending those patterns were predicted or meaningful all along. This time order (getting the data first, then finding the pattern) distinguishes it from mere confirmation bias and the other fallacies. We see it in pseudoscience, conspiracy theories, marketing claims, and even scientific studies where researchers notice “significant” correlations only after exploring enough variables.
The Disease Cluster A journalist notices five cancer cases on one street and declares the area a “hot zone.” She never checked the dozens of nearby streets with similar numbers before writing the story. The pattern was spotted after the fact and treated as proof of a cause.
The Market “Genius” An investment blogger reviews hundreds of stocks, then features the three that happened to rise sharply and calls them “predictions that came true.” He ignores the many others that fell flat. He found the cluster of successes after seeing the results and drew the target around them.
The Ghost Hunter Reviewing hours of static from a voice recorder, Darren isolates three faint clicks that seem to form words and posts them online as “proof of a spirit voice.” He sifted through thousands of random noises until he could connect a few that fit his narrative.
The Nutrition Researcher A lab tests 60 foods against 100 health outcomes. One correlation — blueberries and lower blood pressure — reaches statistical “significance.” The team publishes it as a discovery, omitting that the other 5,999 comparisons showed nothing. The “pattern” was created by chance and noticed only afterward.
The Political Commentator After an election, Alex points out that counties with more pizza restaurants mostly voted for his preferred candidate. “See? It’s cultural!” he says, never mentioning the hundreds of counties where the same didn’t hold. The pattern appeared only after searching for one that matched his conclusion.
The Conspiracy YouTuber Mara watches news clips and circles every mention of the number 33, claiming it proves “secret coordination.” She ignores the thousands of unrelated numbers and events she didn’t mark. The pattern wasn’t discovered — it was invented after seeing the data.
Availability Bias: We make intuitive judgments of frequency and probability by the ease with which instances of a class come to mind, rather than by their objective frequencies. We mistake “Is it easy to think of examples?” for “Is it probable?” We make this mistake especially in cases like: How common are mass shootings? How likely are you to be the victim of gun violence? How common are terrorist attacks? Is violence on the whole on the rise or in decline?
The ease with which an example comes to mind is not a measure of how probable it is.
The Traveler After seeing several news stories about plane crashes, Dana decides to drive cross-country instead of flying. She tells friends, “Air travel just feels unsafe lately.” She ignores the statistics showing that driving is vastly more dangerous, because crashes are less vivid in memory than televised disasters.
The Homeowner Greg buys expensive flood insurance right after seeing footage of a hurricane on the news, even though he lives far inland. The images of devastation are so vivid that he overestimates his own risk, thinking, “It could happen here next.”
The Parent After hearing about a child abduction case on social media, Angela refuses to let her kids walk to school alone. She can easily recall that story—but not the millions of uneventful days for other children. The salience of one dramatic example shapes her risk perception.
The Investor Raj remembers the one friend who made a fortune trading crypto. Those stories come readily to mind, while the many silent failures don’t. Believing success is common, he empties his savings into digital coins, mistaking memorable anecdotes for reliable evidence.
The Voter After watching repeated coverage of violent protests, Marisol concludes that “the country’s falling apart.” She recalls dozens of clips of chaos but forgets that such events are rare and heavily televised. The frequency of exposure, not the true rate of violence, drives her judgment.
The Health Worrier Leo has recently read three articles about people developing rare cancers after using a common household cleaner. He immediately throws his out, convinced it’s dangerous. The stories are memorable and alarming, so he infers the risk is high—though the statistical likelihood is tiny.
The Backfire Effect (Entrenchment): We often think that if we encounter evidence contrary to something we believe, we will revise our belief accordingly, reducing our conviction in it. But in fact, the opposite often happens: when we encounter contrary evidence, our conviction gets stronger. People scrutinize opposing evidence more harshly, seek counterarguments, and rationalize why the new information must be wrong. This effect is strongest for beliefs tied to identity, ideology, or moral conviction — where changing one’s mind would feel like losing part of oneself. Rather than updating beliefs, people “double down,” becoming more certain than before.
The Political Supporter When shown video evidence that her preferred candidate lied during a debate, Ava spends the evening watching partisan clips “exposing media bias.” The next day she’s even more certain her candidate is honest — “They’re attacking him because he tells the truth.” The challenge itself deepened her loyalty.
The Anti-Vaccine Activist Liam attends a medical lecture debunking vaccine myths with data and controlled studies. He leaves angry, saying, “Those doctors are all in on it — it’s proof they’re hiding something.” The more evidence he hears against his position, the more elaborate his defense becomes.
The Conspiracy Believer After a trusted friend patiently walks her through how the moon landing was filmed and verified, Erica insists, “That’s exactly what NASA wants you to believe.” The clear refutation makes her feel cornered, so she doubles down: “Now I know they’re covering something up.”
The Religious Debater When confronted with archaeological data that contradicts a literal reading of his scripture, Daniel feels shaken for a moment, then reasserts, “That’s just another test of faith.” The new evidence doesn’t weaken his conviction — it becomes further proof of his righteousness in resisting doubt.
The Diet Enthusiast Tara’s friend shows her a large meta-analysis disproving the claims of her extreme detox diet. Instead of reconsidering, she declares, “Of course mainstream science would attack a natural cure — they can’t profit from it.” Her belief strengthens precisely because it’s challenged.
The Climate Skeptic Presented with decades of temperature data showing rising global averages, Ron insists, “That data’s obviously manipulated — the fact they’re pushing it so hard proves they’re lying.” The evidence meant to persuade him only hardens his disbelief.
Comparing the Backfire Effect, Confirmation Bias, and Motivated Reasoning
Confirmation Bias: The tendency to seek out, notice, and remember evidence that supports one’s beliefs while ignoring or discounting evidence that contradicts them. The core mechanism is selective exposure and filtering. You simply avoid or dismiss contrary evidence — it rarely changes your belief at all. Example: Reading only news sources that agree with your political views.
Motivated Reasoning: The tendency to analyze and evaluate evidence in a way that serves a desired conclusion, rather than seeking the most accurate one. The core mechanism is uneven scrutiny. People apply high skepticism to disconfirming evidence but low skepticism to confirming evidence. You still “process” the opposing evidence, but you twist or reinterpret it so that your preferred belief comes out ahead. Example: Accepting a weak study that supports your position while dismissing strong studies that don’t.
The Backfire Effect (Entrenchment): When exposure to strong contrary evidence not only fails to weaken a belief but actually strengthens it. The core mechanism is identity defense and cognitive threat. The challenge feels like an attack, triggering counter-arguing and emotional reinforcement. You end up more convinced than before seeing the evidence. Example: A conspiracy theorist who becomes more certain of the plot after reading a thorough debunking.
In short: When we commit confirmation bias, we avoid or ignore opposing evidence and cherry-pick evidence that supports a favored hypothesis. When we commit motivated reasoning, we have our conclusion first and then reason deliberately to support that conclusion, rather than letting the evidence guide the conclusion. And when we are guilty of the backfire effect, hearing contrary evidence makes us believe more strongly, in defiance of the evidence.
Affirming the consequent: The mistake of endorsing an argument (as valid) of the form:
1. If P then Q.
2. Q.
__________________
3. Therefore, P.
If a person studies philosophy, then they learn critical thinking.
Jordan has learned critical thinking.
_____________________
Therefore, Jordan studied philosophy. This argument is ill-formed, not valid: Jordan might have learned critical thinking some other way.
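The invalidity of this form can be verified mechanically. Here is a minimal Python sketch (the helper names are my own) that checks every truth assignment of P and Q for a row where all premises are true but the conclusion is false; any such row means the form is invalid.

```python
from itertools import product

def valid(premises, conclusion):
    """Brute-force validity check over all truth assignments of (P, Q):
    an argument form is valid iff no assignment makes every premise
    true while making the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False  # found a counterexample row
    return True

implies = lambda a, b: (not a) or b  # material conditional

# Affirming the consequent: If P then Q; Q; therefore P.
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))   # False: invalid
# Compare modus ponens: If P then Q; P; therefore Q.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))   # True: valid
```

The counterexample row is P false, Q true: Jordan learned critical thinking (Q) without studying philosophy (P), making both premises true and the conclusion false.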
The Detective Detective Ruiz thinks, “If someone broke in through the window, there would be glass on the floor.” He sees glass scattered by the sill and concludes, “So the burglar must have come in that way.” Later, it turns out the homeowner accidentally broke the pane from the inside while moving furniture.
The Doctor Dr. Patel tells a colleague, “If a patient has strep throat, they’ll have a sore throat.” When his next patient complains of throat pain, he concludes, “That must be strep.” He skips the test—only to learn later it was an allergy, not an infection.
The Teacher Ms. Lee believes, “If a student is interested in the subject, they’ll do well on exams.” After grading, she notices Chris scored high and tells another teacher, “He must love history.” But Chris actually hates the class—he just studies obsessively to keep his GPA up.
The Neighbor Dana looks out the window and sees wet pavement. She says to her partner, “It must have rained overnight.” When asked if she checked the weather report, she shrugs: “The street’s wet, what else could it be?” Later, she notices the sprinklers are still running.
The Business Owner After sales spike, Marcus says, “If our new ad campaign worked, sales would go up—and sales are up! It worked!” He stops checking other factors, like a competitor’s price increase that sent customers his way. He confuses coincidence with confirmation.
The Friend Talia tells her roommate, “If Evan’s interested in me, he’ll text me first.” When a text arrives that evening—asking about homework—she smiles and says, “See? He likes me.” She mistakes a friendly message for romantic interest because it fits her expected pattern.
Denying the antecedent: The mistake of endorsing an argument (as valid) of the form:
1. If P then Q.
2. ~P.
3. Therefore, ~Q.
If a student studies hard, then they will pass the exam.
This student did not study hard.
______________________________
Therefore, they will not pass the exam.
This argument is invalid. Studying hard is one way to pass, but not the only way. They might have cheated, or they might already know the material.
Consider:
If you win the lottery, then you will be rich.
Elon Musk did not win the lottery.
___________________________
Therefore, Elon Musk is not rich.
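The same brute-force check exposes denying the antecedent, and contrasts it with the valid form modus tollens. (Again, this is only an illustrative sketch; the helper names are made up.)

```python
from itertools import product

implies = lambda a, b: (not a) or b  # material conditional

def counterexamples(premises, conclusion):
    """List the truth assignments (P, Q) where every premise is true
    but the conclusion is false; a nonempty list means invalid."""
    return [(p, q) for p, q in product([True, False], repeat=2)
            if all(prem(p, q) for prem in premises) and not conclusion(p, q)]

# Denying the antecedent: If P then Q; not P; therefore not Q.
print(counterexamples([lambda p, q: implies(p, q), lambda p, q: not p],
                      lambda p, q: not q))   # [(False, True)]: invalid
# Modus tollens: If P then Q; not Q; therefore not P.
print(counterexamples([lambda p, q: implies(p, q), lambda p, q: not q],
                      lambda p, q: not p))   # []: valid
```

The counterexample (False, True) is exactly the Elon Musk case: he did not win the lottery (P false), yet he is rich (Q true).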
The Teacher’s Assumption “If students do the extra-credit work, they’ll get an A,” Mr. Lopez says. Later, when Maya skips the extra-credit project, he mutters, “Well, she’s not getting an A then.” He forgets that she already earned perfect scores on every test—there are other paths to the same result.
The Doctor’s Reasoning Dr. Kim tells a nurse, “If someone has diabetes, they’ll show high blood sugar.” When a patient’s test comes back normal, she concludes, “So, they can’t have diabetes.” She ignores that blood sugar fluctuates and the patient could be early-stage or on medication.
The Weather Forecaster “If the sky is cloudy, it will rain,” says a new intern at the weather center. The sky clears, so he confidently predicts, “No rain today.” That afternoon, a surprise storm rolls in from a distant front. The forecast failed because clouds weren’t the only cause of rain.
The Manager’s Logic “If we advertise online, sales will increase,” thinks the manager. When the ad budget is cut, she says, “Then sales are definitely going to drop.” But a viral customer review boosts sales anyway—proof that her premise wasn’t the only route to success.
The Parent’s Belief “If you go to college, you’ll get a good job,” Leo’s father tells him. When Leo decides not to attend, his father sighs, “Then you’ll never make anything of yourself.” He assumes college is the only way to a good job, ignoring other possible routes.
The Detective Detective Harris reasons, “If the suspect confesses, he’s guilty.” When the suspect refuses to confess, she concludes, “Then he must be innocent.” She overlooks that a confession is only one route to establishing guilt; fingerprints, witnesses, or motive could still convict him.
Negativity bias: confronted with good and bad news, our attention and memory skew toward the bad. We give disproportionate attention to negative information over equivalent good news. Evolution built our cognitive systems to be more sensitive and reactive to bad news than to good news, because bad news can do far more damage to our survival chances.
The Employee Review During her annual evaluation, Clara’s boss praises nine aspects of her performance but notes one area needing improvement. That night, Clara replays only the criticism in her head and forgets the compliments. She leaves feeling like she’s failing.
The News Reader After watching an hour of evening news, Jamal can recall every violent story but none of the positive reports about medical breakthroughs or community projects. The grim items dominate his memory and shape his impression that “the world’s getting worse.”
The Teacher’s Grading Ms. Rivera reviews her students’ essays. Even though most papers show strong improvement, she can’t stop thinking about the three that were poorly written. The few negative examples overshadow the broader progress in her class.
The Romantic Partner During dinner, Ben compliments Emma several times but also teases her once about being late. Hours later, Emma says, “You were criticizing me all night.” The single negative remark lingers longer than all the positive ones combined.
The Manager’s Outlook After a successful product launch, 95% of customer feedback is enthusiastic, but one angry online review calls the product “overpriced junk.” The manager fixates on that comment, ignoring the hundreds of five-star ratings.
The Investor Sofia’s portfolio grows steadily for six months, then drops slightly in one bad week. She panics and decides to pull her money out of the market. The recent loss feels far more important than months of consistent gains.
Hindsight Bias The tendency to believe, after an event has occurred, that we “knew it all along.” Once we know an outcome, it feels obvious or inevitable, even though it wasn’t predictable beforehand. People genuinely believe they foresaw the result, even though they had no such confidence at the time. This distortion of memory and foresight produces a false sense of predictive insight and overconfidence in our reasoning.
Stock Market Reaction: After a tech stock crashes, an investor says, “I knew that company was overvalued.” In reality, they didn’t act on that supposed insight beforehand.
Election Prediction: A voter claims, “It was clear she was going to win by a landslide,” though before the election, polls and conversations showed uncertainty.
Medical Diagnosis: When a patient is diagnosed with a rare illness, a friend insists, “It was obvious—it had to be that,” forgetting their earlier guesses about several other conditions.
Sports Game: After a team’s upset victory, fans say, “You could just tell they were going to win,” ignoring that they predicted the opposite before the game.
Relationship Breakup: After friends break up, someone says, “I always knew they wouldn’t last,” despite expressing surprise when the news broke.
Outcome bias: Judging the quality of a decision by its result rather than by whether the reasoning and evidence were sound given the information available at the time. A good outcome wrongly makes a risky decision seem wise; a bad outcome makes a reasonable decision seem foolish.
Medical Judgment: A surgeon follows best practice in an operation that has a 95% success rate. The patient dies, and the family blames the surgeon for a “terrible choice.”
Business Decision: A company invests in a new product after strong market testing, but an unrelated economic downturn ruins sales. The board later calls the decision “irresponsible.”
Poker Hand: A player makes the statistically correct play but loses when the opponent catches a lucky card. Other players call it “a dumb move.”
Hiring Choice: A manager selects the most qualified applicant, who later underperforms. Coworkers say, “You should have known she wasn’t right for the job.”
Public Policy: A mayor orders an evacuation during a hurricane warning that fizzles out. Residents accuse him of “overreacting,” even though he acted on the best available forecasts.
Driving in Weather: A driver takes a safe route but gets into an accident caused by another motorist. Friends say, “You shouldn’t have gone out at all,” though that judgment depends on knowing the outcome.
Military Strategy: A general makes a reasonable tactical decision with incomplete intelligence. When the battle goes poorly, historians later call it a “blunder,” ignoring that it was rational under uncertainty.
Widespread Weird Beliefs, Biases, and Fallacies:
Conspiracy Theories A conspiracy theory is a belief that significant events or conditions are secretly caused by the coordinated actions of powerful groups, despite weak or absent evidence. Conspiracy thinking thrives on confirmation bias, pattern perception, and distrust of institutions. It treats coincidences as meaningful, rejects falsifying evidence, and relies on circular reasoning: lack of proof is itself “proof” of how well the conspirators hide their tracks. For critical thinking, conspiracy theories exemplify how cognitive biases and fallacies can create the illusion of deep understanding while blocking genuine inquiry.
The Airport Engineer (Confirmation Bias) After reading online claims that contrails are government “chemtrails,” an engineer spends hours finding photos of jet streams that look suspiciously thick. He ignores thousands of ordinary flight patterns showing nothing unusual. Every cloudy photo becomes “confirmation.”
The Statistic Skeptic (Ignoring Base Rates) A man hears that two vaccine recipients suffered blood clots and claims “the shots are killing people.” He ignores that the clot rate among the unvaccinated population is higher and the events are statistically expected.
The Documentary Viewer (Single Cause Fallacy) A film argues that a shadowy banking family secretly controls global politics, treating every crisis—wars, recessions, pandemics—as their deliberate doing. It ignores dozens of independent causes and actors.
The Talk-Show Host (No Causation Without Correlation) A host claims that 5G towers cause COVID-19 because both spread around the same time. He never checks whether infection rates correlate with tower density or whether any mechanism could link them.
The Whistleblower Believer (Outcome Bias) After a government agency mishandles a disaster, a viewer concludes the officials must have intended harm. Because the outcome was bad, she assumes the decision-making itself was corrupt or malicious.
The Historian (Hindsight Bias) Looking back on a terrorist attack, a historian says, “It was obvious the government let it happen—they ignored the warnings.” In reality, dozens of similar warnings were investigated and came to nothing; the “obvious” signs were visible only in retrospect.
The Data Miner (Illusory Correlation) A YouTuber claims that spikes in Google searches for “flu symptoms” perfectly align with stock market drops, suggesting market manipulation through viral releases. The pattern is statistical noise, but he treats it as deliberate orchestration.
The Political Partisan (Motivated Reasoning) After a damaging scandal hits his favored political party, a supporter refuses to accept mainstream reporting. He dismisses all journalists as corrupt and instead searches for fringe websites claiming the scandal was “a deep-state setup.” His need to protect his political identity outweighs his willingness to assess evidence objectively.
The Distrustful Parent (Motivated Reasoning) A mother already fearful of pharmaceutical companies reads that vaccines are safe in dozens of studies but gives more weight to a single anecdotal story about a child “injured by shots.” Her emotional distrust drives her interpretation—she accepts any evidence that confirms her fear and discredits anything that challenges it.
Astrology Astrology is the belief system claiming that the positions and movements of celestial bodies (such as planets and stars) influence human personality, behavior, and events on Earth. It presents itself as a system of prediction and personality analysis based on birth time and planetary alignment, but it lacks empirical support and does not meet the standards of scientific reasoning.
Confirmation Bias The Predictive Breakup A man remembers that his horoscope warned of “emotional turbulence” the week his relationship ended. He takes this as proof astrology works, forgetting the many weeks when predictions missed completely.
Illusory Correlation “The Argumentative Aries” A teacher notices two outspoken students are Aries and concludes the sign causes assertiveness. She ignores all the quiet Aries students in her classes who contradict the pattern.
Post Hoc Ergo Propter Hoc “The Lucky Interview” After nailing a job interview on a “favorable star day,” a woman tells friends the planets must have aligned for her success. She treats timing as causation, not coincidence.
Outcome Bias “The Accurate Prediction” A horoscope predicted “a major life change.” When a friend later moves apartments, she calls the forecast “amazing.” She never revisits the dozens of vague predictions that led nowhere.
Motivated Reasoning “The Comfort of the Cosmos” After losing her job, a believer insists it happened “because Saturn is teaching me a lesson.” The idea comforts her, so she defends it fiercely against skeptical friends, not because of evidence but because it gives meaning to bad luck.
Overactive Causal Theorizing “The Mercury Malfunction” When her phone crashes during Mercury retrograde, a student claims, “That planet always messes with technology.” She sees intentional cosmic interference in random mechanical failure.
No Causation Without Correlation “The Data Denier” Presented with studies showing no statistical link between zodiac signs and personality traits, an astrology enthusiast replies, “Science just can’t measure spiritual energy.” She rejects the absence of correlation and clings to imagined causation.
Superstitions Superstitions are beliefs that certain unrelated actions, symbols, or coincidences influence luck, success, or safety—such as walking under ladders bringing bad luck, breaking mirrors causing misfortune, or athletes wearing “lucky” gear to win games. These beliefs persist because of cognitive biases and fallacies that mistake coincidence, emotion, or selective memory for evidence of causal power. Superstitious reasoning posits a false causal connection between a behavior (walking under a ladder, not cutting your beard during the playoffs) and a result (having bad luck, winning the playoffs).
Post Hoc Ergo Propter Hoc Scenario: “The Lucky Socks” A basketball player wears the same unwashed socks during a winning streak. When the team finally loses on the one night he forgets them, he’s sure that breaking the ritual caused the defeat. He mistakes sequence for causation.
Illusory Correlation “The Black Cat” After nearly tripping on a sidewalk minutes after seeing a black cat, a woman becomes convinced cats bring bad luck. She forgets the countless times she saw them and nothing happened.
Confirmation Bias “The Broken Mirror” A student accidentally breaks a mirror, then notices every small inconvenience for weeks—a flat tire, a bad grade—and interprets each as evidence of “seven years’ bad luck.” He ignores all the good days that contradict the superstition.
Overactive Causal Theorizing “The Playoff Beard” During the playoffs, a hockey player grows a beard “to keep the streak alive.” When the team wins, he credits the beard; when they lose, he decides he didn’t trim it “right.” Random outcomes are woven into an imagined causal pattern.
Outcome Bias Scenario: “The Cursed Walkway” After one worker slips while walking under a ladder, coworkers declare the spot “bad luck.” The unlucky outcome convinces them the act itself was reckless—even though it was just an accident unrelated to superstition.
Missing a Third Cause Scenario: “The Umbrella Indoors” A classmate opens an umbrella inside before an exam and later fails. Friends say, “Told you that’s bad luck!” ignoring that he stayed up all night gaming and didn’t study—the real cause of the failure.
Motivated Reasoning “The Ritual Pitcher” A pitcher insists his elaborate pre-game ritual “controls fate.” When teammates tease him, he argues that giving it up might “jinx” the team. Deep down, the routine calms his anxiety—but he interprets the comfort as causal power.
No Causation Without Correlation “The Full Moon Shift” Hospital staff swear the ER gets busier during full moons. When a researcher shows years of data disproving any increase, they shrug it off: “You just don’t see what we see.” The absence of correlation is dismissed, and the imagined cause survives.
Alternative medical remedies are treatments or health practices that claim to heal, prevent, or improve medical conditions without credible scientific evidence or biological plausibility. They include a wide range of approaches—such as homeopathy, crystal healing, energy therapies like Reiki, detox diets, magnet therapy, cupping, and many herbal “cures.” These remedies often rely on personal testimony and anecdotal success stories rather than controlled testing. They persist because they appeal to intuition, control over one’s health, distrust of institutions, and the comforting illusion that natural or ancient means must be safer or more authentic. Their popularity is sustained by reasoning errors—people confuse correlation with causation, mistake expectation for effect, and reinterpret chance recoveries as proof of efficacy.
Post Hoc Ergo Propter Hoc “The Detox Miracle” After completing a week-long “juice cleanse,” a student feels energized and claims the detox “flushed out toxins.” In reality, her improved mood comes from better sleep and skipping alcohol. She assumes feeling better after the cleanse means it caused the change, confusing sequence with causation—the classic post hoc error.
Placebo Effect “The Healing Crystals” A woman carries rose quartz “for heart health” and says her blood pressure has improved since she started. When measured, it hasn’t changed—what improved was her stress level from believing she was protected. Her expectation created genuine relief, but no physiological effect; she mistakes belief-induced comfort for medical causation.
Illusory Correlation “The Herbal Healer” A man swears an herbal tea cured his chronic headaches. He remembers every day he drank it and felt better but forgets the many times the pain eased without it. He perceives a pattern of recovery that doesn’t exist, interpreting coincidence as consistent evidence of healing.
Confirmation Bias “The Acupuncture Advocate” After a few sessions, Priya feels temporary relief from back pain and concludes acupuncture “really works.” She ignores the sessions that had no effect and dismisses studies showing no difference from sham treatments. She collects only confirming experiences, reinforcing her belief without testing it against disconfirming cases.
Motivated Reasoning “The Natural Medicine Believer” After her doctor prescribes medication, Lila insists on using “natural remedies” instead, saying, “Big Pharma just wants profit.” When the herbs don’t help, she blames “detox symptoms” rather than failure. Her reasoning isn’t about evidence; it’s about preserving a comforting belief that nature heals and corporations harm—a motive-driven defense rather than rational evaluation.
Overactive Causal Theorizing “The Energy Healer” A man feels tingling in his hands after a Reiki session and concludes the practitioner’s “energy transfer” must be real. Random sensations from relaxation are woven into an imagined causal story of healing power—the mind’s instinct to impose intention and purpose on natural bodily responses.
No Causation Without Correlation “The Homeopathic Cure” Homeopathy promises relief for seasonal allergies using highly diluted “remedies.” In controlled studies, recovery rates are identical between homeopathy and placebos—no measurable difference, no causal trace. Believers dismiss this as “science not understanding energy medicine,” clinging to a cause that leaves no correlation.
Pseudoscience and Paranormal Beliefs Pseudoscience and paranormal belief systems both claim to explain the world in ways that seem scientific or evidential but fail to meet the basic standards of critical inquiry. Pseudoscience refers to belief systems, products, or practices that imitate the appearance and language of science without following its methods—they invoke data, jargon, and authority but reject falsifiability, replication, and peer review. Pseudoscientific claims often rely on selective evidence, anecdotes, or untestable explanations. Common examples include astrology, homeopathy, “quantum healing,” ancient alien theories, and “energy frequency” diagnostics. These systems offer the emotional comfort of understanding and control while discarding the discipline of scientific skepticism.
Paranormal beliefs, by contrast, posit forces, entities, or phenomena that violate known natural laws—ghosts, telepathy, psychic powers, hauntings, prophetic dreams, and communication with the dead. These beliefs promise mystery, meaning, and agency in an uncertain world. They persist because the human mind is wired to detect patterns, infer hidden causes, and imagine agency even where none exists.
Both pseudoscience and paranormalism exploit familiar reasoning errors: confirmation bias, patternicity, motivated reasoning, the post hoc fallacy, illusory correlation, and outcome bias. They appeal to intuition, emotion, and trust in testimony, while discouraging the critical habits—control testing, replication, disconfirmation—that define actual scientific reasoning.
Confirmation Bias “The Ghost Hunter” Darren investigates a reputedly haunted house. Out of hundreds of hours of audio, he highlights five faint knocks and whispers as “proof of spirits.” The rest—hours of silence—goes unmentioned. Because he searches only for confirmation, every ambiguous sound strengthens his conviction instead of testing it.
Texas Sharpshooter Fallacy “The Psychic Prediction” A self-proclaimed psychic says, “I sense a great disaster coming soon.” When a hurricane hits weeks later, her followers declare she foresaw it. They don’t note the dozens of previous “predictions” that failed. The mere sequence of statement and event feels causal, though it’s pure coincidence—textbook Texas Sharpshooter.
Texas Sharpshooter Fallacy “The Ancient Alien Theorist” A television host points to similarities between pyramids in Egypt and temples in Central America, claiming “they must share alien architects.” He paints a bullseye around a few coincidental resemblances while ignoring the vast cultural and architectural differences. The “pattern” is chosen after the fact to fit a predetermined story.
Motivated Reasoning “The Crop Circle Researcher” A believer studies intricate wheat-field patterns and insists they’re alien messages. The geometric precision, he says, “proves intelligence.” In reality, pranksters with boards and rope made them overnight, a long-running joke at believers’ expense. Motivated reasoning (wanting aliens to be real) led him to read human pranks as alien artistry.
No Causation Without Correlation “The Ghost Meter” A paranormal team claims spikes in electromagnetic readings indicate ghosts. When scientists measure identical fluctuations in empty buildings and around wiring, the supposed pattern vanishes. If spirits caused EMF changes, the correlation would hold consistently—but it doesn’t. Absence of correlation refutes the causal story.
Hyperactive Agency Detection Device (HADD) “The Haunted Hotel” When doors creak and lights flicker, a guest insists, “The ghost doesn’t like me.” Random drafts and wiring issues are interpreted as intentional communication. HADD—the mind’s overactive agency detector—creates the eerie sense of a conscious presence where none exists.
Causal mistakes:
Causation defined: “C causes E” (within a population P) means that C is an event or state such that, had it not been present, the probability of the effect E would have been lower. An effect depends on some cause to occur, although different causes can bring it about. In short, a cause raises the probability of its effect. Correlation is necessary but not sufficient for a causal relationship. The definition is probabilistic and counterfactual: “C causes E” does not mean that every time C happens E follows, or that most of the time C happens E follows, or that some of the time C happens E follows, or merely that C is correlated with E.
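The probabilistic reading, that a cause raises the probability of its effect, can be made concrete with a toy calculation. All the counts below are invented purely for illustration:

```python
# Hypothetical counts for a population P (invented for illustration):
# C = exposure to a possible cause, E = the effect occurring.
with_c    = {"effect": 30, "no_effect": 70}   # 100 people with C present
without_c = {"effect": 5,  "no_effect": 95}   # 100 people with C absent

p_e_given_c     = with_c["effect"] / (with_c["effect"] + with_c["no_effect"])
p_e_given_not_c = without_c["effect"] / (without_c["effect"] + without_c["no_effect"])

print(p_e_given_c, p_e_given_not_c)  # 0.3 0.05
# C raises the probability of E (0.3 > 0.05), fitting the definition -- yet E
# sometimes occurs without C, and C does not guarantee E.
```

Notice what the definition does not require: some people get E without C, and most people with C never get E at all; the cause only raises the probability.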
No Causation Without Correlation If two variables are not statistically correlated—if changes in one do not systematically vary with changes in the other—then one cannot be said to cause the other. Correlation is a necessary (though not sufficient) condition for causation: without an observed association, any claim of causal connection is groundless. The failure to check for correlation leads people to invent or assume causes where none exist.
Stockbroker: An investor claims that wearing a particular tie on trading days improves his performance, yet his winning and losing days show no consistent pattern.
Astrologer: A friend insists Sagittariuses are less loyal in relationships, but personality data show no relationship between zodiac signs and fidelity.
Teacher: A teacher concludes that students seated on the left side of the room learn faster, though grades show no trend by seating position.
CEO: A CEO believes that starting meetings with a joke boosts profits, though quarterly data show no link between humor and earnings.
Health Blogger: A blogger claims blue light from screens causes weight gain, though large-scale studies show no association between exposure and BMI.
Placebo Effect: The placebo effect occurs when improvement in health or well-being results from the belief that one is receiving effective treatment rather than from the treatment itself. The expectation of healing produces real psychological or physiological responses—such as pain relief or mood elevation—that can be mistaken for medical causation. Recognizing this effect is essential to separating genuine drug efficacy from belief-induced outcomes.
Pediatrician: Parents give their child sugar pills labeled “vitamins” and report fewer tantrums, convinced the “supplement” works wonders.
Runner: An athlete drinks a “performance booster” marketed online; feeling more confident, she runs her best race, attributing it to the product.
Back Pain Patient: A man receives a fake acupuncture session with retractable needles and reports significant pain relief.
Sleep Study: Volunteers told they took a sleep aid report deeper rest, though they actually received inert capsules.
Elderly Care Home: Residents claim that “magnetic bracelets” reduced arthritis pain, though controlled tests show no physical effect beyond expectation.
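A controlled comparison is exactly how researchers separate drug efficacy from the placebo effect: if the treated group improves little more than the placebo group, the improvement comes from expectation, not the treatment. A sketch with invented numbers:

```python
# Invented pain scores (0-10) from a hypothetical controlled trial.
placebo_group   = {"before": 8.0, "after": 6.0}    # inert pills, but belief helps
treatment_group = {"before": 8.0, "after": 5.75}   # the actual remedy

placebo_change   = placebo_group["before"] - placebo_group["after"]
treatment_change = treatment_group["before"] - treatment_group["after"]

# The remedy's real effect is only what it adds BEYOND the placebo response.
specific_effect = treatment_change - placebo_change

print(placebo_change, treatment_change, specific_effect)  # 2.0 2.25 0.25
# Most of the "improvement" appears in the placebo arm too: without the control
# group, the full 2.25-point change would be misread as the remedy's doing.
```

This is why testimonials like the ones above are so persuasive and so misleading: the patient experiences the whole 2.25-point change, not just the 0.25 the treatment itself contributed.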
Ignoring Regression to the Mean In any process with natural variability, extreme outcomes tend to be followed by more moderate ones purely by chance. When people interpret this statistical tendency as evidence of a causal explanation—“the coach’s speech fixed the slump” or “the curse caused the decline”—they ignore regression to the mean. It is not causation but mathematical inevitability.
Baseball Player: After a terrible month, a batter performs better the next month; fans credit his new lucky socks.
Student Grades: A student who aced one test performs closer to average on the next, and her teacher wrongly concludes she “stopped trying.”
Magazine Cover: Athletes who appear on a magazine cover often perform worse the following season, fueling talk of a “Sports Illustrated curse.”
Stock Analyst: A mutual fund that topped the charts one year returns to average performance the next; investors blame new management.
Hospital Administrator: Patients admitted on their worst days improve after any intervention, leading doctors to overestimate treatment success.
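That regression to the mean is a matter of pure chance can be shown with a small simulation. Assuming each month's batting average is fixed true skill plus random luck (an invented model for illustration), players picked for a terrible first month improve the next month with no coaching, rituals, or lucky socks:

```python
import random

random.seed(1)  # deterministic for the handout

# Toy model: every player has the same true skill (.300); each month's
# batting average is that skill plus random luck.
def month_average():
    return 0.300 + random.gauss(0, 0.030)

pairs = [(month_average(), month_average()) for _ in range(10_000)]

# Select the players who had a terrible first month (below .250)...
slumping = [(m1, m2) for m1, m2 in pairs if m1 < 0.250]
avg_first = sum(m1 for m1, _ in slumping) / len(slumping)
avg_next  = sum(m2 for _, m2 in slumping) / len(slumping)

print(f"first month {avg_first:.3f}, next month {avg_next:.3f}")
# ...and they bounce back toward .300 the next month by chance alone.
# Anything tried between the two months would wrongly get the credit.
```

Selecting on an extreme guarantees the rebound: an unusually bad month is partly bad luck, and luck does not repeat, so the next month lands closer to true skill.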
Confusing Cause and Effect This fallacy reverses the direction of causation, assuming that because A and B are correlated, A must cause B—when in fact B may cause A. Misidentifying the direction of influence leads to false explanations and misguided interventions.
Public Health: Observers note that depressed people use social media more and claim that “social media causes depression,” though loneliness may drive both.
Sociologist: Seeing higher condom use among promiscuous people, a researcher claims “condoms cause promiscuity” rather than the reverse.
Media Critic: A columnist argues that watching violent sports makes people aggressive, though aggressive people may be drawn to such sports.
Economist: A study finds areas with more police have more crime; a pundit concludes police cause crime, ignoring that crime rates drive police presence.
Parent: A mother insists her child’s anxiety causes him to fidget; in fact, the fidgeting habit came first, and the restlessness fuels the anxiety. She has the direction of influence reversed.
Missing a Third Cause Two variables can be correlated not because one causes the other, but because both are effects of a hidden third variable. When we ignore this possibility, we mistake coincidental correlation for causation. The true causal story often involves a background factor driving both events.
Athlete’s Ritual: A basketball player credits his pre-game handshake for wins, overlooking that both the handshake and victories result from team confidence.
Ice Cream and Drowning: Ice cream sales and drownings rise together in summer; temperature, not dessert, explains both.
Student Stress: Coffee consumption and poor sleep are correlated, but heavy workloads cause both.
Neighborhood Study: A city finds more playground injuries in wealthy areas and assumes wealth causes carelessness; higher access to playgrounds is the real driver.
Car Insurance: Drivers who buy premium coverage file more claims, leading analysts to blame the policy—when in fact riskier drivers both buy more coverage and crash more.
Accidental or Meaningless Correlations Sometimes two patterns line up purely by coincidence, without any causal or explanatory link between them. With large data sets, spurious correlations are statistically inevitable. The error arises when we take these random coincidences as meaningful evidence of causation.
Statistician: A chart shows that margarine consumption in Maine correlates with divorce rates; someone jokes, “Butter ruins marriages.”
Hospital Worker: Nurses claim more ER visits occur during full moons, though detailed records show no real increase.
Trivia Buff: A blogger notices murders by steam correlate with Miss America’s age and spins a conspiratorial theory.
Sports Fan: A team’s win rate happens to mirror national GDP trends; a fan insists the economy affects morale.
Diet Influencer: A YouTuber finds that avocado sales track UFO sightings and speculates about “energy frequencies.”
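The claim that spurious correlations are statistically inevitable in large data sets can be checked directly: generate many unrelated random series and count how many pairs happen to correlate strongly anyway. A minimal sketch (all numbers here are random noise with no causal links at all):

```python
import random
from itertools import combinations

random.seed(42)

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 200 completely independent random "variables," 10 observations each
# (think: yearly figures for margarine sales, divorces, UFO sightings...).
series = [[random.random() for _ in range(10)] for _ in range(200)]

strong = [
    (i, j) for (i, xs), (j, ys) in combinations(enumerate(series), 2)
    if abs(pearson(xs, ys)) > 0.8
]
print(len(strong))  # many 'strongly correlated' pairs, all pure coincidence
```

With nearly 20,000 pairs to compare, some striking matches are guaranteed by chance alone, which is exactly how the margarine-and-divorce charts get made.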
Overactive Causal Theorizing Humans are pattern-seeking creatures prone to inferring causes where none exist. This hyperactive agency detection—seeing intention or design in random events—produces superstition, magical thinking, and pseudoscience. The mind prefers a causal story to randomness, even when evidence is absent.
Golfer: Convinced his striped socks bring luck, he wears them to every tournament.
Nurse: A nurse believes touching patients in a certain pattern channels healing energy.
Parent: A mother attributes her child’s recovery from a cold to crystals placed under the pillow.
Hockey Player: The team grows playoff beards “to keep the streak alive,” crediting facial hair for wins.
Traveler: A passenger avoids flights on the 13th of each month to “reduce bad luck,” convinced disasters are causally patterned.
Single Cause Mistake Complex events rarely have one sufficient cause. The single cause fallacy oversimplifies by identifying one factor as “the” cause, ignoring the interaction of multiple influences. It produces polarized arguments and poor policy by overlooking causal complexity.
Political Analyst: Commentators debate why a party lost an election—“It was wokeness,” “It was inflation”—as if only one factor mattered.
Teacher: A student fails a class, and the instructor blames “laziness,” ignoring illness, family stress, and poor teaching materials.
Public Health: Officials blame obesity solely on personal choice, ignoring genetics, environment, and economic conditions.
Economist: A recession is attributed entirely to interest rates, disregarding global trade and consumer debt.
Historian: A war is said to have occurred “because of nationalism,” omitting alliances, resource competition, and leadership decisions.
Post Hoc Ergo Propter Hoc Latin for “after this, therefore because of this,” this fallacy assumes that because one event follows another, the first must have caused the second. It confuses temporal sequence with causation, a staple error in superstition and pseudoscience.
Baseball Fan: After wearing his lucky cap to a victory, a fan insists the hat caused the win.
Student: A student studies with a new playlist and aces the exam, concluding the music “boosted intelligence.”
Homeowner: After hanging a charm, household arguments stop; she credits the charm rather than changing circumstances.
Politician: A mayor claims his economic plan “created jobs” because employment rose soon after he took office, ignoring prior national trends.
Parent: A child gets better after drinking herbal tea, and the parent believes the tea cured the illness that would have resolved naturally.
No Causation Without Correlation Correlation is necessary but not sufficient for causation. If two variables are truly related by cause and effect, their values will vary together in a measurable way. When no correlation exists—when the supposed cause leaves no statistical trace—it’s implausible to say it brings about the effect. For example, if cell phone users and non-users get cancer at the same rate, cell phone use isn’t causing cancer. A real cause must produce an observable pattern in the data.
Cell Phones and Brain Cancer: For years, people claimed radiation from cell phones causes brain tumors. Researchers tracked millions of phone users over two decades, comparing tumor rates among heavy users, light users, and non-users. The rates were statistically identical across all groups. No correlation, no causation: if electromagnetic radiation caused tumors, higher use would predict higher incidence.
Vitamin C and the Common Cold: A supplement company insists that taking 1,000 mg of vitamin C daily “prevents colds.” Medical researchers ran double-blind studies on thousands of participants for several winters. Those taking vitamin C caught colds at virtually the same rate as those taking placebos. No measurable difference means no causal prevention effect.
Full Moons and Crime: People often claim “crazy things happen during a full moon.” Criminologists analyzed decades of police data comparing full-moon nights to others. Crime rates, arrests, and emergency calls were statistically identical. No correlation, no causation: if lunar phases influenced behavior, rates would fluctuate with the moon, but they don’t.
Ice Cream Sales and Shark Attacks: During summer, ice-cream sales and shark attacks both rise sharply. The two variables are correlated, but the shared cause is warmer weather bringing more people to beaches. This is the converse case: correlation can exist without causation, because correlation is necessary but not sufficient for a causal link.
Vaccines and Autism: Anti-vaccine activists claimed vaccines cause autism. Dozens of large-scale studies compared vaccinated and unvaccinated children. Autism rates were identical in both groups—no correlation whatsoever. The absence of correlation directly refutes the causal claim.
Stretching Before Exercise and Injury Prevention: Coaches long claimed that pre-workout stretching prevents injuries. Controlled studies measured injury rates among athletes who stretch and those who don’t. The rates were the same, and performance sometimes declined. No correlation means stretching isn’t the protective cause people assumed.
Acupuncture and Cancer Cure Claims: Alternative healers claim acupuncture “cures cancer.” When remission and survival rates are compared between acupuncture users and non-users, outcomes are indistinguishable. No correlation, no causation: if acupuncture cured cancer, those patients would survive longer, but they don’t.
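The pattern running through these examples (equal outcome rates with and without the supposed cause) can be sketched in a few lines. The incidence number below is invented purely for illustration:

```python
import random

random.seed(42)

# Hypothetical illustration: if heavy phone users and non-users develop
# tumors at the same rate, exposure leaves no statistical trace.
n = 100_000
rate = 0.001  # identical incidence in both groups (made-up number)

# Each pair is (exposed?, outcome?): 1/0 flags for group membership and illness.
users = [(1, 1 if random.random() < rate else 0) for _ in range(n)]
nonusers = [(0, 1 if random.random() < rate else 0) for _ in range(n)]
data = users + nonusers

def pearson(pairs):
    """Plain Pearson correlation coefficient for (x, y) pairs."""
    m = len(pairs)
    mx = sum(x for x, _ in pairs) / m
    my = sum(y for _, y in pairs) / m
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x, _ in pairs) ** 0.5
    sy = sum((y - my) ** 2 for _, y in pairs) ** 0.5
    return cov / (sx * sy)

r = pearson(data)
print(round(r, 3))  # hovers near 0: no correlation, hence no causal trace
```

When the supposed cause leaves the outcome rate untouched, the correlation coefficient sits at zero, exactly what the studies above found.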
Correlation Does Not Imply Causation: A correlation is a consistent statistical association between two variables — they rise and fall together (positive correlation) or move in opposite directions (negative correlation). But a mere correlation doesn’t prove one causes the other. Two variables can move together for many reasons that have nothing to do with causation.
Ice Cream and Drowning: Researchers notice that ice cream sales and drowning deaths both increase sharply in the summer. Someone claims, “Ice cream causes drowning.” In fact, a third variable — hot weather — causes both: it drives people to buy ice cream and to swim more often. The correlation is real but non-causal.
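A minimal simulation of this pattern, with every coefficient made up for illustration: daily temperature drives both ice cream sales and drownings, and the two end up strongly correlated even though neither influences the other.

```python
import random

random.seed(0)

# Toy model of the ice cream / drowning pattern: temperature (the confounder)
# drives both variables; neither causes the other. All coefficients invented.
days = 365
temp = [random.gauss(20, 8) for _ in range(days)]
sales = [50 + 3 * t + random.gauss(0, 10) for t in temp]       # depends only on temp
drownings = [0.5 + 0.1 * t + random.gauss(0, 1) for t in temp]  # depends only on temp

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(sales, drownings), 2))  # strongly positive, yet non-causal
```

The two series never touch each other in the code, yet their correlation is large: a reminder that a statistical association between two variables is exactly what a shared cause produces.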
Police Sirens and Crime: Cities with more police sirens have higher crime rates. A politician concludes, “Police presence causes crime.” In reality, the causal direction is reversed: crime causes increased police response, not the other way around.
Shoe Size and Reading Ability: Data show that children with larger shoe sizes read better. A superficial analysis says, “Big feet cause literacy.” The hidden variable is age — older kids both read better and have bigger feet. The correlation exists but isn’t causal.
Shark Attacks and Global Temperature: A tabloid claims that rising ocean temperatures are “provoking sharks to attack humans more often.” But closer study shows that shark attacks correlate better with increased beach attendance — more people in the water, not angrier sharks. The apparent causal link is spurious.
Churches and Crime Rates: Some towns with more churches also report more crime. Someone argues, “Religion increases crime.” The confounder is population size — larger towns have more of everything: more churches and more crime. The correlation reflects scale, not moral decay.
Stork Population and Birth Rates: In rural Europe, data once showed a correlation between stork populations and human birth rates. Folk humor suggested, “Storks really do bring babies.” Both variables were actually linked to rurality — as villages urbanized, both storks and large families declined. The shared trend, not storks, explained the correlation.
Eating Breakfast and Academic Performance: Students who eat breakfast tend to have higher grades. Marketers claim, “Breakfast causes intelligence.” But the underlying cause is often socioeconomic stability — students from well-supported homes both eat regular meals and perform better academically.
Firefighters and Fire Damage: A report shows that houses with more firefighters present suffer more damage. One might wrongly infer, “Firefighters make fires worse.” In truth, bigger fires require more firefighters — the size of the fire drives both the number of responders and the damage.