What is Being Rational?

Being a good critical thinker means being more rational.
Being rational means making better decisions and forming better beliefs in service to your goals. Making better decisions and forming better beliefs means gathering better evidence, in both quantity and quality. It means doing a better job of evaluating that evidence. It means making better logical inferences, committing fewer fallacies and reasoning errors, and being less prone to biases.
We will treat being rational as a continuum, not as a simple binary property. A person or a belief isn’t merely rational or irrational. Rather, someone can be more or less rational in the way that they arrived at a belief or chose their actions. A particular belief, conclusion, or choice to act could be more or less rational, depending on how the person gathered and assessed their information to arrive there.
Furthermore, we can fail to be rational in many ways. Here we will focus on two. First, a person can do a better or worse job of gathering the information that leads them to draw a conclusion about what’s true: Smith might consult an astrologer for investment advice, or Smith might get a degree in stock market finance before deciding what to invest in. Second, a person might fail or falter in applying the canons of deductive and inductive logic to that information in the process of determining what it implies. There are well-understood rules for drawing logical or probabilistic inferences from some information to a conclusion. Just as you can make a mistake on your math homework by failing to apply the rules correctly, you can make a reasoning mistake in moving from the evidence to the conclusion.
Being rational does not mean merely believing true conclusions, although having well-justified, true beliefs is a goal of critical thinking. Being a good critical thinker concerns how one arrives at those conclusions. A person could accidentally arrive at the right, true conclusion from a bad reasoning process. A juror, for example, might conclude that the defendant is guilty because of their race; the defendant may in fact be guilty, but their race provides no reasonable grounds for concluding their guilt.
Rationality: the capacity for devising effective means to achieve your ends. It’s for figuring out how to accomplish your goals. Whatever goal you might have, short term or long term: you might want to get better at X, Y, or Z so that ultimately you can be happy, successful, safe, or prosperous. Having better, more rational beliefs, in the senses suggested above, facilitates achieving your goals. Rationality, or critical reasoning, is the faculty you have for discovering or devising a path of action and the intermediate steps for achieving those goals. Our critical reasoning is poor when it fails to help us solve problems, meet challenges, and achieve our goals, or when it does so inefficiently; it is better when it finds efficient, effective, successful solutions to those ends.
Insofar as we want our thoughts, beliefs, and plans to track reality, we need to have beliefs that are true, well-justified, and that are proportional to the quality and quantity of the evidence. Very generally, “being rational” is believing that which is best supported by the evidence. Being irrational is failing to believe that which is best supported by the evidence.
What is reasoning for?
Viewed through this lens, it is natural to ask what reasoning is for from an evolutionary perspective. The most plausible general answer is that human reasoning evolved primarily to support flexible action control in complex, uncertain environments. In particular, reasoning enables offline evaluation of actions: instead of acting immediately and learning only from the consequences, we can pause, consider alternatives, and mentally explore different courses of action before committing ourselves. This involves the ability to simulate possible actions, anticipate likely causal consequences, compare options, and select among them when trial-and-error learning would be costly, slow, or dangerous. In short, reasoning is centrally a system for planning—for working out how to act effectively in situations where simple reflexes or fixed habits are not enough.
This account fits well with what we know about how natural selection shapes cognition. Evolution favors traits that improve behavioral performance—that help organisms survive, reproduce, and manage the demands of their environments. Reasoning is a relatively slow, energy-intensive, and sometimes error-prone process, so it would not be favored as a default strategy in all circumstances. Instead, it becomes adaptive in settings where environments are unpredictable, payoffs are delayed, mistakes are expensive, and simple heuristics break down. In such contexts, the ability to reason—to weigh options, foresee consequences, and choose among competing plans—can substantially improve outcomes, even if it does not guarantee perfect accuracy.
Importantly, this does not imply that reasoning is indifferent to truth. Accurate beliefs about the world are often crucial for successful action, and better representations typically support better plans. But from an evolutionary standpoint, the primary pressure is not for abstract truth-tracking as an end in itself; it is for beliefs and inferences that are good enough to guide effective action under real-world constraints. This helps explain why human reasoning is powerful but imperfect, sensitive to context, and deeply shaped by goals. As hominid evolution progressed—through increasingly complex tool use, hunting and tracking, social coordination, and long-term resource management—the demands on planning and decision-making intensified. The cognitive capacities we now call “reasoning” were gradually elaborated to meet those demands, and later extended and disciplined by language, culture, and education into the forms of critical thinking studied today.
The evolution of rationality
If we focus on what the evidence supports and avoid speculative leaps, the emergence of human reasoning appears gradual, relatively late, and cumulative, rather than the result of a single evolutionary “upgrade.” The cognitive capacities that underlie reasoning accumulated over long stretches of hominid evolution as brains became larger, more metabolically expensive, and more capable of supporting complex forms of control, memory, and planning.
Early members of the genus Homo, more than two million years ago, already show modest but significant increases in brain volume compared to earlier hominins. These increases came at a real cost: neural tissue is energetically expensive, requiring a large and stable caloric supply. The fact that brain size expanded at all indicates that the benefits outweighed those costs. Archaeological evidence from stone tools and butchery suggests improved working memory, better causal tracking, and limited forward planning. These abilities supported multi-step actions and short-term anticipation, allowing individuals to solve practical problems more effectively in their environments. Still, cognition at this stage was largely online—tightly coupled to immediate perception and action—rather than reflective or exploratory of multiple hypothetical futures.
More substantial brain expansion occurs with later hominins, such as Homo erectus and Homo heidelbergensis, between roughly 1.5 million and 300,000 years ago. Brains in this period approach or exceed 1,000 cubic centimeters, bringing increased computational capacity but also sharply increased metabolic demands. These costs make sense only if larger brains enabled new kinds of problem-solving. Tool technologies now required extended sequences of action and delayed payoffs, hunting demanded coordination over longer time horizons, and hominins occupied increasingly variable and uncertain environments. At this stage, selection plausibly favored cognitive systems capable of offline planning—mentally simulating future actions and outcomes rather than relying solely on trial-and-error. This marks the emergence of what can reasonably be called proto-reasoning.
By around 300,000 to 100,000 years ago, early Homo sapiens reach brain volumes within the modern human range. Although average brain size largely plateaus after this point, behavioral flexibility continues to increase. This suggests that the gains now come less from raw volume and more from how neural resources are organized and deployed. The archaeological record shows rapid innovation, regional technological diversity, and successful adaptation to a wide range of ecological niches. These patterns are consistent with more robust reasoning capacities: the ability to compare alternative courses of action, consider counterfactual possibilities, and represent goals and subgoals hierarchically. Reasoning here is still primarily practical—aimed at guiding action—but it is increasingly powerful and flexible.
Later cultural developments, especially during the Upper Paleolithic period beginning around 70,000 years ago, amplify these reasoning capacities without further biological change. Language, shared symbols, and social learning allow plans and strategies to be externalized, criticized, and refined across individuals and generations. Reasoning becomes collectively scaffolded, extending the reach of individual brains through communication and culture. The energetic costs of large brains now pay even greater dividends, as neural capacities are leveraged through social coordination and cumulative knowledge.
Crucially, the explicit forms of reasoning we now associate with logic, proof, and epistemology emerge only in the last few thousand years, alongside writing, mathematics, and formal education. (And our self-awareness as a species that employs these sorts of cognitive tools only starts becoming clear to us a few thousand years ago.) These developments do not correspond to increases in brain volume or new biological adaptations. Instead, they represent cultural technologies imposed on a cognitive system that evolved to support effective action under uncertainty. Larger, costlier brains made advanced reasoning possible, but they did not automatically produce formal rationality.
Across hominid evolution, then, increases in brain volume track increasing demands for planning depth, problem-solving ability, and behavioral flexibility. Neural tissue is expensive, but it confers the ability to reason more effectively about the world and to succeed within it. Modern critical thinking builds on this evolutionary foundation, refining and constraining a planning system that evolved to help our ancestors choose actions wisely in complex and risky environments.
Some Epistemological Basics for our Course
Belief: What is a belief? To believe a claim is to assent to it or to have an attitude towards it such that you think it is true. It may or may not actually be true, but to believe it is to think that it is. That is, a person can have a belief that is actually false, but to believe it is to think that it is true. So, many people believed that the earth was flat, for example. Some may still believe it. They have a false belief. Someone who believes that "A mile is 5,280 feet," thinks that it is true that there are 5,280 feet in a mile, and this belief happens to actually be true. Belief is subjective because it can vary from person to person. Beliefs are also mind-dependent; they cannot exist without minds, like ours, to think or believe them. If there were no minds, then there would be no beliefs. The truth, by contrast, is objective; it doesn't vary from person to person. And it is mind-independent; there would be truths, like “2 + 2 = 4” or “The first electron shell can only hold two electrons,” whether there were beings like us with minds around or not. See the section below for more about truth.
A belief is an attitude of assent or a disposition of agreement toward a proposition. A proposition is what a sentence in a language expresses. So two people, one expressing their belief “Los solteros no están casados,” in Spanish, and another expressing “Bachelors are unmarried,” in English, have the same belief. And a being could have a belief, an attitude of regarding something as true, even if they don’t have language. The family dog believes that you have come home when he hears the car in the driveway and gets excited for you to open the door.
If there were no subjects to think them, there would be no beliefs. If there were no minds in the universe, there would be no beliefs. That is, belief is mind-dependent. Smith, for example, believes, "Americans never went to the moon." Smith also believes, "There are 12 months on a yearly calendar." Smith's first belief is false, although she takes it to be true. And Smith's second belief is true. Beliefs can align with, or diverge from, the objective state of affairs in the world.
Beliefs aren't true simply in virtue of being believed. The widespread conviction before Copernicus that the Sun orbits the Earth did not make that claim true. The Earth was orbiting the Sun while lots of people on Earth had an erroneous belief about reality in their heads. So claims like, "the truth is whatever everyone believes," or "it was true then that the Sun orbited the Earth" are incoherent. There have been countless cases of people, even large groups of them, believing things that were, in fact, false. In the Middle Ages, people believed falsely that the bubonic plague was caused by a conjunction of three planets in the solar system. It's also incoherent for someone to say that something is "true for me." To believe a claim is to think that it is true, but that should not be confused with its actually being true. This distinction between belief and truth should be maintained so that we can coherently describe these situations.
Here are some more examples to help us understand the distinctions and definitions. I believe these sentences:
"2 + 2 = 4"
"The Norman invasion occurred in 1066."
"California is a state in the United States."
"Smoking causes cancer."
"The Earth is spherical."
Insofar as I believe them, I think they are true. I can't believe a claim and also think it is false. To believe it is to take it to be true. So I do not believe that "Bubonic plague is caused by demon possession," although there have been some people who believed that in the 1300s. Furthermore, I do not believe, "Scorpios make better parents than Libras," "The American moon landing was faked," or "It is possible for unassisted humans to fly if they concentrate hard enough." I think all of those claims are false. To believe a claim is to think that it is true, but the claim itself can be either true or false. That is, people can, and often do, have false beliefs, even though they don't seem that way to them. I can also have beliefs that some claims are false: I believe, “It is false that the speed of light is 100 mph.” My beliefs about what is false are also beliefs, and they can be correct or mistaken, depending on the mind-independent truth outside my head.
Beliefs are subjective, or they vary from person to person, whereas objective reality is what it is no matter what we think. Beliefs are mind-dependent--they must be in a mind to exist. And to believe a claim is to take it to be true, even though it might not actually be true.
Also, saying that a person believes a claim should not be taken to mean that they have little conviction about it, or that the claim is doubtful or weakly held. We will include all claims that a person would agree to, whether they are trivial or momentous, among their set of beliefs. You can have varying levels of conviction or certainty about a claim, but to believe it is to take it to be true. So you believe that “February follows January.” And you believe that the chair you are sitting on is not made of cheese. And you believe that when the sun is shining, it is day time. But you might also believe, with less conviction, that “My flight will be on time.”
Belief: to believe a claim is to assent to it or to have an attitude toward it such that you think it is true. Belief is subjective, mind-dependent, and may or may not be true.
What is truth? The truth is what is the case or what the actual state of affairs in the world is. The truth is the state of the real, mind-independent world. We form beliefs about it on the basis of our information. The truth is objective—it does not vary from person to person. Water is composed of H2O, no matter what humans have believed or not believed about it. Truth is mind-independent. It does not depend upon people with minds to exist or to be true. Philip K. Dick said, “A real thing is something that doesn’t go away when you stop believing in it.” Truth is the real world. And there are many beliefs that are not true. The truth remains what it is whether we form beliefs about it or not. When people believed that the earth was flat, in fact, it was not. The truth is that the earth is spherical. The Earth orbits the Sun. Evil demon possession does not cause bubonic plague.
Furthermore, since it is mind-independent, there are truths that are not believed by anyone. There are truths about the location, speed, and mass of a rock that is floating in space around the Sun that no one will ever confront or learn, for example. Furthermore, since the truth is the real, mind-independent state of the world, claims like, "the truth is relative," or "there is no objective truth, only belief" are incoherent. Suppose Smith says, "there are no objective truths, there are only subjective beliefs." Paradoxically, his assertion undermines itself. He is asserting that his claim is actually, objectively true; it is objectively true that there are no objective truths. The real state of affairs in the world is that there are no real or true states of affairs. Relativism about truth is self-contradictory and incoherent.
Propositions about the truth: A sentence is true if what it expresses matches or corresponds to what is the case in the world. So, "The earth is flat." is false, but "The earth is spherical." is true. And "The earth orbits the Sun," is true, and "The earth is at the center of the universe with the Sun and planets orbiting around it," is false. We might be tempted to say that when someone believes a false claim, these sentences were "true for them." To use "truth" in this way would conflict with the definition of truth above, and it is not intelligible. What these examples make clear is that belief and truth are distinct. When a person S believes some sentence, then S takes it to be true. S assents to that sentence; S believes that it is true. But whether or not the sentence is actually true is an independent matter from whether or not S believes it. There are countless cases where we have believed a sentence (taken it to be true), but we were wrong because it was in fact false--like the flat earth case above. And there have been countless cases where no one believes something, but it is true. There is a cure for cancer out there, we presume, but no one has discovered it yet.
Believing does not make a claim true. And notice that if one person S believes that the earth is flat, and another person R believes that the earth is round, they both take different things to be true. But only one of them is right, and the other one is mistaken. What they both believe cannot be true because what each one believes rules out, or is incompatible with what the other believes. And what makes a belief actually true or false is the mind-independent state of the world itself. The truth is out there, outside the head.
Correspondence Principle (CP): A declarative sentence is true just in case it corresponds to the facts as they actually are. A declarative sentence is false just in case it fails to correspond to the facts as they actually are. So, "The Earth is spherical," is true because there is a planet in reality, the Earth, that is spherically shaped.
Three Attitudes Toward Propositions
Whenever a person considers any proposition, they must take one and only one of three attitudes toward it. That person must believe the proposition, or disbelieve the proposition, or suspend judgment about the proposition. A person cannot at any time have more than one of these attitudes toward one proposition. So you either believe, disbelieve, or suspend judgment about the proposition, "The Earth is flat." You might be able to acknowledge why someone else might see it differently; you might even be able to recognize the evidence that would lead them to believe something else. But once we have clarified a claim and you understand it, you will be in a position of either believing it, disbelieving it, or suspending judgment about it. That is, you cannot both believe and disbelieve it at the same time. To believe it is to think it is true, and to disbelieve it is to think that it is false. It's not possible to believe both that, "Bachelors are unmarried," and "Bachelors can be married," for example.
We can also see that people believe or disbelieve claims with different degrees of conviction. You might believe both that "the woman who raised me is my mother," and "The Raiders will win the next Super Bowl," but you believe the first one with more conviction, more assurance, and more confidence than the second. You could, however, be wrong about both of them. People can also change their minds over time, such that they believe a proposition at one point and then later come to disbelieve it. I believed "Lance Armstrong is not cheating in the Tour de France," at one point, but evidence eventually led me to disbelieve it. What we believe and how much conviction we believe it with varies over time and circumstance.
We will adopt this Rational Belief Principle: In general, to be rational, our beliefs should be sensitive to the evidence, and our conviction about them should be proportional to the strength of that evidence. So a rational person's evidence will determine which of the three attitudes they take toward a proposition:
Believe: If a person’s evidence on the whole concerning a proposition supports that proposition, then it is rational for the person to believe the proposition.
Disbelieve: If the person’s evidence goes against the proposition, then it is rational for the person to disbelieve the proposition.
Suspend Judgment: And if the person’s evidence is neutral, then it is rational for the person to suspend judgment concerning the proposition.
Furthermore, we should acknowledge that the better your evidence, the more sure you should be, and the weaker your evidence, the weaker your conviction about a claim should be. That is, we should proportion our belief or disbelief in response to the evidence. When the quantity and quality of our evidence on the whole favors a belief, we should believe it with more conviction. When our evidence is weaker, then our credence should drop. And as our evidence against a claim accumulates, we should be more confident in disbelieving it.
We will call this the Proportional Strength of Belief Principle. It is rational to proportion the strength of one’s belief to the strength of one’s evidence. The stronger one’s evidence for a proposition is, the stronger one’s belief in it should be.
We can represent these three attitudes and the point about proportional strength as a continuum, where we attach 0% probability to a claim that we disbelieve completely, and we assign 100% conviction to a claim that we believe with complete certainty:
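The continuum and the three attitudes can be sketched as a small function. This is only an illustration: the 0.4 and 0.6 cutoffs are hypothetical choices for the sketch, not part of the principle itself, which says nothing about where exactly belief shades into suspension of judgment.

```python
def attitude(credence):
    """Map a degree of conviction, from 0.0 (certain the claim is false)
    to 1.0 (certain it is true), to one of the three attitudes.
    The 0.4/0.6 thresholds are illustrative, not part of the principle."""
    if credence >= 0.6:
        return "believe"
    if credence <= 0.4:
        return "disbelieve"
    return "suspend judgment"

print(attitude(0.95))  # believe, and with high conviction
print(attitude(0.5))   # suspend judgment: the evidence is neutral
```

The same credence scale also captures the Proportional Strength of Belief Principle: 0.95 and 0.65 both fall under "believe," but the former is held with much stronger conviction.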
A rational person revises their beliefs in the light of new evidence, and that evidence could lead them to change their attitude about the proposition. If the airline announces that there is bad weather in Denver, you might change from believing to suspending judgment about your flight being on time. And when the Kings win an important game, you might adjust your view that they will be in the playoffs. A person whose beliefs are not responsive to the evidence and who doesn’t proportion the strength of their belief to it is dogmatic. A good critical thinker avoids dogmatism.
A person's evidence can also lead them astray. They can believe things that are false and thus get the wrong conclusion, or they can make mistakes about what is implied by their evidence. We can see that given the evidence they have, what they believe makes sense, even though the truth is different from what they believe. So it was reasonable for an illiterate 14th century peasant in Europe, for example, to believe that "Bubonic plague is caused by demon possession," even though we know that it is caused by the bacterial infection Yersinia pestis. They were being reasonable given what they knew and their situation. This position has a name:
You can be well-justified in believing a false claim. Fallibilism is the view that it is possible for it to be rational or reasonable for a person to believe a proposition even though it is false. So under the right circumstances, many false claims could be quite reasonable to believe. A member of an ancient bronze age tribe, living in the wilderness, might reasonably believe that the earth is flat. A child raised in a cult with no access to the outside world or other information might come to believe, reasonably, that the cult leader is a divine being. It is important to reflect, therefore, on what has to happen to a person and with the information that they have for a belief they have to cease being reasonable. This is a surprising view; note that we have separated being rational from believing the truth. If all of someone's evidence points to some conclusion's being true, then the rational attitude for them to take is to believe that conclusion, even if it turns out that it is false.
One reason to accept this view is that the implications of rejecting it are absurd. Suppose we reject fallibilism and assert that rational beliefs must be true. That would mean that a person in history who was thoughtful, critical, and knowledgeable, who knew everything there was to know at the time, who was operating within the best standards of evidence and rationality of their era, and who nevertheless ended up believing something false, was irrational. Aristotle, one of the greatest minds in history, who knew as much or more than anyone in his time and place, would be irrational in his belief that "the planets orbit around the Earth." There simply was no way that Aristotle, operating with the knowledge, the tools, and the resources that were available to him, could have come to believe in a heliocentric system. And it seems absurd to conclude that Aristotle was irrational. If Aristotle was not rational, then no one ever was or could be rational. We need to adopt a more useful and practical notion of rationality.
One Truth Value Principle. Every proposition has exactly one truth value. It is either true or false, but not both. So, the proposition, "The Earth is the third planet in orbit around the Sun," is true. It cannot be both true and false at the same time in the same way. "Bubonic plague is caused by demon possession," is false, not true, even though there were people in the 1300s who believed that it was true. The proposition itself has only one truth value. No proposition can have more than one truth value.
Logical Consistency. Another foundational principle of rationality and critical thinking is the law of non-contradiction. Aristotle said, “Nothing can both be and not be, in the same respect, at the same time.” And, "an affirmation is a statement affirming something of something, a negation is a statement denying something of something…It is clear that for every affirmation there is an opposite negation, and for every negation there is an opposite affirmation…Let us call an affirmation and a negation which are opposite a contradiction." This idea is represented logically as ~(P & ~P). The tilde ~ means “not.” So this expression in formal logic says: it is not the case that both P (a proposition) and not-P (the denial of that proposition) are true.
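That ~(P & ~P) holds no matter which truth value P takes can be checked mechanically. Here is a brief sketch that runs through both possible truth values:

```python
# The law of non-contradiction: ~(P & ~P) comes out true under
# every possible truth value of the proposition P.
for P in (True, False):
    assert not (P and not P)

print("~(P & ~P) holds for every truth value of P")
```

A claim that is true under every assignment of truth values to its parts, as this one is, is called a tautology.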
Roughly, it is not possible for a thing to both possess and not possess a property at the same time in the same way. An assertion cannot be both true and not true at the same time. It corresponds to the One Truth Value Principle but it concerns reality, not beliefs. The idea is that a thing in the world either has a property or it does not. It is either one way or not that way, but it cannot both have a property and not have it at the same time in the same way.
The affirmation and the negation cannot both be true of an object. To make an assertion of the form X is P, like "the ball is blue," is to claim that the ball has the property of blue and that it is false that the ball is non-blue. If we abandon the law of non-contradiction, then there is no meaningful difference between an assertion and its opposite. That is, our assertions cease to have meaning altogether. My claim that, "Today is Tuesday," doesn't say anything unless it denies some other state of affairs like, "Today is Wednesday." The law of non-contradiction is axiomatic to reason; that is, it is one of the most fundamental principles upon which reasoning and rationality are based. It cannot be argued for (it is the principle that makes arguments possible), nor can it be plausibly denied (to deny it is already to assume it).
When two sentences make assertions about the world that are incompatible, we say they are logically inconsistent.
A set of sentences is logically consistent if and only if it is possible for all of them to be true (at the same time). The set of sentences is logically inconsistent if it is not possible for all of them to be true at the same time. When a sentence correctly describes the state of the world in some way, it asserts that the world is that way, not some other way. If it's true that Des Moines is in Iowa, then some other state of things such as “Des Moines is in Connecticut” is false. One sentence's being true rules out other descriptions that don't match up with it. More explicitly: ~(Des Moines is in Iowa & Des Moines is not in Iowa). The law of non-contradiction, the most fundamental law of logic, rules out that Des Moines can both be in and not be in Iowa.
So when we consider what one proposition asserts, there are other propositions that are incompatible with it. If “Professor McCormick is a mammal,” then it is not true that “McCormick is a reptile.” But if it is true that McCormick is a mammal, then it is also true that McCormick is a vertebrate. “McCormick is an invertebrate” is not logically consistent with “McCormick is a mammal.”
Propositions can be explicitly contradictory or implicitly contradictory. “McCormick is married,” and “McCormick is not married” is an explicitly contradictory pair of propositions because the second sentence denies exactly and explicitly what the first proposition says. But “McCormick is a mammal” and “McCormick is an invertebrate” is implicitly contradictory. We must add the additional truth, “All mammals are vertebrates” to infer that “McCormick is a vertebrate,” and that claim, which was only implicit before, now produces an explicit contradiction.
Furthermore, when we consider sets of propositions, it could be that neither one of them is true. Consistency and truth are different issues. So "aliens from Mutara all weigh more than 100 kilograms" and "Shreeek, an alien from Mutara, weighs 10 kilograms," are inconsistent, even though they are both fictions.
Test for Logical Consistency/Contradiction: One way to test for consistency is to imagine that one of the sentences is true and then ask, "does the state of affairs described by that sentence rule out or prevent the other sentence from being true?"
So consider these sets of propositions:
1) No students passed the exam.
2) Some students passed the exam. (Logically inconsistent)
1) The earth orbits around the Sun.
2) Pluto orbits around the Sun. (Logically consistent)
1) All critical thinkers are rational.
2) No rational things are critical thinkers. (Logically inconsistent)
1) The door is locked.
2) The store is closed. (Logically consistent)
1) If you win the lottery then you will be rich.
2) Elon Musk is rich.
3) Elon Musk did not win the lottery. (Logically consistent)
1) The earth is spherical.
2) The earth is flat. (Logically inconsistent)
1) God exists.
2) God does not exist. (Logically inconsistent)
1) If the door is locked, then the store is closed.
2) The door is locked.
3) The store is not closed. (Logically inconsistent)
1) If you win the lottery, then you will be rich.
2) Smith is not rich.
3) Smith won the lottery. (Logically inconsistent)
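The consistency test described above can be mechanized as a brute-force search over truth-value assignments: a set of sentences is consistent just in case some assignment makes every sentence true. The sketch below is only an illustration, not part of the chapter's required material; the letters W, R, L, and C are labels I have chosen for the lottery and locked-door examples.

```python
from itertools import product

def consistent(claims, variables):
    """Return True if some assignment of truth values makes every claim true."""
    for values in product([True, False], repeat=len(variables)):
        world = dict(zip(variables, values))
        if all(claim(world) for claim in claims):
            return True
    return False

# Let W = "Smith won the lottery", R = "Smith is rich".
lottery_set = [
    lambda w: (not w["W"]) or w["R"],  # If you win the lottery, you will be rich.
    lambda w: not w["R"],              # Smith is not rich.
    lambda w: w["W"],                  # Smith won the lottery.
]
print(consistent(lottery_set, ["W", "R"]))  # False: logically inconsistent

# Let L = "the door is locked", C = "the store is closed".
store_set = [
    lambda w: w["L"],                  # The door is locked.
    lambda w: w["C"],                  # The store is closed.
]
print(consistent(store_set, ["L", "C"]))    # True: both can be true together
```

This is just the textbook test made exhaustive: instead of imagining one scenario, the loop checks every possible way of assigning true and false to the sentences' parts.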
Here are a couple of cases where we will employ concepts more strictly than common linguistic convention does. First, when we encounter a conditional sentence of the form "If P then Q," like "If you are in Reno, then you are in Nevada," we will not take it to imply "If Q then P." That is, if you are in Nevada, that doesn't imply that you are in Reno, because there are other, non-Reno ways to be in Nevada. So in general, "If P then Q" does not imply, nor is it equivalent to, "If Q then P." Also, when we state that "Some As are Bs," we will not infer that to mean that "Not all As are Bs." So "Some CSUS students are female" and "All CSUS students are female" are logically consistent. They both can be true. "Some" doesn't imply "Not all." These two points can deviate from some familiar linguistic practice, but it's useful, when we're being logical and careful, to tighten up our definitions and take care not to read in extra inferences that are not explicitly stated.
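The point that "If P then Q" is not equivalent to its converse "If Q then P" can be verified with a small truth table. Here is a minimal sketch (the helper name implies is my own, for illustration only):

```python
from itertools import product

# "If P then Q" is false only in the case where P is true and Q is false.
def implies(p, q):
    return (not p) or q

# Print every row of the truth table for the conditional and its converse.
for p, q in product([True, False], repeat=2):
    print(p, q, implies(p, q), implies(q, p))

# The rows where P and Q have different truth values show the two are not
# equivalent: being in Nevada (Q) without being in Reno (P) makes
# "If P then Q" true but "If Q then P" false.
```

The rows with P true, Q false and P false, Q true come out differently for the two conditionals, which is exactly why the inference from "If P then Q" to "If Q then P" is a mistake.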
We will also adopt the Epistemic Culpability Principle: if a person is epistemically culpable for believing or disbelieving a claim, then they can rationally be blamed or faulted for that belief or disbelief. The notion of culpability is borrowed from ethics. When your friend knowingly lies to you, we would fault her. We would say that she has done something wrong. She ought not to have done it. We use prescriptive language--"should" and "ought"--instead of merely descriptive terms about the facts. She is morally culpable for not telling you the truth. Someone is epistemically culpable when she has violated some duty or responsibility to be reasonable, rational, or thoughtful. There is a prescriptive presumption that people ought to be reasonable or rational, so when they fail and they could have acknowledged the justified, reasonable conclusion, we find fault in them. If someone has thought about the topic extensively, gathered evidence carefully, and correctly applied logical inferences, then we typically find him to be epistemically inculpable, or without blame, for the resulting belief. See fallibilism above.
Suppose someone you know, who otherwise seems reasonable and mentally healthy, who has had a similar education to you, and who seems to know and believe many of the other things you believe, asserts, and seems to believe, "The earth is flat." You'd probably find them to be epistemically culpable. You'd say that it's irrational for them to believe that, that they shouldn't believe it, that they should believe that the earth is round, and that they are making a mistake and should know better. They should change this belief in order to be more reasonable or rational.
What is knowledge? Classically, a person is thought to know a claim if and only if three conditions are met:
a. they believe the claim,
b. the claim is true, and
c. they have justification for believing the claim. If any of these conditions fails, then the person doesn't have knowledge. They could have a true belief without justification, or a false belief with justification, or a false belief without justification, or justification without believing, and so on. And none of these would be knowledge. Only when it is true that the earth is round, I believe it, and I have justification for believing it does it become knowledge that the earth is round.
Justification: A person S's justification for a claim p is the set of reasons, other beliefs, or considerations that S takes to support or imply p. S can have justification for a claim and conclude that it is true, yet it may still turn out to be false (fallibilism). But when S acquires a justification for p that S takes to be adequate to imply, prove, or indicate p, then S takes p to be true. It can also happen that S thinks some piece of evidence is sufficient to prove the truth of p when it is not. That is, p may not actually be justified even though S thinks it is. One's justification can have varying levels of objective success in actually proving p, and this success should be considered separately from whether S takes her justification to be sufficient.
Logical Possibility is a concept that is built upon the law of non-contradiction. Things are logically possible if they do not imply a logical contradiction. That is, anything that is not logically contradictory is logically possible. More formally, any proposition whose opposite does not imply a contradiction is logically possible. Any description of a state of affairs that does not contain an implicit or explicit logical contradiction is logically possible. That is, if the sentence does not include a contradiction, like "Mike is a married bachelor," then the state of affairs that the sentence describes is logically possible. So it is logically possible that Mike is a bachelor. And it is logically possible that he is married. And it is logically possible that Mike (an unaided human) could fly. But it is not logically possible that 2 + 2 = 5, or that circles have sides, or that the Pythagorean Theorem is wrong, etc. Philosophers sometimes talk about logical possibilities in terms of possible worlds; there is a possible world where Arnold Schwarzenegger is the president of the United States. There is a possible world where you have super powers and can run faster than the speed of light. But there are no colorless red balls, no presidents who hold no political office, and no four-sided triangles.
Logically possible:
Donuts rain from the sky.
Dinosaurs have jobs in Wall Street finance.
Superman flies faster than a speeding locomotive.
Prof. McCormick is 20 feet tall.
Logically impossible:
A cube with 4 sides.
A mother who is not a parent.
A straight line that is not the shortest distance between two points.
A number that is both even and odd.
A female brother.
Natural Possibility We find regularities or patterns in the behavior of matter that we call Laws of Nature. These are different from the Laws of Logic that define Logical Possibility. A law of nature, such as "The speed of light is 186,282 miles per second in a vacuum," happens to be true in the physical world that we live in. But this law is not logically necessary. The speed of light might have been 186,283 miles per second, or even 50 miles per second. That is, it would not have implied a logical contradiction had it turned out that this was the case.
The laws of nature confine the behavior of matter in our world to a subset of the logically possible worlds. The laws of nature such as the universal law of gravitation, F = ma, and E = mc² determine the range of what states of affairs are naturally possible. So it is not naturally possible for an unaided human body to fly—the musculature, bone structure, and other physiological traits prevent it. But it is naturally possible (we think) to cure cancer. The laws of nature, which are different from the laws of logic, could have been different without logical contradiction. All natural possibilities are a subset of logical possibilities. That is, anything that is naturally possible is also logically possible, but not everything that is logically possible is naturally possible. Being able to move objects with your thoughts alone through telekinesis is ruled out by physics and not naturally possible, but there is no logical contradiction in the scenario.
The periodic table could have been different, gravity could attract at a different rate, or force could be equal to something other than mass times acceleration. If one of those different sets of natural laws were in place, then the range of events that are naturally possible would be different. The super powers of comic book heroes, such as teleporting, flying, telekinesis, super speed, and super strength, would probably be violations of the laws of nature, but they are not logically impossible. Miracles such as walking on water or bringing the dead back to life are violations of the laws of nature, but they are not logically impossible. Roughly speaking, your rule for figuring out whether something is logically impossible vs. naturally impossible should be this: Ask yourself, "Does this sentence describe a scenario that creates an explicit logical contradiction, like a square circle or a married bachelor, or does it merely describe a situation that runs contrary to the laws of physics as we find them in our natural world?" If it is the former, then it is a logical impossibility; if it is the latter, then it is a natural impossibility.
Logically possible, but naturally impossible:
An unassisted human body running a 5 minute mile underwater.
An elephant that can do a triple backflip.
A bottle of wine that never runs out.
A human body walking on top of water.
Bringing a dead person back to life with prayer.
Anything that is naturally possible is also logically possible. (But not everything that is logically possible is naturally possible, see above.)
Anything that is logically impossible is also naturally impossible.
Probable and Improbable Whether a claim could possibly be true should not be confused with whether it is probable. We will understand the difference between a probable claim and an improbable one as the threshold between when the evidence favors it and when the evidence tilts against it. Leaving aside a more complicated discussion of suspending judgment, let’s think of that line as 50%, or Pr(.5). When you toss a fair coin and it lands in your hand, you do not have a preponderance of evidence to believe or disbelieve that it has landed on heads. Your evidence that it is heads is exactly 50%. But as your evidence increases either for or against a claim, you should revise toward believing or disbelieving. If 51% of CSUS students are female, and we pick a random student Smith from a class, then by a very small margin our evidence favors the conclusion that Smith is female. If 85% of CSUS students drive gas cars, and Smith is a CSUS student, then we have even stronger evidence that Smith drives a gas car. So a claim is probable when your evidence on the whole indicates that there is a greater than 50% chance that it is true, and the claim is improbable when that chance falls below 50%. We will also understand “most” in sentences like “Most CSUS students are female” to mean that more than 50% are female, and we will treat “most” as equivalent to “probable.” “Possible,” however, means something different, as we saw in the last section. We should keep those distinctions separate as we consider the evidence and whether or not we should believe the conclusion of an argument. If the claim is impossible, either logically or naturally, then we have very strong reasons to believe that it is false. If it is probable, then our evidence favors believing it.
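The 50% threshold above can be stated compactly in code. In this sketch (the function name verdict is my own, purely for illustration), a claim counts as probable when the evidence puts its chance of truth strictly above 0.5:

```python
def verdict(probability):
    """Classify a claim by where the evidence places its chance of truth."""
    if probability > 0.5:
        return "probable"
    if probability < 0.5:
        return "improbable"
    return "evenly balanced"

print(verdict(0.51))  # probable: 51% of students are female, so Smith probably is
print(verdict(0.85))  # probable: stronger evidence that Smith drives a gas car
print(verdict(0.50))  # evenly balanced: a fair coin toss in your hand
```

Note that the 51% case and the 85% case land in the same category; the threshold marks when belief is favored at all, not how strongly it is favored.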
This tool is designed to help you master the concepts, principles, and ideas in Chapter 1, The Basics of Argument.
When you ask the tool to quiz you, it will quiz you about the concepts in the chapter until you've mastered them. When you make mistakes, it will explain and help you understand better. These quiz questions are for practice and learning, not for a grade.
The in-person quizzes and exams use the same structure as the practice tool:
same kinds of arguments,
same questions,
same definitions,
same distinctions.
The only difference is that the AI will not be there.
If you have practiced with the tool until the concepts are automatic, the in-person assessments will feel straightforward. If you have not, they will feel confusing and rushed.
Students who use the tool seriously should expect:
higher quiz scores,
more confidence identifying arguments,
fewer “I knew it but couldn’t explain it” moments.
This tool enforces the definitions used in this course. Philosophers can and do disagree about these definitions in other contexts. For this class, you are being graded on whether you can apply these definitions correctly and consistently.