Philosophy Without Reference:
An Introduction to Contemporary Philosophy
By Dustin Faeder
This bookella is intended to serve as a general introduction to philosophy. The title has a dual meaning: not only is it appropriate for those who have no background in philosophy, but it also tries to explain philosophy with as few invocations of past philosophers as possible. Professional philosophers devote much of their writing to interpreting past philosophers, often with reference to other philosophers. Consider, for example, the following sentence: “Just as Karl Marx’s dialectical materialism was an inevitable interpretation of the Hegelian dialectic developed in the Phenomenology of Mind/Spirit, which itself is a foreseeable outgrowth of Immanuel Kant’s Critique of Pure Reason, Arthur Danto’s re-imagining of Hegelian aesthetics in The Philosophical Disenfranchisement of Art constitutes a natural consequence of Immanuel Kant’s Critique of Judgment.” True or not, this sentence requires much prior knowledge. In order to grapple with such a sentence, one must read the referenced philosophers. I assume that the reader has never read a word of philosophy, and thus such references are minimized.
Part of the reason for adopting this method is that I lean toward an ahistorical conception of philosophy. It is my view that most philosophical questions can be posed, and understood at a rudimentary level, without invoking any names. Take, for example, the question of whether Plato’s Theory of Forms is the best way to understand mathematical objects (such as numbers). Rather than naming Plato and describing his Theory of Forms, we can instead think through the issue using neutral, non-referential terms, perhaps with the following series of questions: Are mathematical objects real? Are they physical? Are they part of our world? How do we gain knowledge of them? Invoking Plato’s theory simply is not necessary for understanding these questions, or for answering them.
Philosophical jargon, on the other hand, is unavoidable. It is part of the nature of language that different areas of inquiry require specific vocabularies. We could not, for example, understand biology without the words “cell” or “DNA.” Similarly, we could not understand physics without the words “force,” “speed,” or “mass.” Philosophy, too, has a language that is suited to its pursuit. Where appropriate, I introduce and define the relevant terms. With that in mind, let’s begin by defining the word Philosophy!
What is Philosophy?
Philosophy is often introduced by way of its word roots, or “etymology.” This is such a common method that it is difficult to imagine a Philosophy 101 course which does not begin by breaking apart “philosophy” into “philo” and “sophy”, which derive from the Greek words for “love” and “knowledge/wisdom.” Philosophy, then, is “love of knowledge” or “love of wisdom.” But what is the difference between knowledge and wisdom?
As a rough answer, knowledge is more about facts, while wisdom is more about action. We know, for example, that Saturn is the sixth planet from the sun, but it is difficult to reframe such a sentence in terms of wisdom. Being aware that Saturn is the sixth planet from the sun doesn’t make one wise. For most people, knowing this fact doesn’t affect our actions. Indeed, most facts are irrelevant to our actions—consider the spin of a random electron in a star that is 1000+ light years away. Therefore, we can have knowledge without wisdom.
But can we have wisdom without having knowledge? No, we cannot. In a simple sense, wisdom can be considered a form of knowledge, which might be called “know-how.” That isn’t the whole story, however, because we can know how to do things, even if they are things which would be unwise to do. We might know how to jump off a bridge, but that doesn’t mean it would be wise of us to do so. Wisdom, thus, is more than just know-how. Wisdom also requires having some idea of which goals are worth pursuing. And knowing which goals are worth pursuing, combined with knowing how to effectively pursue those goals, requires knowledge of a large set of background facts, such as the facts that gravity exists and falling can kill. The more complex the endeavor, the more background knowledge is required.
Wisdom, then, is knowledge plus an action-guiding principle. This brings us to the field of ethics. Ethics is a major branch of philosophy which tries to understand what constitutes good living. Ethics, more than any other field, focuses on understanding and developing action-guiding principles. Some questions central to ethics are as follows: What constitutes right (or wrong) action? How does one develop a moral (or immoral) character? How can one know what constitutes a good (or bad) life?
That last question is particularly important. Notice that we began by inquiring into knowledge, then turned to wisdom as knowledge plus an action-guiding principle, then ended up back where we started by asking how we can know about that action-guiding principle (i.e. how to lead a good life). This takes us in two directions simultaneously.
First, it provides an opportunity for introducing another major philosophical field, namely epistemology—i.e. the study of knowledge. The two questions at the heart of epistemology are “What is knowledge?” and “How do we obtain knowledge?” The study of the scientific method is a branch of epistemology, since the method is one of the principal ways we gain knowledge.
Second, it illustrates the interdisciplinary character of philosophy. We did not get very far in our study of ethics before epistemic questions appeared. This is because philosophical questions are often tightly bound up with other philosophical questions. Philosophy is best understood as an integrated network of abstract issues, each bearing on many other issues.
To further illustrate this point, consider the subfield of virtue epistemology, which fuses ethics and epistemology. Virtue epistemology investigates the ethical obligation we have to form correct beliefs. According to virtue epistemology, we have an ethical obligation to develop habits of belief formation that tend to produce true beliefs. In other words, getting the facts right is part of having a virtuous character. And since it is difficult to get the facts right if we do not tend to form beliefs based on defensible epistemic principles, we should try to cultivate a respect for reasons and evidence in forming our beliefs.
If virtue epistemology is correct, then epistemology and ethics are inseparable. This is why “philosophy” is partially derived from a word that is dually translated as both “knowledge” and “wisdom.” Just as epistemology is impoverished by a lack of ethical understanding, ethics is limited by a lack of epistemic comprehension. One can study parts of epistemology that aren’t particularly ethical, and one can study parts of ethics that aren’t particularly epistemic, but considerations from the other field lurk always in the background.
This interdisciplinary character exists throughout philosophy. The realm of philosophy is rich and diverse. Philosophical fields tend to be abstract, which means they deal with general propositions that have broad applications. And it is precisely because these fields are abstract that considerations from one field bear on other fields. It is worth mentioning that the highest degree one receives in Biology, English, Math, Physics, and any number of other fields is a PhD, or Doctor of Philosophy in a specific field (yes, one who earns a PhD in Philosophy has a Doctor of Philosophy in Philosophy). This is because abstract principles tend to apply across the board.
Thus far, we have only defined ethics, epistemology, and their hybrid—virtue epistemology. But that is far from an exhaustive list of philosophical fields: aesthetics, deconstructionism, existentialism, logic, metaethics, metaphysics, philosophy of action, philosophy of language, philosophy of mind, philosophy of religion, philosophy of science, phenomenology, and political philosophy are other major branches. Philosophers with a more historical bent would surely also include as fields in their own right various philosophical periods, such as ancient philosophy, modern philosophy, German idealism, and postmodernism.
Certain people gravitate more toward some areas of philosophy than others. Specialists in ancient philosophy, for example, are unlikely to do much work in deconstructionism. Similarly, aestheticians and phenomenologists tend to focus on different issues than logicians. Philosophy is far too big for any person to fully master. But most of these fields are interrelated in such a way that broad philosophical education enhances global comprehension.
Logic
Logic is, perhaps, the most subject-neutral branch of philosophy: it is the field which focuses on correct reasoning. That definition, however, demands clarification. Consider the words “reasonable” and “logical.” The difference between these words is that “reasonable” can include evaluations of an action’s quality, while “logical” is more sterile. Remember Spock, from Star Trek? He was well known for rejecting propositions as being “illogical.” For example, he once stated that it is illogical to sacrifice many lives to save a single life. Technically, he was wrong. While it may be unreasonable to sacrifice many lives to save a single life, there is nothing inherently illogical about it. It is perfectly consistent for you to care so much about one person, for example your spouse or child, that you are willing to sacrifice the lives of many strangers to save your loved one. While reasonableness includes an evaluation of ethical principles at work in one’s actions and judgments, logic focuses solely on the relationships between propositions, such as whether two propositions are contradictory or whether one proposition follows deductively from another.
Consider the statement that it is always wrong to sacrifice many lives to save a single life. If that were true, it would be illogical to deduce that it is sometimes permissible to sacrifice many lives to save a single life. This is because these two propositions contradict one another. Logic shows us that exactly one of these propositions is true. Logic only cares about the fact that they can’t both be true, while which one is true is a matter for reason, generally speaking. Whether it is reasonable to believe that it is always wrong to sacrifice many lives to save a single life depends on many factors, such as the nature of morality, the quality of the life saved, and the qualities of the lives sacrificed.
Logic has both informal and formal elements. Informally, logic is the study of verbal arguments. An argument is a premise, or set of premises, and a conclusion. For example, a person might argue that all humans are mortal, and you are a human, therefore you are mortal. It is a matter of logic that this argument is valid. An argument is valid when the premises necessitate the truth of the conclusion. In other words, an argument is valid when, if the premises were true, the conclusion would also have to be true. It would be equally valid to argue that all humans are immortal, and you are a human, therefore you are immortal. Validity does not require the premises or the conclusion to be true; it merely describes the logical relation between the premises and the conclusion.
Soundness is validity plus truth. If an argument is valid, and its premises are true, then the argument is sound and its conclusion is true. Because all humans are indeed mortal, and you are a human, you are mortal. That is a true conclusion, which we know because our premises are true and the sentences bear the logical relation of validity. Informal logic investigates these types of relationships, with a particular emphasis on various argument forms.
Formal logic is very different. It looks like math. That is no accident, because philosophical formal logic, mathematical logic, and computer science overlap significantly. In essence, formal logic is constituted by a set of systems intended to enable proof or disproof of carefully defined propositions, typically represented as P (or Q, R, etc.).
A particularly simple system is known as sentential logic, or the logic of sentences. This system uses only a handful of operators. “P and Q” is represented by P & Q. “P or Q” is represented by P v Q. “P implies Q” (i.e. Q follows from P) is represented as P->Q. “Not P” is ~P. The claim that P and Q imply each other, so that they are true together or false together, is represented as P<->Q (sometimes written P≡Q).
Consider, for example, the following argument: If it is perfect, it is quirky. It is perfect. Therefore, it is quirky. In the formal language, this is represented as P->Q, P, therefore Q. In other words, if P->Q, and you have P, you can deduce Q. It follows necessarily. Formal languages care only about validity, not truth, so for the purposes of formal logic it doesn’t matter whether it’s true that everything perfect is quirky.
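For readers who like to see things verified mechanically, validity in sentential logic can be checked by brute force: list every possible assignment of truth values to P and Q and confirm that no assignment makes the premises true while the conclusion is false. The short Python sketch below is purely illustrative (helper names such as is_valid are my own invention), but it captures the idea.

```python
from itertools import product

def implies(p, q):
    # Material implication: P -> Q is false only when P is true and Q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    # An argument is valid when every assignment of truth values that makes
    # all of the premises true also makes the conclusion true.
    for p, q in product([True, False], repeat=2):
        if all(premise(p, q) for premise in premises) and not conclusion(p, q):
            return False  # counterexample found
    return True

# Modus ponens: P -> Q, P, therefore Q.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q))  # True
```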
Other logical systems, or “logics,” are more complex. The general idea is that you can take verbal arguments, translate them into the syntax of a logical system, and use that system’s rules to evaluate the validity of the argument. At a meta-level, logicians develop and examine different logics for various systemic properties that we want logics to have, such as soundness (everything the system lets you prove really does follow) and completeness (everything that really follows can be proven within the system). For present purposes, however, we need not delve into other logical systems or their meta-features.
But let’s dwell on sentential logic for just one more proof. It is well-recognized that any and every proposition follows from any contradiction. But this is not obvious without a proof. A contradiction, formally speaking, is a proposition and its negation. Contradictions are always false. An example of a contradiction is as follows: “You are human and you are not human.” Now, from any conjunctive proposition (a statement with an “and” in it), you can deduce the component pieces individually. Thus, from our contradiction, you can deduce two sentences: “You are human”; “You are not human.” Our next logical step is the addition of “or anything.” Because you only need one side of a disjunctive proposition (a statement with an “or” in it) to be true, you can validly add “or anything” to any proposition. Here, we can apply this principle as follows: “You are human” implies “You are human or fish are basketballs.” But, since you have already deduced that “You are not human,” you can reach the conclusion that “Fish are basketballs.” Let us lay out this argument more clearly:
(1, Initial Contradiction) You are human and you are not human.
(2, from 1) You are human.
(3, also from 1) You are not human.
(4, from 2) You are human or fish are basketballs.
___________________________________________________________________________
(Conclusion, from 3 and 4) Fish are basketballs.
But this proof only shows that you can deduce “Fish are basketballs” from “You are human and you are not human.” In order to prove the more general statement that any and every proposition follows from any contradiction, we need an abstract logical system. Formally, a contradiction is represented as P & ~P. Thus, we can conduct the proof abstractly as follows:
(1, Initial Contradiction) P & ~P
(2, from 1) P
(3, also from 1) ~P
(4, from 2) P v Q
______________________________________________
(Conclusion, from 3 and 4) Q
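The same brute-force check, sketched below in Python (again purely illustrative, using a hypothetical helper called entails), shows why the abstract proof generalizes: because P & ~P is false on every assignment of truth values, there is no assignment on which the premise is true and the conclusion false, so any Q whatsoever follows, and so does ~Q.

```python
from itertools import product

def entails(premise, conclusion):
    # Entailment holds when no assignment of truth values makes the premise
    # true while the conclusion is false.
    for p, q in product([True, False], repeat=2):
        if premise(p, q) and not conclusion(p, q):
            return False
    return True

contradiction = lambda p, q: p and not p  # P & ~P: false on every assignment

print(entails(contradiction, lambda p, q: q))      # True: Q follows
print(entails(contradiction, lambda p, q: not q))  # True: and so does ~Q
```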
The significance of this proof is that it shows us how problematic contradictions are. Since anything and everything follows from any contradiction, every contradiction implies every proposition, as well as the negation of every proposition, including the negation of the contradiction itself. This should caution us against accepting contradictions as true.
Another way of understanding our result is with reference to the very purpose of philosophy. It is virtually undisputed that part of the natural operation of the human mind is to draw conclusions based on our beliefs. But consider how this applies to contradictions. Since anything and everything follows from every contradiction, what is the likely result for our belief systems if our worldviews are riddled with contradictions? Who knows!? The more contradictions are part of one’s consciousness, the more likely one’s mind is to reach insane conclusions. One of the purposes of philosophy, then, is to root out and eliminate contradictions in our thinking. In other words, the pursuit of philosophy, particularly logic, constitutes a perpetual battle against insanity and stupidity.
Metaphysics
Logic has a tremendous presence in metaphysics, which is one of the largest philosophical fields. Metaphysics is so commonly thought of as being at the core of philosophy that PhD programs often boast a strong “M&E” component (metaphysics and epistemology). If you ask a non-philosopher what metaphysics is, they might reply with a vague reference to astrology, crystals, or religion. While there is a sense in which those are part of metaphysical discussions, that isn’t quite how philosophers use the term.
Metaphysics, being such a broad field, is difficult to define, but is probably best encapsulated as the study of existence. Some questions in metaphysics include: What is the origin of the universe? Does a god (or do gods) exist? Does free will exist? What types of objects exist? What are those objects like?
One of the most influential ways of understanding metaphysics is known as ontology, or the study of being. Most philosophers take ontology to be merely a branch of metaphysics, but this particular branch is such a large portion of metaphysics that it could very easily be understood as the trunk of the tree (i.e. as nearly the whole of metaphysics). A common method of ontology is to try to list the fundamental categories of existence. The potentially fundamental categories tend to include god, matter, mind, time, properties, possibilities, and abstract objects (such as numbers).
Much ink has been spilt debating the proper categorization schema. The majority of ontologists tend to think that there are 3-7 fundamental categories. Many philosophical problems arise in the context of these debates. Physicalists, for example, hold that there is only one fundamental category, namely matter, and that everything else is merely a feature of matter. They tend to argue that minds are identical to brains and that numbers are nothing more than sets of concrete objects. These are attempts to reduce one category to another. Reducibility, defined as the ability to fully explain or understand one category in terms of another category, is a running theme of ontological debate.
Philosophers who, like physicalists, believe everything can be reduced to a single category are called “monists.” Those who believe there are two fundamental categories are called “dualists.” One of the most longstanding ontological debates concerns whether mind and matter are one and the same. The question of how consciousness can be a product of neural activity is known as “the hard problem.” Properly speaking, this is a question in philosophy of mind, but it has ontological overlap.
Personally, I think philosophers should use the terms “tryptist,” “quadrist,” “quintist,” etc. for ontological views that there are, respectively, 3, 4, 5, etc. fundamental categories. Using that terminology, I am an ontological tryptist. I believe that matter/time is a joint fundamental category because Einstein’s theory of relativity teaches us that space-time is a relativistic continuum. I believe that mind is not reducible to matter, so is its own category. Finally, I believe that possibilities/abstract objects are their own joint category.
My reason for believing this is that quintessential abstract objects, including mathematical propositions such as 1+1=2 and facts about analyticity, seem definable in terms of possibilities. It is necessary that 1+1=2. In other words, it is not possible for 1+1 not to equal 2. Similarly, it is not possible for contradictions to be true, and it is not possible for tautologies to be false. Because I do not know how to understand abstract objects apart from some notion of possibility, I consider them to be a joint category.
This makes me an ontological tryptist who believes that the fundamental categories of existence are mind, matter/time, and possibilities/abstract objects. I don’t consider properties to be a separate category because they are part of the entities of which they are a property. I’m no expert in ontology and my view is open to revision, depending on what arguments are presented, but this is an example of ontological (and, hence, metaphysical) thinking in action.
Some metaphysicians distinguish between ontology and metaphysics as a whole by contrasting ontology with cosmology. Cosmology is the study of the origin of the universe. But if ontology includes god, matter, and time, it is difficult to see how ontology doesn’t also include cosmology. Under some understandings of ontology, it is identical with metaphysics.
Another topic in metaphysics that might not fit squarely within ontology is free will. It might be conceived of as a property of minds, or perhaps of physical bodies. Physicalists sometimes even deny that free will exists! They say, for example, that everything is physical, and all physical objects are governed by the laws of physics, so everything that happens is already determined to occur, including human actions. This view is called determinism.
A cluster of views here are worth identifying. First, determinism is often broken up into “hard” and “soft.” Hard determinism is the view just described, namely that everything is determined and if everything is determined we can’t have free will, so we don’t have free will. Soft determinism agrees that the world is merely one big, complex chain of causes and effects, but holds that we have free will nonetheless.
Put differently, one can question whether determinism is compatible with free will. Some philosophers, called incompatibilists, believe that if determinism is true, we can’t have free will. Thus, hard determinists are incompatibilists who believe that determinism is true. Other philosophers, called compatibilists, believe that we can have free will even if determinism is true. Thus, soft determinists are compatibilists who believe in determinism.
The primary reason determinists tend to be determinists is that they have a great respect for science. Physics, in particular, has enabled so many technological advances that determinists often feel strongly bound by the laws of physics. Free will, in their view, would constitute an uncaused cause, which simply can’t be true if we take physics seriously. But this position has been undermined by physics itself, since our best physics has uncovered quantum randomness. And if randomness is a feature of the universe, then uncaused events occur, which disproves determinism. It is still a mystery, though, how quantum randomness might fit into an explanation of free will, because randomness is not identical with choice.
Philosophy of Religion
In many people’s minds, free will is linked to the existence of a god. Therefore, it is natural for us to transition to philosophy of religion, which some consider to be a branch of metaphysics. To begin, it is worth distinguishing between a few different ways of approaching these issues. Philosophy of religion, religious studies, and theology are distinct enterprises. Theology proceeds from the perspective of belief in god(s), inquiring into the nature of god(s), including man’s relation to god(s). The field of religious studies is essentially an anthropological or sociological endeavor that seeks to understand the religious belief systems of various cultures. Philosophy of religion does not assume the existence of god(s), but rather examines arguments for and against the existence of god(s). Philosophy of religion sometimes overlaps with religious studies and philosophy of mind in theorizing about the reasons why cultures hold their religious beliefs. More generally, philosophy of religion sometimes investigates the socio-psychological functions of religion, such as communal bonding and ethical guidance.
Probably the best argument for the existence of god is known as the ontological argument. This argument, simplified, goes as follows:
(1) God is perfect.
(2) Existence is valuable.
(Sub-conclusion) Therefore, the concept of god entails existence.
(Conclusion) Thus, if you have a concept of god, you must believe in god.
This argument is difficult to process. It raises questions about the natures of beliefs, concepts, existence, perfection and value that are not easily understood. I do not pretend to give the ontological argument a thorough treatment here. I do, however, think that this argument forces one to think carefully about the attributes or properties of god, which leads directly into the best argument against the existence of god, namely the argument from evil:
(1) God is omnipotent (all-powerful).
(2) God is omniscient (all-knowing).
(3) God is omnibenevolent (all-good).
(Sub-conclusion) Therefore, god would prevent bad things.
(4) But bad things happen.
(Conclusion) Thus, god does not exist.
The argument from evil more carefully specifies god’s perfection by analyzing it (i.e. breaking it apart) into several distinct positive attributes. The argument holds that if god knows about all evil, has the power to stop all evil, and is well-intentioned, then god would stop all evil. But since evil exists, god does not exist (or, minimally, does not have all three attributes).
A typical response to the argument from evil is that god gave humans free will, which is more valuable than the bad things that result from humans having free will, which explains why god allows bad things to happen. This response fails, however, because it does not explain the harm caused by natural disasters such as earthquakes, fires, floods, tornados, or volcanic eruptions. Such events are usually not a product of human free will.
Religious folk sometimes try to argue that this does not disprove the existence of god because it is somehow all part of god’s plan and is for the best. This position quickly becomes ridiculous, however, as you begin to detail the horror and devastation of each and every natural disaster ever to have occurred. Was it really for the best that Pompeii was wiped out by Vesuvius? Was the 2004 tsunami in the Indian Ocean, which killed hundreds of thousands of people, really for the best? What about individuals who were severely injured in natural disasters but lived for weeks in agonizing pain before finally succumbing to their injuries? Was it best that they felt as much pain as they did while they died? Affirmative answers to such questions are absurd and should be rejected. Indeed, one wonders what to make of the view that “everything is for the best.” On its face, it would seem to eradicate any distinction between good and bad, which makes ethics a difficult endeavor indeed.
The argument from evil is powerful and seems to show that no being, including god, possesses omnipotence, omniscience, and omnibenevolence. There are other problems with this definition as well, such as the question of whether an omnipotent being can make a stone that it can’t lift. The idea of omnipotence might be confused. But if this is not an adequate concept of god, then what is? And if we do not have a workable concept of god, are we even capable of believing in god? Generally speaking, a concept is required for belief. What if I told you that I own a bemura? Would you believe me? Would you even be capable of believing me unless I explained to you what a bemura is, or at least what I mean by “bemura?” Of course not. In a sense, the degree to which we have a developed concept is the degree to which we have the ability to believe in that concept. Religious folk sometimes offer the inconceivability of god as a solution to this problem, without realizing that it destroys their ability to believe.
Another argument for god’s existence is that everything has a beginning, so the universe must also have a beginning. This is a cosmological argument. Religious folk often reject the big bang theory, which holds that the universe began in an extremely hot, dense state and has been expanding ever since, with matter eventually forming galaxies, solar systems, and planets. The big bang theory is our best scientific theory of the universe’s origin. But religious folk tend to be dissatisfied with the big bang because it doesn’t answer the question of where matter initially came from, so they posit the existence of god to fill in the missing origin. They often think of god as an alternative to the big bang, but god is compatible with the big bang theory because god could have created the universe via the big bang.
Unfortunately, positing god as an origin doesn’t work because it doesn’t explain god’s origin. It replaces one mystery with another. Saying that god always existed doesn’t help, for two reasons. First, if god always existed, why couldn’t the universe always have existed? Second, our best physics tells us that matter, space and time are inextricably interconnected, so it is nonsensical to speak of anything as having existed before the universe. Some theorists have described god as existing “outside of time”, but I highly doubt we can conceive of that. It is similar to stating that “it is noon on the sun.” Throwing conceptual confusion on top of an unanswerable question doesn’t help clarify anything.
Even if we have enough of a concept of god to make religious belief possible, and even if we assume that god exists, there are still many questions to answer. Religious folk often put god forward as the source of free will, morality, and eternal life. But why must god have any of these features? Couldn’t god simply exist, on god’s own terms, separate from us and not concerned with us? The existence of god doesn’t necessarily mean that god created us or that we have free will. God might not care how we act, might not make sure people receive their just deserts, and might not provide for the survival of consciousness after death. In other words, god might see us as we see ants or dust mites. Indeed, if god were as perfect as religious folk tend to think, it would be mysterious why god would trouble with humans at all.
Religious folks might, at this point, have the response that I am leaving out the central religious concept: faith. I have not yet used the word “faith,” choosing instead to write about “religious belief.” What is the difference? The first thing to note is that not all religious beliefs are instances of faith. For example, the belief that Judaism entails certain dietary restrictions is a religious belief (i.e. a belief about religion), but it reports a simple sociological fact. The belief that Jesus lived is also typically not an instance of faith because various historical artifacts and documents confirm his life. Finally, the belief that the existence of god is consistent with the big bang is not an instance of faith because, as we have already discussed, god could have caused the big bang. The compatibility of god and the big bang is a simple logical truth.
Moreover, people can have faith about any number of non-religious topics. One can, for example, have faith that a family member will survive a dangerous surgery with a 50/50 survival rate, or that a friend’s job application will be accepted even though they are not the best candidate. And one can have faith that one will win the lottery, despite one-in-a-million odds. Whether a belief counts as faith, then, depends not on subject matter, but on reasons.
To provide a clear definition, then, faith is belief without sufficient reason. When you go outside and see that it is raining, you have sufficient reason to believe that it is raining. You therefore do not have faith that it is raining. Similarly, you do not have faith that the sun will rise tomorrow. After you have lived long enough, with the sun rising every day, you have sufficient reason to believe that the sun will rise the following day. In order for belief in a proposition to count as faith, one must have, on balance, no significant reason to believe that the proposition is true rather than false. This topic contains much more nuance, which can only be appreciated by delving deeper into epistemology.
Epistemology
Earlier, we defined epistemology as the study of knowledge. But how does one study knowledge? One does not simply go outside, find a piece of knowledge lying on the ground, and take it back to the lab for dissection. The best place to start is with Descartes. Yes, I know this is a betrayal of my commitment to introduce philosophy without reference, but in this lone instance it simply cannot be helped. Descartes is an absolutely crucial focal point for epistemology. I guarantee you that, if you ever take an epistemology course, you will study Descartes. Fortunately, you have probably already heard his famous insight, which is the most well-known philosophical phrase ever: “I think, therefore I am.”
It seems so simple, but what does it mean? The key to understanding this phrase, known as The Cogito (from the Latin, “Cogito, ergo sum.”) is to learn that Descartes was trying to find knowledge. He wanted to identify the facts about which he could not be wrong, so he could then use those facts as a firm foundation for building a correct worldview. As a method toward this end, Descartes engaged in what is known as methodological doubt.
Descartes decided to temporarily set aside all beliefs about which he could be mistaken. This included beliefs based in perception: anything Descartes saw, smelled, heard, tasted, or felt could be misperceived, so he set those categories to the side. That included all memories of sensory experiences. Descartes also set aside mathematical truths such as 1+1=2 because he could be mistaken about those. Have you ever solved a math problem, with a feeling of great satisfaction, only to discover that you accidentally made an error and got the wrong answer? Descartes reasoned that this could be the case for all of math, so to be absolutely, unquestionably certain that he was identifying knowledge, he jettisoned mathematics.
The one thing that Descartes was unable to doubt, however, was his own existence. In order to doubt, one must exist. Therefore, it is impossible to doubt one’s own existence. More generally, he realized, one cannot think without existing. The existence of any mental state necessitates the existence of a mind having that mental state. Thus, the single bedrock fact that Descartes discovered, upon which he attempted to build a worldview, was his existence.
Students in epistemology courses are taught Descartes as an example of radical skepticism. Radical skepticism is the view that the only thing one can know is one’s own existence. Students studying radical skepticism often reply to any affirmative statement you make with “but how do you KNOW?” When taught incorrectly, radical skepticism can lead people to believe that we have very little knowledge indeed. But radical skepticism is not a livable worldview, for without knowledge of our physical surroundings we would quickly die.
If radical skepticism is obviously false, then why is it so ubiquitously taught? It is decidedly not because we can derive an entire worldview from the bare fact of our existence. Rather, it is because of what we learn from methodological doubt. We learn, for example, that if knowledge is to guide our actions, we cannot demand absolute certainty of it. In other words, some degree of certainty, infallibility, or indubitability less than 100% must be good enough to count as knowledge. The perhaps unsettling truth Descartes teaches us is not that we have no knowledge. It is not even that some of the knowledge we think we have is not knowledge. What we learn from Descartes is that even though some of our knowledge is false, it is knowledge nonetheless. Without explicitly saying so, Descartes opened the oxymoronic can of worms labeled “false knowledge.”
Another way to think about Descartes is to question the appropriateness of skepticism. Skepticism need not be radical. Skepticism comes in degrees and, sometimes, is localized to specific areas of inquiry rather than being global. It is perfectly reasonable, for example, to believe that physics mostly has things right, while astrology mostly has things wrong. Skepticism toward physics is much less defensible than skepticism about astrology, given the success of physics in enabling us to explain and control our environment versus the, well, complete and total failure of astrology to produce any useful or verifiable results.
The task of epistemology, then, can be understood as the development of criteria for belief formation which optimize our flourishing. We don’t want our epistemic standards to be so strict that we only have knowledge of our own existence, otherwise we wouldn’t have enough knowledge to survive. But we also don’t want our epistemic standards to be so lax that any belief counts as knowledge, for that would also jeopardize our survival. The trick is to find the right balance between these extremes—some way to distinguish between those beliefs which should guide our actions and those which should be discarded.
One popular analysis of knowledge defines it as “justified true belief.” As you might imagine, each of these terms has been closely scrutinized and has connections to other areas of philosophy. “Justification” has a second home in ethics, “truth” is also a topic in metaphysics, “belief” is extensively treated in philosophy of mind, and all of them are subject to considerations from philosophy of language. The more one digs into these fields, the more one comes to comprehend what is meant by “justified true belief,” which has proven to be an effective, if rough, heuristic for identifying knowledge. And the more one comes to understand the heuristic, the better one can apply it, which means one incrementally improves one’s habits of thought. In short, digging deeply into philosophy is an effective way to become smarter.
Briefly put, to have a belief is to accept a proposition as true. This entails rejecting the opposite of the proposition. For example, if you believe that doctors can effectively treat ear infections, then you do not believe it would be useless for people to go to a doctor when they have an ear infection. This leads into a fairly common theory of belief, namely the dispositional theory of belief. Under this theory, belief in a proposition is constituted by one’s dispositions to act in certain ways under certain circumstances. So, for example, your belief that doctors can effectively treat ear infections would be constituted by the increased likelihood that you will go to a doctor when you have an ear infection, or that you will take your children to a doctor when they have an ear infection. This respects the fact that beliefs have an action-guiding function.
Most epistemologists agree that you cannot have knowledge of a proposition that you do not believe. We have already seen that this is more complex than it might at first seem: you must have the requisite concepts to understand a proposition before you are able to believe the proposition. Moreover, since concepts can be more or less developed, belief also comes in degrees. The degree of belief one has, therefore, can be roughly understood in terms of the degrees of comprehension and confidence one has in a proposition. And, as we just discussed, this can be reframed in terms of dispositions that are more or less developed. The exact strength of belief one must have in order to make knowledge possible is difficult to pinpoint, but for our purposes it should suffice to say that the belief must be strong enough that one tends to rely upon the belief and not the logical opposite (i.e. negation) of the belief.
Truth is similarly difficult to define. In its simplest and most obvious sense, a proposition is true when it fits reality. This is known as the correspondence theory of truth. Consider the following proposition: “you are reading this book right now.” That statement is true because you are reading this book. This correspondence between the proposition and reality makes the proposition true. It is worth mentioning that not all propositions are capable of being true. A classic example is a man who is right on the borderline between bald and not bald. How many hairs must he lose to qualify as bald? How many hairs would he have to grow to qualify as not bald? The answer isn’t clear. Arguably, he is neither bald nor not bald, but rather “balding.” The existence of vagueness limits the applicability of truth as a concept. To use a term of art, where vagueness exists a proposition may fail to be truth-apt, that is, capable of being determinately true or false. Just as with belief, sometimes truth comes in degrees: some propositions can be more or less true.
But even though some propositions may not be truth-apt, many propositions are perfectly capable of being true or false. For those propositions, a rough correspondence theory is good enough for present purposes. It makes intuitive sense that if one has a belief, and that belief does not match the world, then that belief is false and therefore does not count as knowledge. It is, therefore, reasonable to believe that truth is a requirement of knowledge.
True belief, however, is not sufficient for knowledge. This is because people can take wild guesses, for no reason, which turn out to be correct. We would hardly call such beliefs knowledge, despite their truth. Consider, for example, fans of opposing sports teams that are evenly matched. As the game approaches, a fan of one team proclaims his confidence that his team will win. A fan of the other team, in turn, proclaims his confidence that his team will win. Suppose that the game is close but a winner is decided. Does this mean that the fan of the winning team “knew” that his team would win? Of course not. The fan whose evenly-matched team won had no justification for claiming that his team would win. At best the odds were 50/50, and worse if a draw is possible.
This point is even easier to see if we consider a person who predicts an extremely unlikely event. Consider, for example, the worst team in the league playing the best team in the league, with similarly boasting fans. Expert odds-makers give the worse team a 1% chance of winning. When they do win, is it thereby proven that the fan of the worse team knew that his team would win? Again, the answer is no, and even more emphatically. The less reason we have to believe that a proposition is true, the less justification we have for believing the proposition. Under the “justified true belief” definition of knowledge, then, the less reason we have for believing a proposition, the less likely we are to know it.
What we have just seen is that the “justified true belief” definition of knowledge has several moving parts. Arguably, all three of its component parts can be fulfilled in degrees, which makes it difficult to determine when the definition as a whole is fulfilled. Thus, what may at first have seemed like a promising definition of knowledge turns out to be far from clear. Significantly, philosophers in the latter part of the 20th century developed critiques of this definition. In particular, it was recognized that many beliefs fit this definition’s criteria, but don’t count as knowledge. This class of beliefs relies on the fact that the reason we might have to form a belief is not necessarily related to the facts which make the belief true.
The easiest way to see this point is through the disjunctive connector “or.” Consider the statement “the Earth is round.” It follows from this statement that “the Earth is round OR you are 100 feet tall.” As we saw in the section on logic, this is because only one side (or “disjunct”) of a disjunctive statement needs to be true in order for the entire disjunction to be true. And since the Earth is indeed round, it is also true that “the Earth is round OR you are 100 feet tall.”
This matters because your beliefs are sometimes false, despite your having justification for believing them. Those beliefs entail further beliefs with the following structure: “this justified but false belief is true OR anything else is true.” Implicitly, then, you have a huge class of disjunctive beliefs that includes all combinations of your justified yet false beliefs and all other propositions. And since there are facts which you have no reason to believe, but are nonetheless true, you have a large class of justified true beliefs which are not knowledge.
The reason this class of beliefs does not count as knowledge is that your justification for believing the disjunctive proposition comes through one side of the disjunction, while what makes the disjunctive proposition true comes through the other side, which you have no justification for believing. Basically, the justification and the truth-maker don’t line up. To extend the sports example, this makes your beliefs in this class of disjunctions just as randomly true as the sports fan who believes the worst team in the league will beat the best team in the league, and whose belief ends up being true, despite very low odds.
Consider the fan of the best team in the league, which sports analysts give a 99% chance of winning. When his team loses, it turns out that his belief that his team would win was false. It was a justified yet false belief. That belief entailed all disjunctive combinations with any and all propositions he had no reason to believe, but happened to be true. While all those disjunctive beliefs were justified and true, they did not count as knowledge because his only justification for believing them was that his team was favored to win, when in fact it lost.
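For the programmatically inclined, the structure of these counterexamples can be laid out in a few lines. The Python sketch below is a hypothetical illustration (the names facts and justified_beliefs are my own): justification flows through one disjunct, truth flows through the other, and the two never line up.

```python
facts = {"favored_team_wins": False, "earth_is_round": True}  # what is actually the case
justified_beliefs = {"favored_team_wins"}  # what the fan had good (99%) reason to believe

# Disjunction introduction: believing P licenses believing "P or Q" for any Q.
left, right = "favored_team_wins", "earth_is_round"

justified = left in justified_beliefs  # justification comes from the left disjunct
true = facts[left] or facts[right]     # truth comes from the right disjunct

print(justified and true)  # True: a justified, true (and believed) disjunction...
print(facts[left])         # False: ...whose justifying disjunct is false, so it is not knowledge.
```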
The upshot of this rather convoluted argument is that the “justified true belief” definition of knowledge has widely recognized problems. This leaves epistemology in the difficult position of not quite knowing what knowledge is. But just because we don’t have a perfect definition of knowledge doesn’t mean we have no idea whatsoever about knowledge. Despite the problems we have described, it still seems that justification, truth, and belief are requirements for knowledge. It is difficult to imagine knowing a proposition if we don’t at all believe it, if it isn’t at all true, or if we have zero justification for believing it. Justification, truth, and belief may still be necessary, even if they aren’t sufficient.
Faith
Earlier, in our philosophy of religion discussion, we identified faith as an epistemic position rather than as a belief tied to any particular subject matter. Having discussed epistemology, we are now in a position to better understand faith.
In our discussion of epistemology, we saw that, although the “justified true belief” definition of knowledge is incomplete, it is nonetheless useful. Consider, for example, faith in the existence of god. By definition, this means a person believes in god without having sufficient reason to believe in god. Being charitable, let us assume that god exists. That means two prongs of our heuristic are met, namely truth and belief. But that still falls short of being knowledge because such a person would not have justification for their belief. Generally speaking, faith in a proposition is incompatible with knowledge of that proposition. In other words, if your belief is an instance of faith, it is not knowledge, and if your belief counts as knowledge, it is not an instance of faith.
Faith, then, is an exercise in irrationality. To support faith as a method of belief formation is to reject epistemic standards. The primary thrust of the entire field of epistemology is that our beliefs should be based on reasons, while faith is belief in the absence of reasons. Just as studying epistemology makes one smarter, accepting faith as a method of belief formation makes one dumber.
Faith also contains ethical problems. Consider again virtue epistemology, which holds that we have an ethical obligation to develop habits of belief formation which tend to result in true beliefs. One reason to believe that it is a virtue to try to form true beliefs is that false beliefs often have pernicious effects. Children regularly die because their parents believe prayer, rather than doctors, will heal their children, or because parents believe that vaccines will harm rather than help their children. And many atrocities have been committed in the name of faith-based beliefs. Suicide bombers who kill dozens or hundreds of people are motivated by their faith. The Crusades and the Spanish Inquisition were motivated by faith.
Insofar as respect for evidence and reason help us form true beliefs, affirmations of faith destroy our ability to grasp reality. Religious folk tend to argue, however, that religion is different. They sometimes say that yes, in most realms of belief, it is best to base our beliefs on reasons, but that religious beliefs are the lone category where faith is appropriate. Since we cannot know whether god exists (or gods exist), we are free to believe anything we want.
While this argument helps explain why faith is often misinterpreted as religious belief, rather than an epistemic position, the argument is difficult to understand. It is not enough merely to claim that religion is special. Rather, one must provide some reason to believe that religion is special in such a way that ordinary epistemic principles should be suspended. Faith is not self-justifying, even if it can somehow be justified. But it is hard to imagine a justification.
On the contrary, there are reasons for believing that no field should be immune to epistemic considerations. Neuroscience, for example, has bearing on this issue in multiple ways. Our best neuroscience tells us that consciousness is a function or product of our neural activity. This is difficult to deny because there are myriad cases where a person has sustained a brain injury and, as a result, suffers from altered consciousness. Significant progress has been made in mapping our neural architecture, such that neuroscientists can predict with a high degree of success which deficits will result from injuries to specific parts of the brain. Certain regions of the brain control vision, hearing, emotional regulation, and other cognitive functions.
Religions, however, often include a belief in the survival of consciousness after death. Sometimes this is described with the word “soul,” although it is difficult to understand what that word means. It seems to mean something like character, mind, or personality, but in an immutable and indestructible sense. Unfortunately for religions, it is difficult to reconcile this with neuroscience. At what point in one’s life are the contents of one’s soul determined? At conception, when we are merely a tiny clump of cells? At birth, before we have formed any significant knowledge, linguistic capacity, or memories? When our brains finish forming, somewhere in our early-to-mid 20s? And what happens to the soul of a person, say a 30-year-old, whose frontal lobe is damaged, significantly impairing cognition and emotional regulation? Is the victim of such an injury stuck with their severely impaired consciousness even after death, or do they regain their ordinary mental functioning once their brain dies and their soul takes over? Insofar as religious folk have faith that consciousness survives death, they must reject highly reliable science.
As we have already seen, however, the existence of god (or of gods) is compatible with the cessation of consciousness at death. There is no necessary connection between belief in god(s) and belief in souls. And this, religious folk might argue, means that the argument from neuroscience still allows them to believe in god(s), perhaps as an origin story or as a source of morality, without offending general epistemic principles.
But neuroscience also has a response to this argument. The more we study the brain, the more it becomes clear that the brain is a highly integrated network. Although some parts of the brain are responsible for specialized functions such as sight or hearing, more general features of consciousness, such as beliefs and memories, are not as easy to locate. It is not as though our memories from when we were 5 are stored in the 5th millimeter back from the front of our brains, our memories from when we were 6 are stored in the 6th millimeter back from the front of our brains, etc. Beliefs and memories are more diffuse. Our worldviews, in a sense, develop holistically rather than in a compartmentalized manner. Because our beliefs, memories and worldviews are more distributed and integrated than some other cognitive functions, they are difficult to contain in the sense religious folk presume is possible. To make the point more specifically, neuroscience gives us good reason to believe that a rejection of epistemic principles in one cognitive realm will also affect other cognitive realms.
The entire point of epistemic standards, such as a respect for reasons, is that they are supposed to help us distinguish between true and false beliefs, in all realms of knowledge. Admittedly, there is some sense to the position that our epistemic standards must vary depending on the realm of inquiry. The research techniques applied in the sciences differ from those of the humanities. But no discipline wholly rejects the role of reasons in the way that religious folk attempt to do when they support faith as a valid method of belief formation. To reject epistemic standards in any field, including religion, is to delegitimize epistemic standards as a whole. In other words, faith in religion, just as it would be in any other field, is a form of irrationality. Indeed, it is a form of insanity.
This is not to say that religion is all hogwash and can never be worthwhile. I am not opposed to religious belief. I am merely arguing that religion cannot abandon epistemic principles and maintain any semblance of reasonableness. Religious beliefs, like any other sort of belief, should be supported by evidence, which can come from direct personal observation, others’ statements, experimentation, or the exercise of our rational faculties. It may very well be that some people have religious knowledge, even knowledge of god. I do not reject the possibility of religious knowledge, just as I do not reject the existence of knowledge, generally speaking. Faith, however, is an epistemic nightmare.
Modality
Modality, also known as modal logic, is the logic of possibility and necessity. Now that we have studied some basic logic, metaphysics, and epistemology, we are in a position to think more in-depth about possibility and necessity. You may recall that, back in the metaphysics section, I grouped abstract objects and possibilities into the same ontological category. My reasoning was that they are conceptually intertwined in a very deep sense.
Contradictions are always false. Tautologies are always true. Aside from mathematical propositions, examples of tautologies are “all bachelors are unmarried” and “murder is always wrong.” By definition, bachelors are unmarried males. Similarly, murder is typically understood as wrongful killing. But some propositions are neither contradictions nor tautologies. These are known as contingent propositions. They might be true or false, depending on how the world is, although the world might have been different.
The reigning theory of possibility and necessity is known as possible worlds theory. Under this theory, our world (the entire universe, not merely the Earth) is but one of many possible worlds. Our world is known as the actual world, but non-actual worlds also exist; existence is broader than actuality. According to possible worlds theory, for somebody to say that they “could have been a basketball star” is to state that there is a possible world just like our world, except in that world they are a basketball star.
Contradictions, because they are always false, are true in no possible world; it is impossible for contradictions to be true. In the most general sense of possibility, then, contradictions constitute the entire scope of the impossible, which is really just a limiting factor on which worlds are possible. Tautologies, on the other hand, are true in all possible worlds; it is impossible for necessary truths to be false. Thus, mathematics and formal logic are true in all possible worlds. Contingent propositions are true in some possible worlds and false in others.
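The skeleton of possible worlds theory can even be modeled in a few lines of code: treat each world as an assignment of truth values to propositions, and define necessity and possibility as quantification over worlds. The Python sketch below is only a toy (the three worlds listed are invented for illustration), not a claim about how metaphysicians actually formalize the theory.

```python
# Each "world" assigns truth values to a handful of atomic propositions.
worlds = [
    {"earth_is_round": True,  "you_are_a_basketball_star": False},  # the actual world
    {"earth_is_round": True,  "you_are_a_basketball_star": True},
    {"earth_is_round": False, "you_are_a_basketball_star": False},
]

def necessary(prop):
    # Necessary: true in all possible worlds.
    return all(prop(world) for world in worlds)

def possible(prop):
    # Possible: true in at least one possible world.
    return any(prop(world) for world in worlds)

def contingent(prop):
    # Contingent: true in some worlds, false in others.
    return possible(prop) and not necessary(prop)

star = lambda world: world["you_are_a_basketball_star"]
contradiction = lambda world: world["earth_is_round"] and not world["earth_is_round"]
tautology = lambda world: world["earth_is_round"] or not world["earth_is_round"]

print(possible(star), contingent(star))  # True True: possible and contingent
print(possible(contradiction))           # False: true in no possible world
print(necessary(tautology))              # True: true in every possible world
```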
This is possibility in its most general sense, sometimes referred to as “metaphysical possibility.” One more restricted sense of possibility is “physical possibility.” The difference is that metaphysical possibility recognizes that the laws of physics could have been different, while physical possibility operates within the structure of physical laws. Yet another restricted sense of possibility is “epistemic possibility.” While it is metaphysically possible for us to know exactly how many fish live in the sea, it is beyond our ability to measure; practically speaking, knowledge of how many fish live in the sea is epistemically impossible. A related sense of possibility is “human possibility” or “cognitive possibility.” While omniscience is metaphysically possible, it is beyond the capacity of any human mind (and, arguably, any mind).
Although there are many senses of possibility, philosophers tend to use the most general sense. Thus, when philosophers say that something is a possibility, they mean that it is metaphysically possible. Part of the reason for this convention is that metaphysical possibility is the widest sense: whatever is possible in one of the more restricted senses is thereby metaphysically possible. Sometimes, philosophers claim that anything conceivable is possible, with the caveat that contradictions are inconceivable. While we can recognize contradictions when we see them, it is literally incoherent to believe that a proposition and its negation are both true. But the conceivability definition of possibility has the practical problem that human imagination has limits. Just as we cannot know everything, we simply don’t have the brainpower to imagine all possible worlds in every minute detail. So the conceivable is narrower than the possible. Regardless, much of contemporary philosophy utilizes the possible worlds framework to analyze and develop other areas of thought.
Ethics
Earlier, we discussed ethics as the study of how to live well. Through virtue epistemology, ethics played a role in our discussions of philosophy of religion, epistemology, and faith. But merely getting the facts right is not the full scope of ethics. Ethics can never fully disconnect from epistemology, because we can always ask how we know a given ethical proposition. But, for ethics, epistemic concerns exist more as background considerations.
Ethics exists at many levels. The most obvious level is that of concrete action. Suppose, for example, that you are babysitting a small child. You could easily kill that child in a number of ways. But should you? Would it be ethical to do so? The answer is obviously no. Anybody with any moral sense recognizes that it is immoral to kill small children for no reason.
But not every moral question is that easy. Suppose that you are walking along a river and see somebody drowning. Is it ethical to simply keep walking? Do you have an ethical obligation to jump in the river and save the person? That would seem to depend on a number of factors, such as how big the river is, how fast it is moving, how well you can swim, the condition of your health, etc. It is certainly morally permissible to save somebody who is drowning, but it might not be morally obligatory. You could, however, have a moral obligation to call for help if you have access to a telephone, even if you do not have a moral obligation to physically save the person yourself. There are many, many borderline cases.
For borderline cases, how are we to discern what is ethical? This brings us to a second level of ethics—the theoretical. Many ethical theories have been put forward, purporting to give us a method for determining which actions are morally obligatory, morally permissible but not obligatory, or morally forbidden. Good theories provide ethical answers in hard cases.
We have already seen the application of an ethical theory in virtue epistemology. This is a specific implication of virtue theory, which is a general ethical theory that focuses on developing a virtuous character. Well-developed virtue theories describe various character traits, distinguishing between those that are virtuous (such as a tendency to form beliefs based on reasons) and those that are problematic (such as the affirmation of faith). And although virtue theory focuses less on particular actions than other ethical theories, it is reasonable for a virtue theorist to argue that, for any given choice, we are obligated to act in the way that will most effectively habituate us to have virtuous characters.
Another type of ethical theory is consequentialism, so-named because it cares about the consequences of our actions. According to consequentialism, an action is right if it brings about the best consequences. Utilitarianism is the most popular form of consequentialism. According to utilitarianism, we have a moral obligation to act in such a way that the most happiness is produced. Suppose you have to make a choice between two options. Most likely, one of those options will produce the most happiness, all things considered. This is because happiness comes in degrees, so the chance of two different choices producing exactly the same amount of happiness is infinitesimal. But suppose you are facing one of those rare situations in which both options would produce the same amount of happiness. According to utilitarianism, it doesn’t matter which one you choose. Either is permissible and neither is forbidden.
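For concreteness, here is a minimal sketch of the act-utilitarian decision procedure just described. The options and happiness numbers are invented placeholders; the procedure simply selects whichever option would produce the most happiness, and treats exact ties as equally permissible.

# A toy act-utilitarian decision procedure.
# Each option is mapped to the total happiness it would produce, summed over everyone affected.
def permissible_options(options):
    """Return the option(s) that produce the most happiness; exact ties are all permissible."""
    best = max(options.values())
    return [name for name, happiness in options.items() if happiness == best]

choices = {"keep walking": 10, "call for help": 85, "jump in yourself": 60}
print(permissible_options(choices))   # ['call for help']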
Unfortunately, utilitarianism has well-recognized problems. First, it is often difficult to predict all the effects of your actions. We are not omniscient, particularly about the future. Some effects simply cannot be known, while others would take so much time to fully research that doing so would prevent us from fulfilling other moral obligations. According to utilitarianism, then, in practice we cannot avoid moral guesswork, which is what we were trying to avoid by developing a moral theory in the first place.
Second, utilitarianism makes it difficult to account for our obligations to family and friends. If the only thing that matters is happiness, then it does not matter who or what is experiencing that happiness. We have the exact same obligation to make sure our neighbor is happy as we do our spouse. But this does not seem to match our ordinary moral intuitions because we tend to believe that we have special obligations to those who are close to us.
Third, utilitarianism is very demanding. Arguably, many people could produce the most happiness by giving all of their belongings to charity. Many parts of the world experience famine. Under utilitarianism, then, it would seem that we are morally forbidden from all sorts of ordinary entertainment, such as vacations or even going to the movies, because the money spent on such pleasures would produce more happiness if it were used to feed starving children in a foreign country. But, again, this does not seem to match our ordinary moral intuitions because we typically don’t fault people for taking vacations or going to the movies.
Finally, utilitarianism has multiple interpretations. Act utilitarianism requires us to examine the amount of happiness that would be produced by every specific act we might take. Partly because of the epistemic problems involved in such an endeavor, a second interpretation has been developed, called rule utilitarianism. Rule utilitarianism shortcuts the analysis by requiring that we act in certain types of ways in certain types of situations. In other words, it gives us generally applicable rules of thumb, such as the military directive to always follow the orders of one’s commanding officers. The problem with rule utilitarianism, however, is that sometimes it will be obvious that following the rule will cause more harm than good. The difficulty in applying act utilitarianism pushes us toward rule utilitarianism, while the fact that rules have exceptions pushes us back toward act utilitarianism. They collapse into each other.
A third type of ethical theory is known as deontology. Deontological theories focus not on consequences, but on clearly defined moral duties. One example of a deontological position is that humans should never lie. As some deontologists hold, lying is wrong, regardless of the consequences. Wrongness is a property of the action of lying, just as wrongness is a quality of murder. The Ten Commandments are a good example of a deontological worldview. Deontologists don’t just provide lists of moral rules, however. That, after all, wouldn’t quite take us from first-level moral choice into the second-level realm of theory. Instead, sophisticated deontologists try to provide an algorithm, or system, for determining what our duties are. One popular view urges us to consider what the world would be like if everyone were to act in the way we are considering acting. This is similar to following the Golden Rule, namely that you should act as you would want others to act. This is not necessarily because of the consequences of your actions; rather, it is a hypothetical universalization.
The primary problem with deontological theories is that, if we completely remove consequences from the equation, it is difficult to imagine why we would have moral principles. Morality, after all, is supposed to help us live good lives. It is tempting to understand deontological claims as a strict species of rule utilitarianism which does not allow exceptions, although deontologists reject this interpretation. Deontologists view consequentialists as principle-compromisers, while consequentialists see deontologists as foot-stampers.
Other ethical theories exist, but these are the most popular. They should suffice to illustrate what it is to have an ethical theory. Ethical theories are general sets of propositions that attempt not only to explain our ethical intuitions, but also to guide us in situations where those intuitions run out.
This brings us to the third level of ethics, namely metaethics. In one sense, metaethics is merely a higher level of ethical abstraction. Whereas ethical theories purport to abstractly describe and systematize our ethical intuitions and judgments, metaethics analyzes and critiques the very project of ethical theory-building. Some believe that metaethics is part and parcel of ethical theorizing, while others believe that something distinct occurs in metaethics that doesn’t occur at the level of specific ethical theories. Metaethics is a more abstract and integrated field than ethical theory. In particular, metaethics draws from metaphysics and philosophy of language to ask what we mean, or perhaps what we are doing, when we use evaluative terms such as “right,” “wrong,” “moral,” and “immoral.”
Myriad metaethical views exist. Different metaethical theories take different stances toward the nature of ethical truth. Moral realism is the most straightforward. It holds that there are indeed moral truths. Most versions of moral realism allow for the possibility of universal moral truths. Moral particularism, however, is a version of moral realism that rejects ethical abstraction because, while there are ethical truths, they are so situation specific that ethical theorizing is impossible. Cultural relativism holds that moral truths vary depending on one’s culture. Moral nihilism holds that ethical statements are not truth-functional. Expressivism, which can be considered a form of moral nihilism, holds that when we use ethical terms we only express personal judgments. Emotivism, a form of expressivism, holds that the judgments we convey when we use ethical terms are mere expressions of emotion. Metaethics is difficult. Understanding metaethics requires developing a host of concepts, particularly insights from philosophy of language. Because metaethics is so integrated, it requires further discussion in other areas before we are in a position to evaluate metaethical views.
Axiology
As we have seen, philosophy is a highly integrated subject. One way to understand ethics and metaethics is by broadening the scope of our inquiry. This allows us to analogize from one field to another, and to identify principles that might remain hidden from a more myopic perspective. Ethics and metaethics (along with aesthetics and political philosophy) are subfields of axiology. Axiology is the study of value. Unsurprisingly, then, it is often called value theory. Perhaps the most fundamental question of axiology is whether value is “objective” or “subjective.” These are very common terms in axiological literature. But what do they mean?
An object’s being objectively valuable might mean that it is valuable, regardless of whether any subjects consider it to be valuable. In other words, it might mean that the objectively valuable object would be valuable, even if no subjects ever existed. But this notion of objective value is quite mystifying. Why would a diamond have any value if no subjects were around to care about its shininess and durability? Under this interpretation of objective value, the same unanswerable question can be asked about the value of anything.
Subjectivity, therefore, is necessary for value. Objects are only valuable for or to subjects. Fruits and vegetables are valuable because they are nourishing and tasty. Precious metals are valuable because people enjoy making aesthetic objects out of them, such as jewelry, and also because they are useful for practical purposes, such as toolmaking.
But if value is subjective, does that mean whatever anybody cares about is valuable, merely for that reason? If Joe likes the idea of torturing children to death, does that make torturing children to death valuable? In a sense, the answer is yes. This sense, however, is extremely limited: it only refers to the fact that torturing children to death is valuable to Joe. And it may not be valuable enough to Joe ever to pursue, given other values Joe holds. So to say that an object or action is merely subjectively valuable is not saying much.
In order to speak intelligibly about value, then, we need some way to add an element of objectivity to value, without losing subjectivity in the process. Might the notion of intrinsic value do this job? Consider, for example, the natural beauty of the Grand Canyon. Some philosophers, particularly those with environmentalist leanings, would argue that it has intrinsic value. Intrinsic value, they tend to say, is value the Grand Canyon (and other natural objects) have in virtue of their structure, uniqueness, or other properties. Does it make sense that intrinsic value has some relation to subjectivity?
Yes, in a sense, because subjects have intrinsic value. This is because subjects are the sources of value. To be a subject is, partly, to have desires. To desire something is, in essence, to value it. Desires also beget other desires, in a means-end sense. An aspiring tennis player might desire, for example, to win a tournament. Training would help this person achieve that goal. Training and its various sub-components (such as hydration and a well-maintained tennis court) thus become instrumental desires of the tennis player. These desires constitute the value the person places on winning the tournament and the training necessary to do so. It is even possible that winning the tournament is itself an instrumental desire, such as when the person’s real goals are fame, money, and/or sex. Regardless, all value stems from desire.
It is also worth mentioning that subjectivity is not limited to humans. While humans are the only species to play competitive tennis, many other species are conscious. They also count as subjects. Dogs want to chase tennis balls. Birds want to have their feathers preened. Pigs want to roll around in mud. These are all instances of non-human desires, or valuing.
To the best of our knowledge, plants are not subjects. They do not have nervous systems. Some creatures are borderline, such as sponges, sea anemones, and jellyfish. Even if such creatures are conscious, surely their consciousnesses are very dim. Consciousness comes in degrees. And that means subjectivity comes in degrees, which means the intrinsic value possessed by different creatures also varies in degree. Individual humans, for the most part, constitute a greater, more powerful source of value than individual ants.
There are many interesting questions in philosophy of mind about how we can understand consciousness, particularly others’ consciousnesses. Are ant colonies or bee hives conscious? Is the sum total consciousness of all the bees in a hive a greater aggregate locus of value than that of an individual human? Does it depend on the human? Are artificial neural networks conscious? Is the internet conscious? While these questions are fascinating and worth pursuing, we don’t have to answer them to understand the basic facts that value springs from subjectivity, and subjectivities vary in intensity, so values also vary in degree.
So the notion of intrinsic value helps, in that it helps us see what it means for subjectivity to be the source of value. It does not, however, help for non-conscious entities. Just as with our first suggested definition of objective value, it is difficult to understand how non-conscious entities would have intrinsic value. Would the “intrinsically beautiful” Grand Canyon be beautiful if there were no subject to find it beautiful? No, it would not.
This difference between objective and intrinsic value also helps us see another way in which we can link objectivity and subjectivity. Traditionally, objectivity and subjectivity are thought of as opposites, almost like hot and cold. But there is another, more nuanced way of understanding subjectivity. Objectivity typically means that something is factual. It is, for example, objectively true that the Earth is round rather than flat. But aren’t there also psychological facts? Isn’t it a fact about a person’s psychology that they desire, say, chocolate? Isn’t that, then, an objective fact about the person’s psychology? There are facts about subjectivity.
The point of these questions is that the objective-subjective dichotomy collapses under scrutiny. Rather than being polar opposites, subjectivity is a category of objectivity. Even if a person has the subjective opinion that the Earth is flat, rather than round, it is also true, indeed objectively true, that the person believes the Earth is flat. Another way to understand this is that subjects don’t exist without bodies. Subjects are in some sense aspects, manifestations, or products of physical bodies. In other words, subjects themselves are a form of object—subjects are conscious objects. So, in a sense, subjective value is objective value: when a subject values, it is objectively true that the subject values in the way that he or she does.
But this does not seem like the type of objectivity we had in mind. When philosophers speak of objective value, they tend to want something more than subjective value. Indeed, they tend to want something more than intersubjective value. Suppose, for example, that every living human became addicted to heroin. In essence, this means that everyone would desire heroin—and, hence, that everyone would value heroin. Still, many philosophers would say, this does not mean heroin is valuable. In fact, they argue, heroin is harmful because it impairs humans’ ability to pursue other things that they value. Everyone may desire heroin, but that does not mean heroin is desirable. If such a population were to think deeply about the nature of heroin and its place in human life, these philosophers say, the population would (or at least should) reach a reflective equilibrium which, on the whole, rejects heroin as valuable.
This view certainly has its merits. One way to understand it is with reference to relational properties. A relational property is a property that an object has, but only by reference to another object. A key, for example, has the property of opening doors, but only as it relates to a lock. Similarly, some argue, values are relational properties between subjects and the world. Humans, for example, die when they eat large quantities of rat poison. Therefore, rat poison is bad for humans to eat and humans should not eat rat poison. Whatever uses rat poison may have, we should refrain from eating rat poison insofar as we value our lives. Similarly, insofar as we value our health, we should eat plenty of fresh fruits and vegetables.
This is the basic idea behind a promising and informative theory of value, which I will call the scientific theory of value. This theory holds that science can tell us what is valuable. The more we study science, especially anthropology, biology, psychology, and sociology, the more we learn about what it is to be human. We learn what sorts of actions and environments tend to result in long, healthy lives for humans. The results of this inquiry, so the scientific theory of value says, are the closest we will get to identifying value in any robust sense.
A major criticism of the scientific theory of value, however, is the widely held view that value statements, or “oughts,” are radically different from factual statements, or “is’s.” It is a common dictum in philosophical circles that “you can never derive an ought from an is.” According to this view, science cannot tell us what is valuable because science only studies what is, not what ought to be. I am puzzled by this view. My befuddlement derives from the same reasoning I used in deconstructing the objective/subjective dichotomy. If one purports to make a true statement about an ought, then presumably that statement itself would also be an is. Take, for example, the proposition that we ought not torture small children to death. That seems like a true statement. Some may disagree, but I believe it is a fact that we ought not torture small children to death. Notice that I just used the word is: it “is a fact that.” So ought statements are also is’s, at least insofar as they are capable of being factual.
But some philosophers deny exactly this. Some philosophers claim that there are no facts about values—or, at the very least, that facts about values are nothing more than subjective statements. Even if these philosophers accept that science produces knowledge, their denial that value statements are truth-functional allows them to consistently hold that is’s can’t imply oughts. This is similar to skepticism about value, but for these philosophers the issue isn’t whether they can know what is valuable, it’s that nothing is valuable. Theirs is a metaphysical position rather than an epistemic one.
This debate, which is sometimes called the cognitivist/noncognitivist debate, lies at the heart of metaethics. Cognitivists (such as realists and relativists) hold that value statements are truth-functional, while noncognitivists (such as expressivists/emotivists) hold that value statements are not truth-functional. And this debate can apply variously to value of all types, such as aesthetic and moral value. One could, for example, be a cognitivist about moral value and a noncognitivist about aesthetic value. But just as it is theoretically optimal to have a generally unified epistemology across all areas of knowledge, there are reasons to believe it is better to have an axiology that works for all value types.
Aesthetics
Aesthetic value is another type of value. In a simple sense, aesthetics is the study of beauty. But that makes it seem shallow, as though we are merely concerned with outward appearances. Another way of thinking about aesthetics is as the study of art. Some types of art, such as impressionism, are indeed outwardly beautiful, but other types of art, such as film noir, contain deeper themes. Art is such a broad word that it is difficult to define, which makes it problematic to use as a definition of aesthetics. Rather than attempting to provide a perfect definition of art or aesthetics, I will simply describe common aesthetic questions and positions.
One question is what constitutes good art. Some believe that the quality of art depends upon the difficulty of producing it. To make good art, in their view, one must spend years developing the skills necessary to make art that others cannot make. Consider, for example, Bob Ross, a beloved art instructor and television personality from the late 20th century who was extremely skilled at painting landscapes. Watching Bob Ross quickly and realistically paint mountains, trees, lakes, clouds, and other landscape features was quite impressive. He had clearly spent years honing techniques for paintings of this type. Very few artists could produce such paintings with the same quickness and ease. Under this view of artistic value, there is at least an argument that Bob Ross was a great artist.
But many philosophers believe this is far from the best way to understand artistic value. A major strain of aesthetic thought holds that creativity is the heart of art. Under this view, Bob Ross’s art, although technically proficient, is mundane and repetitive. If creativity is the true touchstone of aesthetic value, then the quintessential artists would be Pablo Picasso, a progenitor of cubism, or Jackson Pollock, the seminal abstract expressionist. Picasso’s cubism is characterized by building several different viewing angles into the same image, while Pollock’s abstract expressionism is defined by its haphazard, and often violent, “drip” technique. These innovative artists developed painting techniques that had not previously existed, paving the way for further artistic development. Although Bob Ross was a master at painting “happy trees,” his work simply did not inspire new artistic forms.
Another artist worth mentioning is Marcel Duchamp. Duchamp developed the genre of found object art. His most famous work, “Fountain,” is a porcelain urinal signed “R. Mutt 1917.” This work took essentially zero technical skill to create. Although its being a urinal made it particularly provocative, the concept applies to any object whatsoever. Found object art constitutes a complete rejection of technique as the central determinant of aesthetic value. Indeed, for pure found object art, technique has no role whatsoever. The primary significance of found object art is that it raises the question of what counts as art. Rather than focusing on technique, or perhaps on the artist’s intentions, found object art suggests that art might instead be an attitude, or way of viewing the world.
One particularly interesting example of found object art is the blank canvas. A blank canvas is a complete rejection of technique as the essence of artistic value. Viewing a blank canvas as a work of art is to recognize aesthetic possibilities. Blank canvases contain virtually infinite potential. Any and every painting that has ever been created, along with all paintings that will or might be created, are conceptually contained in the idea of a blank canvas. To understand a blank canvas as art is to recognize that one can take an aesthetic attitude toward any object, or to life generally, insofar as one habituates contemplation of creative possibilities.
While this discussion has primarily focused on painting and sculpture (if one considers found object art to be sculpture), it has analogues in other artistic media as well. John Cage’s 4’33” is a four-minute, thirty-three-second musical piece in which the performers play nothing at all. It encourages meditation on the question of what counts as music. Does silence count as music? How about ambient noise? Just as the blank canvas contains all visual possibilities, silence is pregnant with all auditory possibilities. The dance analogue would be a dancer simply standing or sitting still for the duration of the dance. The list goes on and on.
According to the creativity view, then, certain artworks don’t really count as art. Some theorists even go so far as to argue that, once we thoroughly understand a certain type of art, it ceases to be art. Put another way, when philosophy digests artistic creativity, and integrates new artistic developments into its aesthetic theory, art must find a new way of being creative in order to escape the inexorable hunger of philosophy. It is no accident that we refer to the experience of artworks as a form of consumption.
Bob Ross’s landscape paintings are again a good example. While the representational techniques of landscape painting were once artistic innovations, their use eventually lost its creative aspect, becoming instead a mechanistic reproduction of the same concept. And this is part of why Andy Warhol’s mass-produced “pop art” was theoretically interesting. It took mechanistic, consumer-oriented reproduction as its defining feature. By appropriating mechanized reproduction as a form of creativity, Warhol forestalled the ability of philosophy to comprehend and integrate his work. In a sense, Warhol’s pop art created aesthetic indigestion. Similarly, the contradictions of unpainted paintings, silent music, dance without movement, etc. are all ways in which art preserved its independence by rebelling against rationality. Indeed, abstract art as a whole can be understood as a resistance to the philosophical project of explaining art.
Aesthetics is a fascinating and complex field that includes not merely questions about what constitutes art, but what art’s functions are. Many believe that art is a form of expression. This expression can be conceptual, emotional, political, or any number of other adjectives. Art can teach moral lessons, unearth hidden facts about our psychologies, inform us about scandals, serve as a means of dissent, and help people work through trauma. The ways in which art can perform these functions are multifaceted and subtle, as are the various ways of understanding art’s value. Aesthetics overlaps with ethics, philosophy of mind, political philosophy, and various other areas of philosophy.
Philosophy of Language
Philosophy of language underwent tremendous development in the 20th century and is currently considered one of the major branches of philosophy. It has connections to linguistics, but is less empirical and more theoretical. It would be reasonable to say that philosophy of language is to linguistics as philosophy of religion is to religious studies. Philosophy of language focuses on abstract features of language such as the relationships between language, concepts, thought, and the world. At the center of philosophy of language is the subfield of semantics, which is the study of linguistic meaning. We have just seen one example of semantics: possible worlds theory is also known as modal semantics.
One reason philosophy of language is so intriguing is that it has particular bearing on what it means to be human. While some non-human animals have limited linguistic abilities, humans are distinguished by our extraordinarily complex and highly evolved ability to communicate, which includes our ability to make written records of our discoveries for future generations. While non-human animals often share our physical senses, they lack our capacity for abstract thought, which is made possible by language. In a sense, to understand the nature of language is to understand ourselves.
Language is a tool humans have developed. This is a core insight in the self-reflective enterprise that is philosophy of language. Understanding the conventionality of language is crucial to understanding philosophy of language. What we use our words to do, and how they do the work they do, are perhaps the field’s central questions. Investigating the functions of language tends to help solve, dissolve, or at least clarify, traditional philosophical problems.
Consider the free will debate. In the section on metaphysics, we discussed compatibilism and incompatibilism, which are competing positions on whether free will is compatible with determinism. Some people have a hard time understanding how free will could be compatible with determinism. But focusing on how the term “free will” and related terms such as “voluntary” and “coerced” are used helps us make sense of compatibilism.
Consider the following argument. If determinism is true, there is no free will. Determinism is true. So, there is no free will. But if there is no free will, then there is no moral responsibility. Thus, there is no moral responsibility. And if there is no moral responsibility, our punitive practices are not justified and we should abandon them entirely. Therefore, our punitive practices are not justified and we should abandon them entirely.
This argument, sometimes offered by hard determinists, has an intuitive force. If it is true that we should not be held responsible for acts that are not free, and none of our actions are free, then we should not be held responsible for anything. Praise and blame both go out the window. But the conclusion of the argument is so repugnant that something must be wrong with the argument. It is simply unacceptable to say that nobody deserves praise or blame for anything, no matter what heinous crimes a person commits. It would be highly inadvisable to simply let mass murderers continue roaming the streets.
Focusing on use helps locate the problem. It is indeed true that we do not hold people responsible for actions they were forced to take. Consider a gunman who threatens to kill another person if they do not commit some crime, such as hacking into a government computer. A duress defense is completely appropriate in such a circumstance. While we would hold the gunman responsible, we would not hold the hacker responsible. Brain tumors are similar. If medical evidence shows that an otherwise law-abiding citizen’s violent act was caused by a brain tumor affecting the parts of the brain responsible for aggression and impulse control, and that a successful surgery has since fixed the problem, it makes perfect sense that we would not find the perpetrator criminally responsible for their violent act.
The commonality in each of these cases is that, while these particular acts were not free, under the right circumstances they could have been free. In other words, we usually hold hackers and violent offenders responsible for their offenses. Free will, and all related terms, are part of the network of concepts we have developed for holding each other accountable. An action is free precisely when it is performed under conditions that make the action a candidate for praise and blame; an action is not free when it is performed under conditions that prevent the action from being a candidate for praise and blame. Roughly put, if most ethical, reasonable, law-abiding citizens would have been compelled to perform the illegal act in question, in similar circumstances, then it was not a free act.
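To make the rough criterion a bit more concrete, here is a toy sketch in which a short list of excusing conditions stands in for the counterfactual test about what most reasonable, law-abiding citizens would have been compelled to do. The conditions and cases are invented placeholders.

# A toy version of the compatibilist criterion sketched above.
# An act counts as free (a candidate for praise and blame) unless it was performed
# under conditions that would have compelled most reasonable, law-abiding citizens
# to act the same way.
EXCUSING_CONDITIONS = {"duress", "brain tumor affecting impulse control"}

def free_act(circumstances):
    return not any(c in EXCUSING_CONDITIONS for c in circumstances)

print(free_act({"ordinary motives"}))                        # True: hold the person responsible
print(free_act({"duress"}))                                  # False: the duress defense applies
print(free_act({"brain tumor affecting impulse control"}))   # False: not criminally responsible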
Free will, then, only makes sense when we apply it locally. To think of free will in such a way that our entire network of value-terms fails us is to misunderstand the role of free will. Asking whether we have free will in some grand, universal sense is at best a perversion of language, and at worst incomprehensible. I must admit to not understanding what hard determinists even think they are talking about when they say that, if determinism is true, there is no free will. Their definition of free will, because it is completely detached from the reason we have the term “free will,” makes about as much sense as saying that it is noon on the sun. Understanding the relationship between use and meaning, however, helps us see that incompatibilism is a confused, metaphysical quagmire. It is for this reason that most contemporary philosophers are compatibilists.
If this is correct, then focusing on use as a route to understanding meaning is promising. In fact, it is so promising that it inspired the philosophical movement known as ordinary language philosophy. If use-based semantics can successfully dissolve a problem as longstanding and intractable as the free will problem, then maybe it can help us see through our primitive metaphysical confusion on other matters. But the extent to which this project is successful is a matter for debate. A minority of philosophers believe that it misses the mark entirely, even for the free will problem. Some believe that, while it solves the free will problem, it doesn’t do much more than that. And some philosophers believe that use-based semantics, while not quite a philosophical panacea, at the very least clarifies traditional philosophical problems, if not philosophy as a whole.
Another insight from use-based semantics can help with ontology. Back in the metaphysics section, I mentioned monism, or the view that everything is really one sort of thing. This was contrasted with dualism, or the view that there are two fundamental substances. Historically, matter and mind have been the two primary candidates. More colloquially, dualism is often understood as the view that mind and matter are separate, while monism is typically the view that everything is physical (physicalism) or that everything is material (materialism). More rarely, monists hold that everything is mental.
I have claimed to believe that mind and matter are distinct because mind is not reducible to matter. Now that we have studied some basic philosophy of language, I can make the argument more persuasively. The key insight is that many of our words, particularly adjectives and nouns, only have meaning insofar as they make distinctions. They gain meaning by differentiating between objects or types of objects. Monisms attempt to apply a single term to everything, which renders the term unable to make distinctions, thereby destroying the term’s use, and hence its meaning. Consider the adjective “physical.” Physicalists claim that everything is physical, including minds. But if everything is physical, then what work does the word “physical” do? In ordinary language, the physical is contrasted with the mental. This distinction has uses. Rocks are physical, but thoughts aren’t. Claiming that the mind is physical eliminates this distinction, making one wonder why one would ever use the term “physical.”
Another way to think about this is through the dual concepts of extension and intension. These are technical terms with very specific meanings. The extension of a term is all of the specific objects to which it refers. The intension of a term is the abstract concept tying together the objects in the extension. Consider, for example, the word “planet.” Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune are the eight planets in our solar system. Many other planets exist, of course, but I won’t try to name them all here. The important point is that the set of all planets is the entire extension of the word “planet.” The intension of “planet” is something like: a large, round, dense ball of matter orbiting a sun.
“Planet” is a particularly useful example because scientists used to believe there were nine planets in our solar system. Pluto used to be classified as a planet, but in 2006 it was reclassified as a dwarf planet when the definition of “planet” was tightened to require that a planet clear its orbital neighborhood, something Pluto, being so small, has not done. This is interesting because it shows the relationship between the extension and the intension. When Pluto was considered a planet, it was possible for planets to be as small as Pluto. But with the redefining of “planet,” Pluto was excluded. Another way to think about it is that the exclusion of Pluto, by changing the extension of the term “planet,” thereby altered the term’s intension. Put more abstractly, there is interplay between a term’s intension and extension.
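For those who like to see the interplay spelled out, here is a toy sketch in which the intension is represented as a definitional rule and the extension as whatever the rule picks out. The orbital data is simplified to a single flag, and the two rules are invented caricatures of the pre-2006 and post-2006 definitions.

# A toy model of intension and extension for the word "planet".
# The intension is a definitional rule; the extension is the set of objects the rule picks out.
bodies = {
    "Mercury": True, "Venus": True, "Earth": True, "Mars": True,
    "Jupiter": True, "Saturn": True, "Uranus": True, "Neptune": True,
    "Pluto": False,   # has not cleared its orbital neighborhood
}

def extension(intension_rule):
    return sorted(name for name, cleared_orbit in bodies.items() if intension_rule(cleared_orbit))

def old_intension(cleared_orbit):
    return True            # pre-2006 caricature: any sizeable round body counts

def new_intension(cleared_orbit):
    return cleared_orbit   # post-2006 caricature: must have cleared its orbital neighborhood

print(extension(old_intension))   # nine bodies, Pluto included
print(extension(new_intension))   # eight planets, Pluto excluded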
Consider how this might apply to the term “physical.” Suppose we first use “physical” to refer only to the non-mental. In that case, the non-mental is its extension. It also has a corresponding intension, which is constituted by the best theoretical unification we can derive from the collection of everything that is non-mental. But if we then assert that the word “physical” describes everything, even the mental, we change the term’s extension, which has ramifications for its intension. The meanings of terms are not static; they change with changes in use. Physicalists err in believing they can keep the intension everyone is familiar with, despite a radical extensional change. Ironically, by elevating their favored word to universal status, physicalists render it useless, and hence meaningless. The same is true for all monisms.
One final way of seeing this point is to ask what can meaningfully be said about everything. And by everything, I mean absolutely every single thing—minds, matter, time, space, properties, propositions, possibilities, abstract objects, the contents of imaginings, etc. One might frame this as the question of whether one can say anything meaningful about “being,” thought of in its most general sense. The answer is no. We can break down “everything” into its components, but when we try to make statements that are true of everything, we run up not only against the limits of language, but also against the limits of conceivability.
It’s worth mentioning that the study of the relationships between our most general concepts, such as actuality, being, existence, everything, and reality, is its own philosophical subfield, known as metaontology or metametaphysics. It is a dense fusion of logic, metaphysics, and philosophy of language. Although I have read into this field and thought much about it, in the end all I can really add is the following joke: What’s the difference between reality and actuality? Reality is how things are, while actuality is what you convey to people when you correct them. Gotta love metaontological humor, right?
Getting back to use-based semantics, it also confirms what we learn from Descartes, namely that false knowledge exists. Descartes posed the question through methodological doubt, essentially asking what the scope of our knowledge would look like if knowledge required no possibility of error whatsoever. Because that method resulted in us having essentially no knowledge, we were able to conclude that knowledge is compatible with error.
But there is another route to the same conclusion. We can observe commonly accepted uses of the word “know” and its variants, then determine the term’s meaning by theoretically unifying the examples. This is the process of deriving the intension of the word “knowledge” from its extension. Through this process, we learn that it is acceptable for a person to claim knowledge when they have strong, albeit fallible, reasons for believing what they claim to know. In other words, an extensional analysis reveals that the application conditions for attributions of knowledge are met even when we are not 100%, unquestionably certain.
Consider again the example we discussed earlier of the sports fan who supports the best team in the league and claims to know that this team will beat the worst team in the league. If sports analysts give that team a 99% chance of winning, the sports fan is justified in claiming this knowledge. Science, after all, is typically satisfied with a 95% confidence level. But what about when the rare upset occurs? Do we then say that the fan of the best team didn’t really know that the highly favored team would win, because that turned out to be false? According to a use-based analysis, he did know; he was just wrong. His use of the term was socially acceptable and met all application conditions. It counted as knowledge, even though it turned out to be false: it was false knowledge.
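Here is a minimal sketch of that use-based account of “knows,” with the numbers invented for illustration. The attribution is licensed by sufficiently strong justification; whether the proposition later turns out false is a separate question, which is how “false knowledge” becomes possible.

# A toy model of the use-based account of "knows".
# A knowledge attribution is licensed by strong justification; truth is settled separately.
KNOWLEDGE_THRESHOLD = 0.95   # illustrative cutoff, echoing the confidence level science often accepts

def counts_as_knowledge(confidence):
    return confidence >= KNOWLEDGE_THRESHOLD

fan_confidence = 0.99
favored_team_won = False     # the rare upset occurs

licensed = counts_as_knowledge(fan_confidence)
print(licensed)                           # True: the fan's claim to know was licensed
print(licensed and not favored_team_won)  # True: the claim counted as knowledge yet was false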
We can analyze the term “truth” in the same way. We call a proposition true when we think we have enough justification for believing it. Surely, when a team has 99-to-1 odds, nearly everybody with knowledge of the situation takes it to be true that the heavily favored team will win. The fact that there is such strong evidence for the proposition justifies individual judgments, and the social judgment, that it is true that the heavily favored team will win. If somebody were to make the claim that the heavily favored team was going to win, very few people would feel the need to argue, and nearly all would agree. But when the underdog wins, the oxymoronic possibility of false truths emerges, which is even more radical than the notion of false knowledge. And insofar as truth is part of the definition of knowledge, this might explain where the notion of false knowledge comes from. According to use-based semantics, truth and knowledge are flawed ideals which primarily function to help us communicate to each other our level of confidence in various propositions.
Since use-based semantics seems to endorse false truths, however, we have reason to reject use-based semantics. As we discussed in the logic section, contradictions imply anything and everything. Something cannot be both true and false at the same time. It is prudent, then, to consider other semantic theories. Two other types of semantic theories are “coherentist” and “referential” theories. Coherentism conceives of linguistic meaning as a complex set of relationships between the various parts of our language—it focuses on how our terms cohere with each other. Referential theories try to ground language in the physical, or perhaps in the senses, focusing on “reference,” where what a term “refers to” is roughly understood as the term’s extension. But coherentist and referential theories don’t play well together; in other words, the concept of reference does not cohere well with coherentism because referring terms gain their meaning by what they refer to rather than through the linguistic network.
No semantic theory is perfect. This is at least partly because we have to use language to understand language; we cannot step outside language and analyze language from a non-linguistic perspective. To use a common metaphor, it is as though we are stuck at sea on a boat, forced to repair our boat plank by plank rather than building it from scratch. Extending the metaphor, studying philosophy of language is like studying a schematic diagram of our language. We learn about language’s structure and how that structure enables language’s function. We also learn where language might break down and, if we are lucky, we might discover how we can improve language. Clearer concepts beget clearer thinking.
Ways of improving language are called “revisions.” Revisionist philosophies of language argue for changes, sometimes large changes, in how we use our terms. Amending our definition of “planet” to exclude smaller bodies such as Pluto is an example of linguistic revision. One small suggestion I have is that we begin using “suicide” as a verb. Rather than saying that somebody “committed suicide,” we should merely say that they “suicided.” The word “committed” carries with it a connotation of blame. It is particularly cruel and insensitive to blame somebody who was in such dire straits that they decided to kill themselves. By verbing suicide, we encourage empathy for the emotional issues involved. And yes, using the noun “verb” as a verb—verbing verb—is yet another revision.
Normativity
Now that we have discussed the fundamentals of philosophy of language, we are finally in a position to discuss metaethics. But a crucial word is missing: “normativity.” This word is commonly used by philosophers, with various interpretations. At its most basic level, “normativity” refers to the existence of social norms. One norm is that people in our society provide overnight guests with their own towels. That is a matter of etiquette. Another norm is that people in our society tend to look after their elderly parents. That is a matter of morality. And yet another norm is that people in our society drive on the right side of the road. That is a matter of law. Etiquette, morality, and law are all normative.
There are also linguistic norms. People in our society tend to say “hello” when they answer the telephone, and to say “excuse me” or “I’m sorry” when they accidentally bump into somebody. In a sense, all of language is normative because it is a conventional set of behavioral practices regarding how we signify meaning through noises and writing.
But philosophers tend to mean something more when they discuss normativity. Rather than simply describing a set of practices, or how things happen to be done, philosophers often use the term to describe the action-guiding character of our practices. We say “hello” when we answer the phone, in part, because that is what we are expected to do, or are supposed to do. The same goes for the other norms we just identified, as well as the enormous number of norms we have not specifically described. The essence of normativity, these philosophers hold, is the feeling of it being somehow good, or right, to behave in certain ways, which includes using words in the way that we do.
One way of understanding this is to think about etiquette as important, morality as more important, and law as most important. Law, under this view, is morality on steroids. While morality deals with the ways in which we should behave, law specifies the ways in which we must behave. Morality is an informal system which regulates behavior for the social good; law is the formalized, structured system for regulating, through public coercion, the behaviors we consider to be crucial for society to flourish.
But some have a knee-jerk reaction to this view, holding instead that morality is what we are truly after, and is itself the source of law. Debates about the relationship between law and morality are endemic to philosophy of law, which is a branch of axiology. A major argument in that field is that sometimes laws are unjust, such as Nazi law, and when they are we have a moral duty to change the law. Some philosophy of law theorists go so far as to claim that, since morality is an inherent aspect of law, Nazi law did not even count as law.
While I do not pretend to solve those problems here, it is nonetheless useful to introduce a term of my own design: should-musts. By introducing should-musts, I intend to create a term for evaluating laws. We should have a law against killing small children. That is a should-must: it should be the case that we must not kill small children. But Nazi laws, which required killing small children, are not a should-must: it should not be the case that we must kill small children. Nazi laws could be called should-not-musts, or perhaps should-must-nots. The point of this exercise in normative revisionism is that we need some way of distinguishing between different types of normativity, because we can assess one normative realm from the standpoint of a different normative realm.
Going further with this line of reasoning, I now introduce the term should-shoulds. Consider the United States circa 1800. At that time, slavery was considered both legal and morally justified. According to at least one metaethical theory, namely a simple form of cultural relativism, slavery in the United States circa 1800 was not only legal, but it was also morally permissible, and perhaps even morally required. From our cultural perspective, however, in the 21st century United States, we say that slavery is morally forbidden. While it may have been conventionally true in the United States circa 1800 that slavery should have existed (or at least should not have not existed), from today’s perspective slavery in 1800 was not a should-should. From our perspective, slavery in the United States circa 1800 was a should-not-should, or perhaps a should-should-not. In other words, we believe that slave-owning societies have the wrong values.
To put the point more generally, we need some way of critiquing current social values. And this thought can help us navigate the various metaethical theories on the market. At their heart, metaethical theories seek to guide us through the maze of normativity, but they usually encounter dead ends. As usual, philosophical theories regarding complex topics are rarely free of problems. Cognitivist theories (which hold that evaluative propositions are truth-functional) have difficulty explaining the relationship between ethics and subjectivity, while noncognitivist theories (which hold that evaluative propositions are not truth-functional) struggle to provide reasons why we should care about ethics.
At the very least, metaethical theories are subject to critiques of inconsistency, just as are theories in any other field. With this in mind, let us develop a slightly more nuanced form of cultural relativism. Consider again the United States circa 1800. This was well after the Declaration of Independence, which affirmed our nation’s belief in every man’s right to “Life, Liberty, and the pursuit of Happiness.” It is difficult to understand how slavery can coexist with such a commitment. On its face, it seems like a contradiction.
This isn’t a problem for simple cultural relativism because simple cultural relativism asks the following question: “For any evaluative judgment, what would be the result if we took a poll of the population in question?” So, in the United States circa 1800, if a poll for the question “do you support life, liberty and the pursuit of happiness?” came back yes for the majority of the population, then life, liberty, and the pursuit of happiness were valuable in the United States circa 1800. Similarly, if a majority of the population would have said that slavery is morally permissible, then slavery was morally permissible in that culture, regardless of the apparent contradiction between these values.
This is a rough analysis for a couple of reasons. First, a bare majority is hardly determinative of social judgment. It probably makes more sense to say that some significant threshold must be met, such as 55% of the population, with the value being indeterminate within some specified range surrounding 50/50. Second, some individuals in the population might count more than others, such as intellectuals, lawmakers, or other social leaders. Questions about how to operationalize and measure social value judgments are practical problems for sociologists. For our purposes, all that matters is that it could be done.
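For illustration, here is a minimal sketch of such an operationalization, with the threshold chosen arbitrarily. A practice comes out permissible, forbidden, or indeterminate depending on where the poll lands.

# A toy operationalization of simple cultural relativism.
THRESHOLD = 0.55   # illustrative cutoff; roughly 45-55% approval counts as indeterminate

def cultural_verdict(approval_fraction):
    """Classify a practice based on the fraction of the culture that approves of it."""
    if approval_fraction >= THRESHOLD:
        return "permissible"
    if approval_fraction <= 1 - THRESHOLD:
        return "forbidden"
    return "indeterminate"

print(cultural_verdict(0.80))   # permissible
print(cultural_verdict(0.50))   # indeterminate
print(cultural_verdict(0.10))   # forbidden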
The more nuanced cultural relativism I have in mind, which I call reflective relativism, does care about the contradiction between slavery and a commitment to life, liberty, and the pursuit of happiness. It is a practical contradiction to hold liberty for all as a fundamental value, yet at the same time to support slavery. Either the society does not really care about life, liberty, and the pursuit of happiness, or the society does not really support slavery.
Cultural dissonance of this type is best worked through with reference to the culture’s other beliefs and values. One way that the United States circa 1800 attempted to resolve this tension was by designating blacks as sub-human. Blacks were often compared to monkeys or apes, and theories abounded about the biological differences between the “species” of blacks and whites. Phrenology, or the “science” of determining intelligence based upon skull structure, was a notable manifestation of this attempted distancing of slavery from the society’s fundamental ethical commitments. Eventually, it became clear that the contradiction was ineliminable and, the values of life, liberty, and happiness being more deeply held, slavery had to go. This movement toward equality, which came to a head with the Civil War, was later reinforced by various scientific discoveries showing that blacks and whites are indeed members of the same species, with common ancestors and virtually identical biology.
From the perspective of reflective relativism, then, it was a should-should for the United States circa 1800 to overcome slavery. Because the ethical principles which slavery violated were so fundamental to the culture’s values, the culture could be critiqued internally. In other words, reflective relativism has the capacity to say that a culture is wrong about its own values, if those values are inconsistent with what we might call a reflective cultural equilibrium. Reflective relativism can also be applied to language, insofar as presuppositions or implicit values in one area of a culture’s language violate more deeply held commitments of that culture. And this is particularly interesting for uses of normative terms.
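As a crude sketch of this internal critique, consider the following toy model, in which the values, weights, and conflicts are invented placeholders. A practice is flagged when it conflicts with one of the culture’s more deeply held commitments.

# A toy sketch of reflective relativism's internal critique.
# A practice is flagged when it conflicts with a more deeply held value of the same culture.
deep_values = {"liberty for all": 0.9, "life": 0.95}          # weights stand in for how deeply held a value is
practices = {"slavery": ["liberty for all"], "public libraries": []}

def internal_critique(practices, deep_values):
    """Return the practices that conflict with the culture's own deeper commitments."""
    return {practice for practice, conflicts in practices.items()
            if any(deep_values.get(value, 0) > 0.5 for value in conflicts)}

print(internal_critique(practices, deep_values))   # {'slavery'}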
Normative terms are unusual in that they do not pick out objects in the way that some terms do. When we name something, such as a “tree,” we can often go look at it, touch it, smell it, and experiment on it. But quintessentially normative terms such as “good,” “bad,” “right,” and “wrong” aren’t like that. While there are physical particles such as protons, neutrons, and electrons, there are no normative particles. Some philosophers even jokingly call hypothetical moral particles “morons.”
But just because normativity isn’t physical in a straightforward sense doesn’t mean that it isn’t real, or that evaluations are always false. I favor a voting model of normativity. Our normative terms are our own creation. The meanings of the terms “ethical,” “unethical,” “moral,” and “immoral” are for us to decide. At the very least, there are facts about what constitutes correct use of the English language. It would clearly be false, for example, to say that it is always immoral to eat. Everybody understands that eating is necessary for survival. For this reason, I reject noncognitivism, even though I appreciate the role that subjectivity, particularly desire, plays in determining what we individually care about, and in how our desires fuse together to form the socio-linguistic-behavioral complex that is normativity.
Because there are at least some clearly defined application conditions for uses of normative terms, I identify as a cognitivist: I believe that at least some ethical statements are capable of being true or false. While some may be borderline, or indeterminate, it seems ridiculous for us to say that there is simply no fact of the matter about whether it is morally acceptable to torture babies to death. At the very least, everyone you know would disagree. Moreover, if you were to torture a baby to death, and people found out about it, you would be shunned. You might also be imprisoned or killed. Some disagree that this counts as objectivity, but if normative propositions cannot be true in this way, then it is very difficult to imagine what would be required to make normative statements true.
Reflective relativism is a form of cognitivism which respects the conventionality of normativity. Moreover, reflective relativism is consistent with a scientific investigation of which social practices tend to produce social flourishing. There is no logical requirement that different cultures actually have different normative systems. Assume that human biology and psychology are uniform enough that the same sorts of ethical principles happen to develop universally, in all societies. Reflective relativism could still hold that values are relative to cultures in a deep, integrated sense; there just wouldn’t be much variation between cultures. All that reflective relativism requires is that, if there were cultures which valued differently, or in other words used evaluative terms differently, those uses would be true in those cultures, as long as they were consistent with the other values held by those cultures.
As a final question, you might ask what this means for you, as an individual. If your society supports practices with which you disagree, do you have to endorse those practices? In a sense, the answer is yes. If you break the law, for example, it is prudent to expect negative consequences to follow. There can also be negative consequences for acting immorally, even if you personally don’t agree with society’s moral judgments. If conforming counts as endorsing, then in some ways you are indeed compelled to endorse the status quo.
In another sense, however, the answer is no. Because normative truths are largely conventional, you have a say in how normative conventions evolve. A vegan, for example, might strongly claim that any use of non-human animal products, including using animal excrement to fertilize crops, is immoral. That claim is false because the vast majority of our culture supports symbiotic relationships between humans and non-human animals, and our culture has no more deeply held value or set of values that conflicts with such relationships. All it takes is one overburdened sheep, which benefits from being shorn, to disprove veganism’s universal claim.
Even so, when a vegan argues that we should never use non-human animal products, perhaps because it violates their autonomy, the vegan is adding his or her voice to the mix. The vegan is making it known that, in his or her opinion, we should-should ban any and all use of non-human animal products. Even though it is false, according to our normative language and practices, that it is always immoral to use animal products, this fact is only constitutively true: its truth is constituted by the fact that we, as a society, choose and develop our norms. The vegan urges us to change our language and our practices, in effect voting to change the facts about what we should do. The vegan’s use of “should” is wrong, but that does not mean it could never become right. This is one connection between subjective and objective value.
In general, then, each of us has a say in our shared values. In various ways, perhaps through linguistic revisionism, we project our values onto the world and make them real. As I argued earlier, subjectivity is a form of objectivity. And intersubjectivity is even more objective than mere subjectivity, partly because truth is itself a conventional term. In this sense, philosophy is continuous with psychology, sociology, and politics. We achieve wisdom, both as knowledge and as an action-guiding principle, not only through our own experiences, but also through the sustained, open-minded investigation and integration of a wide variety of academic fields, which are themselves built upon generations of human experience, inquiry, innovation and reflection.