We all have a set of beliefs. Our beliefs guide us and inform our decisions.
We hold some of those beliefs without sufficient justification. Some are held despite contradictory evidence. It would seem useful to conceptualize a Venn diagram that represents all beliefs that an individual holds as true, and then within that set, subsets of belief that are held with sufficient justification. We may call these subsets "knowledge".
It has been said that knowledge is "justified true belief". This, however, implies that we have access to absolute Truth (with a capital "T"). While this may be argued to apply to "Truths" such as math or abstract concepts, it cannot be applied to scientific concepts, which, as we shall see, can only be held as provisionally true. To avoid confusion, let's simply refer to "knowledge" as "justified belief".
Perhaps we can then further subdivide knowledge into categories. One may be those justified beliefs that are not dependent on observation, induction and deduction. These may be called a priori. According to the Internet Encyclopedia of Philosophy, "A given proposition is knowable a priori if it can be known independent of any experience other than the experience of learning the language in which the proposition is expressed, whereas a proposition that is knowable a posteriori is known on the basis of experience."
The other broad category of knowledge may be called a posteriori. "(A) proposition that is knowable a posteriori is known on the basis of experience."
A priori knowledge may include one's thoughts and abstract ideas. Descartes (see below) realized that he could be absolutely sure of his own thoughts and existence. An individual's thoughts concerning his or her own existence, tastes and preferences cannot be doubted by the individual. "I like blues" may reflect an individual's taste in music. Others would have to take the individual's word for it, but the individual can know his or her own thoughts without doubt.
Abstract knowledge includes concepts such as mathematics. "1+1=2" is true regardless of one's thoughts about it. Abstract concepts are trivially true. Consider democracy. Democracy is a defined concept. It may have originated in someone's thoughts, but we all now recognize the concept by its definition. We will discuss this later in other parts of the site (see Science and Morality).
Empiric knowledge requires observation. In the What is Science section, we see that facts are "confirmed observations". Laws are observed relations between entities (e.g. force = mass x acceleration). Theories come from hypotheses that have survived proper scientific scrutiny. These all exist in a way that is verifiable. We can only hold empiric knowledge tentatively. It is subject to revision when faced with new information.
Note that there may be conceptual overlap in the types of knowledge. For instance, some have debated whether mathematics was invented (abstract) or discovered (empiric). As we will see in the Science and Morality section, debates have occurred concerning whether science can determine values. As we shall see, these questions really come down to definitions. It seems likely that there will always be some disagreement on such things, so for now, the diagrams may overlap somewhat.
However, to the philosopher, the real division to consider is that between "Is" and "Ought".
Philosophy and Science are related, but different concepts.
As health care providers, we are concerned with some basic philosophical questions about knowledge and the information we use. What should we consider to be knowledge? How should we get our information? What should we do with the information? How should we decide if the information is useful?
Philosophy literally means "love of wisdom". For our purposes, it is concerned with "ought" or "should" questions. What should we do? Why should we do it? Why should we think something? How should we think about it? These are moral questions that guide our decisions and behavior.
Science is a way of knowing what "is". It is concerned with the actual state of things. The nature of science is expanded upon in the What is Science section. For now, let's consider it as a source of information, or a 'way of knowing'. Over the years, much confusion and controversy has stemmed from the failure to differentiate "is" from "ought".
Philosopher David Hume, in his Treatise of Human Nature, pointed out that, in nature, if something is a certain way, then it doesn't necessarily follow that it ought to be that way. 'Hume's Guillotine' (or just 'Hume's Law') put a wedge between descriptive statements of fact ("is") and prescriptive statements of value ("ought"). He observed that moral authorities often begin their arguments describing the world as they see it, and then, almost imperceptibly, conflate their descriptions with how the world ought to be.
Hume observed that an author, when writing about ethics..."makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not."
A concept related to Hume's Law is the Naturalistic Fallacy, coined by philosopher G.E. Moore (Principia Ethica, 1903), which states that a thing is not necessarily good just because it is natural. The concept of "goodness" is a moral concept concerning human values. The natural state of the world may or may not cohere with the moral state of "goodness". Hume's Law and the Naturalistic Fallacy are important concepts for decision makers to understand. Scientific facts, laws and theories are important sources of information for policy makers, but ultimately policy decision making is the subject of Philosophy and Ethics.
For instance, Natural Selection is the current foundational theory of modern biology. It is supported by broad lines of evidence that converge on the conclusion that Natural Selection is the best explanatory model for the evolution of species. In the early 20th century, the theory of Natural Selection was used to justify Eugenics programs in Europe and America. Proponents mistook the 'is' of a natural process for an 'ought' of a value system. The results were unspeakable. Clearly, proponents of Eugenics did not understand Hume's Law.
However, there is a connection between knowledge and philosophy. Without knowledge, we would have no information with which to make value judgments. Philosophy and science are different, but they are useless without each other. As we will see below, their relationship is not exactly straightforward.
(Although, observation-based belief may not always be intuitive, even for Aristotle. It is said that "Aristotle maintained that women have fewer teeth than men; although he was twice married, it never occurred to him to verify this statement by examining his wives' mouths.")
Philosopher Bertrand Russell put it this way:
"Science is what you know, philosophy is what you don't know."
** Reason should be informed by observation.
Philosophers recognize several "ways of knowing" besides empiricism. Proposed systems of Ethics may provide knowledge about morality and how one should behave. Empathy allows one to understand how others are feeling. Aesthetics describes our awareness and feelings about the immediate surroundings.
Non-philosophers have favored other "ways of knowing".
On this site, we are concerned with obtaining factual knowledge to inform our practice of medicine. Consider these systems of medicine:
1. Tradition Based Medicine - Should knowledge come from tradition? Should we adopt practices because they have been used over and over again throughout history?
2. Experience Based Medicine - Should knowledge come from personal experience? If it worked for me before, should I continue doing it?
3. Authority Based Medicine - Should knowledge come from authority figures? If a prominent figure in our culture promotes an idea, should we adopt it?
4. Evidence Based Medicine - Should we get our knowledge by making observations, forming general ideas from the observations, empirically testing these ideas, keeping the ones that work, discarding the ones that don't, and then following the evidence wherever it leads?
Hopefully, readers here are drawn to the last proposal. It seems reasonable to look to science as a source of knowledge to inform our clinical practice. However, this is a relatively recent phenomenon. In 1910, Abraham Flexner studied the state of affairs of medical education in America. He found that scientific thinking was not a common thread; rather, there were many "sects", each driven by some preconceived idea.
"Prior to the placing of medicine on a scientific basis, sectarianism was, of course, inevitable. Every one started with some sort of preconceived notion; and from a logical point of view, one preconception is as good as another."
Today, most agree that science, evidence and compassion are the "preconceived" notions that should inform the practice of medicine.
We naturally seek patterns in things encountered in our environment. It is natural to attach significance to those patterns. Occasionally, the pattern will be real and recognition of such a pattern proves beneficial. It is natural to form broader ideas from individual observations.
It is not so natural to differentiate real patterns from randomness, or to know which ideas are true and which are false. Michael Shermer, in his book The Believing Brain, expands upon the notion of pattern seeking and belief building. There he argues that we cannot help but link observations together into patterns, thereby forming beliefs. We form the beliefs first, and then go back to justify them.
Unfortunately, we are biased towards our beliefs and often justify them with fallacies.
** The process of forming new, broader ideas from perceived patterns is called induction.
Induction takes the thinker from the known to the unknown. It is speculative. Inductive reasoning is not certain. It is prone to error. However, without induction, we cannot formulate new ideas at all.
Some like to use the term 'induction' specifically to refer to statistical inferences. For example, if 90% of red-haired ballet dancers wear seat-belts when driving, and Susan is a red-haired ballet dancer, it would be reasonable to infer (tentatively) that Susan wears her seat-belt. Thus induction takes observed data and known statistics to infer a conclusion that, statistically, may be true.
A doctor may counsel a patient about risks using inductive reasoning. A child with a sore throat, who also presents with fever, no cough, white patches on the tonsils, and very swollen lymph nodes in the neck, has a greater than 50% chance of having "strep throat" (Centor Score of 4). It is reasonable to infer that the cause of the sore throat is Group A Streptococcus and to begin treatment.
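This kind of statistical induction can be sketched as a toy calculation. The sketch below is illustrative only: the score-to-probability mapping is a set of assumed round numbers for demonstration, not clinical figures, and the function name is hypothetical.

```python
def centor_score(fever: bool, no_cough: bool,
                 tonsillar_exudates: bool, tender_nodes: bool) -> int:
    """Sum the four classic Centor criteria, one point each."""
    return sum([fever, no_cough, tonsillar_exudates, tender_nodes])

# Assumed, illustrative probabilities of Group A strep by score
# (published estimates vary by study and population).
APPROX_STREP_PROBABILITY = {0: 0.03, 1: 0.07, 2: 0.15, 3: 0.35, 4: 0.55}

score = centor_score(fever=True, no_cough=True,
                     tonsillar_exudates=True, tender_nodes=True)
# A tentative inductive inference, not a proof of diagnosis:
likely_strep = APPROX_STREP_PROBABILITY[score] > 0.5
```

As with Susan's seat-belt, the conclusion is held tentatively: the inference is only as good as the base rate behind it.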
Sometimes, statistical data is not available. When we devise a likely explanation for raw data, without any other information, we are essentially making a guess. Guesses are sometimes the only kind of inferences possible. Critical thinkers will choose their guesses based on the most likely scenario. For instance, if one sees brown broken glass on the side of the road the morning after a big football game, a likely explanation might be that a partying fan tossed a beer bottle out of the car window the day before. Of course, there may be other explanations. Someone may have purposely scattered the brown glass in hopes of causing flat tires. However, this and other alternative explanations require more assumptions. They are less likely.
A doctor may be unaware of published statistics when making a diagnosis. All she may have are a set of symptoms and physical exam findings. Crushing chest pain with exertion in smoker with known heart disease and a normal chest x-ray may be due to angina. It also may be due to heartburn, pneumonia, a sore muscle or other issues. In this case, the doctor makes an inference to the best explanation from the raw data that is present. She may consider heartburn and sore muscles, but she puts angina at the top of the list (differential diagnosis). Angina fits the data without having to make new assumptions.
When we infer to the best explanation, we essentially use Occam's Razor. This will be discussed in the What is a Skeptic? section.
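The "fewest assumptions" heuristic can be sketched crudely in code. The assumption counts below are made up purely for illustration; in real reasoning, counting assumptions is itself a judgment call.

```python
# Hypothetical assumption counts for the broken-glass example above.
candidate_explanations = {
    "fan tossed a beer bottle after the game": 1,
    "someone scattered glass hoping to cause flat tires": 3,
}

# Occam's Razor as a heuristic: prefer the candidate explanation
# that requires the fewest additional assumptions.
best_explanation = min(candidate_explanations, key=candidate_explanations.get)
```

The razor does not prove the simpler explanation true; it only ranks it as the better tentative bet.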
Charles Sanders Peirce referred to the inference to the best explanation as "abduction". Abduction and induction both are processes in which we infer explanatory ideas from observations. Some consider abduction a subset of induction. Others prefer to keep them separate.
But whether we make a statistical inference or use Occam's Razor to choose an explanation, the basic gist is the same. For our purposes here, we will simply refer to the process of forming uncertain, explanatory ideas based on known observations and facts as 'induction'.
Inductive reasoning lies at the beginning of the Scientific Method and is responsible for law and theory building.
All that matters is that we have a systematic way to test the ideas, so that bad ones can be discarded and good ones kept. However, one might reason (inductively) that we also need a kind of screening method for selecting ideas worth testing. Ideas that are implausible - that conflict with established knowledge - are likely to be less productive than plausible ideas. That is not to say that extraordinary ideas should be ignored, but rather that they should be put into perspective. We will discuss "prior probability" in the What is Science section. In science, inductive ideas should follow from the implications of established facts, laws and theories, or from new empirical observations.
Most would agree with Feynman that we need a systematic, reliable way to test new ideas. Testing ideas requires a different kind of reasoning; one that has more certainty.
** The process of making specific inferences from broader ideas is called deduction.
Deductive reasoning is used in hypothesis testing. While induction is never certain, deduction is more like mathematics. Deductive reasoning takes a syllogistic "if...then" form. In a valid deductive argument, the conclusion is necessarily true if the premises are true. With deduction, we make inferences from the general to the specific (remember: with induction, we infer from the specific to the general).
In science, theories and laws originate through induction. They are tested with deduction. Deductive reasoning is used to form hypotheses - testable "if...then" statements - about our theories. When we use deductive reasoning, we derive predictions that would be surprising if they turned out to be false.
For instance, the child with sore throat described above likely has strep throat. We could test this idea deductively: "If the child has strep throat, then his throat culture should be positive." If the throat culture turned out to be negative, then that would be surprising. In such a case, we would have to question our initial theory. (Note: confirming a diagnosis with a test may strengthen an idea, but does not necessarily prove it. See Affirming the Consequent).
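The logic of the throat-culture test can be checked mechanically. A small truth-table enumeration (a sketch, not part of any cited source) shows why a negative result legitimately undermines the hypothesis (modus tollens, a valid form) while a positive result does not prove it (affirming the consequent, an invalid form):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: 'if a then b'."""
    return (not a) or b

# Modus tollens (valid): if H then P; not P; therefore not H.
# Check: in every row where the premises hold, the conclusion holds.
modus_tollens_valid = all(
    not h
    for h, p in product([True, False], repeat=2)
    if implies(h, p) and not p
)

# Affirming the consequent (invalid): if H then P; P; therefore H.
# Here a counterexample row exists (H false, P true), so it fails.
affirming_consequent_valid = all(
    h
    for h, p in product([True, False], repeat=2)
    if implies(h, p) and p
)
```

Running the enumeration confirms that modus tollens survives every case while affirming the consequent does not, which is exactly why a confirming test strengthens a diagnosis without proving it.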
Without deduction, we cannot test the validity of our ideas.
Induction and deduction are important concepts in the understanding of science, logic and argumentation.
This paradox was pointed out (once again) by David Hume, and is known as "the problem of induction": induction cannot be justified logically. For science to function, the problem of induction is a paradox that we may simply have to accept with an epistemological leap.
Inductive reasoning depends on an intuition that the future will resemble the past: the laws of nature that influenced past events will continue to influence future events in the same way. Philosophers call this the 'Inductive Principle' or the 'Uniformity Principle'. The whole problem of induction lies in realizing that we have no access to this knowledge a priori. We have no reason to believe that the future will continue to resemble the past other than our observation that past principles have held up so far (to the present).
Since we have no prior knowledge allowing us to deduce future events from the observation of past events, the process of induction is not logically justified. However, inductive reasoning seems to be the only way that we can get our knowledge, and we all use it every single day. We assume that a baseball will strike an object with the same force every single time if it continues to have the same mass and the same acceleration. We assume that we will see the sun at the horizon tomorrow morning. We assume that being hit by a truck will be bad. We assume all of these things because they have always held in the past. Inductive reasoning depends on nature behaving in a uniform way all of the time: past, present and future. Science proceeds through inductive reasoning (checked by deductive reasoning). Therefore science depends on the Inductive Principle; science depends on nature behaving the same way all of the time. We have no logical justification for this assumption but, for lack of better phrasing, we are stuck with it. Our brains work this way.
Once we understand the problem of induction, we are then free to actually choose to accept it as a first principle (or to reject it).
Other first principles similarly cannot be rationally supported without circular reasoning. For instance, democracy is a system that allows members of a society to contribute equal votes to arrive at a collective decision. The justification for such a system is that the majority of people in such a society wish to have democracy. In a sense, democracy supports democracy. We choose democracy not by logic, but by value.
Philosophers of science (even Hume, despite his skepticism) accept that we must rely on inductive reasoning. The decision to use science, and induction with it, is a value judgment. Either we hold the basic belief that the scientific method (by way of inductive reasoning) is superior to other "ways of knowing", or we do not. If one accepts this premise, then one must adhere to the principles of science when pitting one testable claim against another.
** One cannot embrace science when it supports a favored idea, and then reject science when it does not.
A key assumption in the scientific method is that events have definite causes. Theories are built around the idea that A causes B. Experiments are then set up which either confirm or falsify the idea. It follows from the principle of induction that, at best, we can only infer that A causes B. We cannot prove it in the absolute sense. "Confirmation" of the idea (A causes B) simply means that whenever we observe event A, we then observe B. In science, the concept of causation goes beyond simple correlation in that events A and B are so connected that all other explanations for event B are controlled for and ruled out.
We can prove that A does not cause B through falsification. Indeed, the idea of Falsification was philosopher Karl Popper's way around the problem of induction. For more on Popper and Falsification, please see the What is Science section.
Hume, in An Enquiry Concerning Human Understanding, implied that it is natural for us to project the concept of causation onto observed interactions between two objects. We do not, however, have access to the necessary connection between the two objects that we call causation. We cannot know for sure that events even need a cause. Hume did not mean that there can be no such thing as causation, but rather that, if there is, we do not have direct access to it. We simply observe A followed by B with such regularity that it becomes customary to claim that "A causes B". At best, "causal" relationships between events can be inferred through induction, and therefore never actually proven.
To claim True (notice the capital T) knowledge of causation is to begin down the paradoxical stairway of infinite regress. One may say that A caused B, but what really is it about A that caused B? What caused A? And then, what caused A's cause? Chicken or egg? If everything has a cause, then eventually one is tempted to give up asking. Such a paradox seems illogical and highlights Hume's point that we do not have true access to causation.
Immanuel Kant considered the problem of causation to be knowledge that we have before experience (a.k.a. a priori knowledge). He implied that causation is built into our minds as a way of knowing the world. We only have knowledge through this kind of hard-wired processing called causation.
** The concept of causation seems to be a practical one - a model of reality if you will - that allows for accurate prediction making. We cannot know causality directly. We can only assign causal relationships through induction, which means that we can never be certain. Causation suffers from the problem of induction, but then, so does science.
Sir Austin Bradford Hill proposed criteria for establishing causation in medical science:
"None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required sine qua non".
One source of confusion about science (frequently pointed out by opponents of scientific and critical thinking) is that scientific knowledge is provisional. Because it is inductive, it is not certain. It is always subject to change in light of new information. Indeed, there are few things about which an individual can be certain.
Descartes famously wrote, "I think, therefore I am". By this he meant that one of the only things of which one can be certain is one's own existence. However, this unique kind of knowledge is certain only for the individual who holds it. One cannot be absolutely certain about the existence, feelings and thoughts of other individuals.
Aristotle made a related point about identity: "Now 'why a thing is itself' is a meaningless inquiry (for—to give meaning to the question 'why'—the fact or the existence of the thing must already be evident..."
We can feel certain about many things, such as 'moral truths'. It is said that moral truths would remain true even if people were not around to hold them. Although there are a few moral truths that are held across most, if not all, cultures, these are still values that can be debated. Like it or not, outside of an individual's own thoughts, no knowledge can be held with 100% certainty.
One can be certain about statements that are trivially true, or true by definition. For instance, "an octagon is a polygon with eight sides". We can know this with 100% certainty. However, such a statement is really just the law of identity. We choose to name a polygon with eight sides an 'octagon'. When school children learn such a 'fact', they are really just learning the definition of a word. It is not provisional and one cannot find an exception, such as the rare, nine-sided octagon. This kind of 'fact' is not scientific. The word 'octagon' is just a linguistically simpler way of saying, "a polygon with eight sides". Another classic example is the statement, "all bachelors are unmarried men". There is no questioning "why" in an identity statement.
We can have knowledge about ideas that are derived through logic, reasoning and mathematics, provided that the starting premises are true. We can be sure that 1 + 1 = 2. We can be sure that if one begins with 5 apples in a bag and then removes 3 of them, that 2 apples remain. Deductive logic is certain, but one can argue that it stems from the law of identity (the left side of the equation = the right side of the equation). In a sense, no new knowledge is gained. As for causation, Kant classified such knowledge as "a priori" (*although he felt that math was not trivially true like the bachelor example).
Kant referred to knowledge gained through experience (and hence through observation and induction) as "a posteriori". Hume would point out that such knowledge can never be known with absolute certainty. Science provides us with a posteriori knowledge, and thus scientific knowledge is always provisional; but, as we shall see, it is the best, if not the only, way we have.
Science provides a method of determining a concept's relative truth. It uses induction to come up with new knowledge. It checks this knowledge with deductive reasoning to verify good ideas and reject bad ones. By making statements that are potentially falsifiable, science takes risks. Other "ways of knowing" do not.
Hume proposed two criteria for determining the knowledge value of a claim. Philosophers refer to them as "Hume's Fork".
"...let us ask,
Does it contain any abstract reasoning concerning quantity or number? No.
Does it contain any experimental reasoning concerning matter of fact and existence? No.
Commit it then to the flames: for it can contain nothing but sophistry and illusion."
(from An Enquiry Concerning Human Understanding)
Science provides us with reliable knowledge. That knowledge is inherently uncertain, but we can be reasonably certain about the inherent uncertainty. Philosophy is concerned with what we should do with our knowledge.
Philosophy is concerned with ethics. Without knowledge to inform our ethics, our ability to decide how we should live would be compromised. There have been many ways of obtaining knowledge practiced throughout history. The process of inductive reasoning for coming up with ideas, and deductive reasoning for assessing their validity is relatively new. The decision to use inductive reasoning is philosophic in the first place. We choose inductive/deductive reasoning - we choose science - because of our values.
We value consistency. We value progress. It is not surprising that we have come to hold science and its method as a foundational belief. Valuing science as our preferred "way of knowing" is a philosophical decision. For empirical matters, valuing science prohibits the valuing of other 'ways of knowing'. One cannot claim to value science for one empirical claim but not for others; that would be logically contradictory (see You Can't Have it Both Ways). Once we embrace this, we are ready to use the most successful, self-correcting and consistent method for obtaining knowledge in history.
John Byrne, M.D.
Hume, David. A Treatise of Human Nature.
"Is–ought problem." Wikipedia.
"Naturalistic fallacy." Wikipedia.
"Naturalistic fallacy (ethics)." Britannica Online Encyclopedia.
"Talk:Bertrand Russell." Wikiquote.
"Carper's fundamental ways of knowing." Wikipedia.
Flexner, Abraham. Medical Education in the United States and Canada. 1910.
Shermer, Michael. The Believing Brain.
Klement, Kevin C. "Deductive and Inductive Arguments." Internet Encyclopedia of Philosophy.
"Charles Sanders Peirce." Wikipedia.
"Abductive reasoning." Wikipedia.
"Deductive reasoning." Wikipedia.
"The Problem of Induction." Stanford Encyclopedia of Philosophy.
"Problem of induction." Wikipedia.
"Uniformity, principle of." The Worlds of David Darling.
"First principle." Wikipedia.
"Basic belief." Wikipedia.
Hume, David. An Enquiry Concerning Human Understanding.
"Infinite regress." Wikipedia.