The laws of thought are the iron rules that apply to any rational argument. They are rules that apply, without exception, to any subject matter of thought. The three laws are as follows:
The Law of Identity
The Law of Excluded Middle
The Law of Non-contradiction
Law of Identity. The law of identity simply states that each entity or thing is identical to itself.
Law of Non-contradiction. The law of non-contradiction goes back to Aristotle. Among his other assertions, he contends that, when trying to determine the nature of reality, the following principle applies: a substance cannot have a quality and simultaneously lack that same quality.
Law of Excluded Middle. The fundamental law of excluded middle, which originated with Plato, states that for any proposition, either that proposition is true or its negation is true. Such a statement is an essential component of Boolean algebra and can be written as A = (M ∨ ~M): A is equal to M or not-M. The symbol ∨ is OR and the symbol ~ means “not.” Strictly, ∨ is the inclusive OR; it behaves exclusively here only because M and ~M can never both be true.
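That M ∨ ~M is always true can be checked by brute force over both possible truth values. A minimal Python sketch (the variable name is mine, not from the text):

```python
# The law of excluded middle: for any proposition M, "M or not M" is true.
# Brute-force check over both possible truth values of M.
excluded_middle = all((M or not M) for M in (True, False))
print(excluded_middle)  # True
```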
The truth table for if … then statements is as follows:

P      Q      if P then Q
true   true   true
true   false  false
false  true   true
false  false  true

Things become a lot more complicated when we move from combining statements through operators like "and" and "or" to hypothetical statements and their logic. These if … then statements are more subtle than they appear at the outset.
For instance, when P is false, the implication is always true, no matter what the value of Q is. Once P is false, "P implies Q" holds automatically: the falsity of P is by itself sufficient to settle the truth of the whole statement, and the value of Q only matters when P is true.
If you look at the truth table above, an "if P, then Q" statement can be true even when both P and Q are false. This is called “vacuously true”: the statement is true because it is empty, that is, because there is no way for it to be false. Nevertheless, for common-sense thinking this is a counterintuitive logical conclusion. It occurs because the "if P, then Q" statement only specifies the conditions under which P implies Q, not vice versa. It does not mean that P must be true whenever Q is true.
Some examples to illustrate this:
If it is raining (P), then the grass is wet (Q).
If it's not raining (P is false) and the grass is not wet (Q is false), the statement "If it's raining, then the grass is wet" still holds true as a logical conditional.
If Alex studied for the test (P), then he will ace it (Q).
If Alex didn't study (P is false) and he doesn't ace the test (Q is false), it doesn't invalidate the original conditional statement.
If it's the weekend (P), then I will relax (Q).
If it's not the weekend (P is false) and I don't relax (Q is false), the original statement "If it's the weekend, then I will relax" remains logically valid.
We also need to remember that the original conditional "If P, then Q" does not assert any causal relationship from Q to P, only the implication from P to Q. If Q is false, it does not invalidate the original logic if P is also false.
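This behaviour can be verified mechanically. Below is a minimal Python sketch (the `implies` helper name is my own) that encodes "if P then Q" as "not P or Q", the standard material-conditional reading, and prints the full truth table, including the vacuously true rows:

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is True and q is False.
    return (not p) or q

# Print the full truth table for the conditional.
for p, q in product((True, False), repeat=2):
    print(f"P={p!s:<6} Q={q!s:<6} P->Q={implies(p, q)}")

# Vacuous truth: when P is False, the conditional is True regardless of Q.
assert implies(False, True) and implies(False, False)
```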
What happens if we introduce negative statements as well? Consider the distinction between "modus ponens" and "modus tollens," two types of logical arguments in deductive reasoning. The main difference is:
Modus Ponens: This is an "affirming" argument that deduces Q is true if P is true. The logical form is:
P -> Q
P
Therefore, Q
For example: 1. If it is raining outside, then the grass is wet. (P -> Q)
2. It is raining outside. (P)
3. Therefore, the grass is wet. (Q)
Modus Tollens: This is a "denying" argument that deduces P is false if Q is false. The logical form is:
P -> Q
Not Q
Therefore, not P
For example: 1. If Alex is at school, then he is on campus. (P -> Q)
2. Alex is not on campus. (Not Q)
3. Therefore, Alex is not at school. (Not P)
In summary:
Modus ponens affirms Q given P is true.
Modus tollens denies P given Q is false.
These are related patterns of logical deductions, using conditionals.
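Both argument forms can be checked by brute force over truth assignments: a form is valid when every assignment that makes all premises true also makes the conclusion true. A short Python sketch (the `valid` helper and the lambdas are illustrative names of my own):

```python
from itertools import product

def implies(p, q):
    # Material conditional for "P -> Q".
    return (not p) or q

def valid(premises, conclusion):
    # An argument form is valid when every truth assignment that makes all
    # premises true also makes the conclusion true.
    return all(conclusion(p, q)
               for p, q in product((True, False), repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: P -> Q, P, therefore Q.
modus_ponens = valid([implies, lambda p, q: p], lambda p, q: q)
# Modus tollens: P -> Q, not Q, therefore not P.
modus_tollens = valid([implies, lambda p, q: not q], lambda p, q: not p)
# "Affirming the consequent" (P -> Q, Q, therefore P) is a classic fallacy.
affirming_consequent = valid([implies, lambda p, q: q], lambda p, q: p)

print(modus_ponens, modus_tollens, affirming_consequent)  # True True False
```

The third check shows why the direction of the conditional matters: reversing modus ponens does not yield a valid form.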
It appears that belief-based reasoning is the norm with human beings, regardless of ability. Some people can, when required, set aside their beliefs and reason in an abstract logical manner. We know this must be true, because people can become, for example, highly skilled mathematicians. But such logical reasoning is not as normal and natural as many philosophers and psychologists had at one time assumed. It seems to require a combination of high IQ and a good deal of conscious effort to achieve. It may also require extensive training, as in the case of mathematics.
We can demonstrate the power of belief over logic by introducing more realistic content into conditional statements of the form ‘if … then … ’. We discover that people are influenced by whether or not they believe the conclusion, notwithstanding the logical form that created it. Suppose we give the following modus ponens argument:
If nurses’ pay is increased then recruitment to the profession will fall.
Nurses’ pay is increased.
Therefore, recruitment to the profession will fall.
Even though the logic is correct, many people do not believe the conclusion to be true, because it is counter-intuitive: increasing pay should increase recruitment, right? If university students are given a problem like this, many will refuse to endorse the correct conclusion. It appears that their actual beliefs block the inference. This effect weakens in students with higher IQ values, so there is some hope on the horizon. But participants with lower abilities still mostly stick with what they believe, rather than adhere to logic.
Syllogisms often relate three terms in two premises. The first premise links A to B and the second B to C. The conclusion, which may or may not follow, links A to C. They can be presented in various forms. Here is an easy one:
1) All A are B
2) All B are C
3) Therefore, all A are C
This syllogism is valid and everyone can see that—as easy as modus ponens. But suppose it is worded as follows:
1) All A are B
2) All C are B
3) Therefore, all C are A
This argument is not valid, but many university students nevertheless say that it is. This can easily be demonstrated by using examples:
1) All dogs are animals
2) All cats are animals
3) Therefore, all cats are dogs
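This counterexample can be made concrete with sets. A minimal Python sketch (the particular sets and names are illustrative only):

```python
# Counterexample model for "All A are B, All C are B, therefore all C are A",
# with A = dogs, B = animals, C = cats (illustrative sets).
animals = {"rex", "fido", "tom", "felix"}
dogs = {"rex", "fido"}
cats = {"tom", "felix"}

premise1 = dogs.issubset(animals)   # All dogs are animals
premise2 = cats.issubset(animals)   # All cats are animals
conclusion = cats.issubset(dogs)    # All cats are dogs

print(premise1, premise2, conclusion)  # True True False
```

Both premises come out true while the conclusion is false, which is exactly what it means for the argument form to be invalid.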
Needless to say, no one would agree that this is a valid argument but its form is exactly the same as the A, B, C version. It is the form that defines validity, not the content. This shows again how difficult most people find it to reason in an abstract logical manner. Using our beliefs does not improve our reasoning. In fact, it can be a major source of bias. Here is quite a tricky syllogism in abstract form. Try to decide whether or not it is logically valid:
1) No A are B
2) Some C are B
3) Therefore, some A are not C
Difficult, is it not? Here is a version with the same form but with realistic terms:
No addictive things are inexpensive
Some cigarettes are inexpensive
Therefore, some addictive things are not cigarettes
In one frequently cited study, 71 per cent of university students tested said that this was a valid argument. They were also given the following syllogism to evaluate:
No millionaires are hard workers
Some rich people are hard workers
Therefore, some millionaires are not rich people
In this case, only 10 per cent of the students tested said that the conclusion followed logically. But if you look closely, you can see that both of these realistic versions have exactly the same logical form. All we have done is substitute different realistic terms for A, B, and C. In logic, the actual meaning of the three terms is irrelevant.
This form of syllogism is not, in fact, a valid argument. (However, if the conclusion were ‘Some C are not A’, it would follow.) So why do so many people mistakenly say that the conclusion ‘some addictive things are not cigarettes’ follows, but correctly reject the conclusion ‘some millionaires are not rich people’? The answer has been established beyond any reasonable doubt by many psychological experiments. The first conclusion is believable and the second unbelievable. Even with clear logical reasoning instructions, people are strongly inclined to say that believable conclusions follow and unbelievable ones do not. This is known as the belief bias effect.
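The invalidity of this form, and the validity of the repaired conclusion, can likewise be shown with a small set-based model in Python (the sets mirror the millionaires example and are purely illustrative):

```python
# A model in which "No A are B" and "Some C are B" both hold, yet
# "Some A are not C" fails. A = millionaires, B = hard workers, C = rich people.
A = {"scrooge"}           # millionaires
B = {"bob"}               # hard workers
C = {"scrooge", "bob"}    # rich people

no_A_are_B = A.isdisjoint(B)    # No millionaires are hard workers
some_C_are_B = bool(C & B)      # Some rich people are hard workers
some_A_not_C = bool(A - C)      # Some millionaires are not rich people

print(no_A_are_B, some_C_are_B, some_A_not_C)  # True True False

# The repaired conclusion "Some C are not A" does follow: any C that is a B
# cannot be an A (since no A are B), so C - A is non-empty when the premises hold.
print(bool(C - A))  # True
```

The premises are true in this model while the stated conclusion is false, so the form is invalid; the alternative conclusion survives every such model.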