The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.
Friedrich August von Hayek
Sometimes human scientists create theories or models that look good on paper but are not always applicable in practice. For example, economists may gather data and build a model for economic forecasting. However, human behaviour is not always predictable, and we sometimes make mistakes in our predictions or in how we apply previous knowledge (from models) to new scenarios; some economic crises, for example, were not widely predicted. On top of this, we may misinterpret the connection between correlation and causation.
It is easy to fall into the trap of the "post hoc ergo propter hoc" fallacy, whereby we mistake mere correlation for causation. To illustrate this fallacy, let's look at the following example. It is a fact that towns with more churches have more prostitutes. You can come up with all sorts of causal explanations for this phenomenon: maybe there were more prostitutes to begin with, which led to the need for more churches to be built, so the "sinners" could get redemption? Or maybe there were too many churches to begin with: the strong presence of the church might make men feel repressed, which, combined with less sexual activity due to their overly pious wives, would have led to the increased presence of prostitutes? In reality, towns with more churches also have more butchers, more schools and more prostitutes... in short, more of everything, simply because they have more people. There is no causal relation.
There is only a correlation. If we confuse correlation with causation in the human sciences, we end up with erroneous knowledge. Richard van de Lagemaat (Theory of Knowledge for the IB Diploma, 2015) explains this through the example of the Phillips curve in economics. The economist Phillips suggested in the 1960s that there was a "stable relationship between the rate of inflation and the rate of unemployment" (van de Lagemaat, 2015). Some governments "concluded from this trend that they could reduce unemployment by allowing inflation to rise" (van de Lagemaat, 2015). However, this did not work, and "many countries ended up with both rising inflation and rising unemployment" (van de Lagemaat, 2015). Although the Phillips curve showed a correlation between the two variables, there was not necessarily a relationship of causation.
Theories in the human sciences are often good at explaining events after they have happened. We can sometimes test the validity of these theories, either in a controlled environment or in a real-world setting. However, such experiments are not foolproof and cannot take all variables into account, so these theories do not guarantee that the outcomes will be the same for similar scenarios in the real world. When it comes to laws in the human sciences, we should be particularly careful about their predictive power. Perhaps it is best to speak of trends rather than laws.
As we have seen previously, it is not always easy to obtain information about human behaviour through experimentation. This is an issue across the disciplines within the human sciences. Sometimes it is not possible to do experiments, and sometimes the sample we can gather through experimentation is simply too small. One way around this is the use of questionnaires and polls. This type of data collection allows us to reach a wider audience, but questionnaires are not always reliable, for a multitude of reasons.
Firstly, questionnaires still target a fairly small segment of society, i.e. the people who have received your questionnaire and bothered to complete it. Teachers completing a master's degree in education, for example, will often gather data for their research within their own schools, and even within these schools only a certain type of teacher or student will bother to complete the questionnaires. In this respect, there may be selection bias.

Secondly, people do not always respond to questionnaires truthfully. For example, people often like to boast about and exaggerate the frequency of their sexual activity, or to minimise bad habits such as alcohol or tobacco use. We are not always honest with ourselves, and responses to questionnaires reflect this. Going back to the Milgram experiment, it is doubtful whether participants would have answered "yes" in a poll asking whether they would deliver electric shocks to learners who got an answer wrong in a memory test. We also tend to overestimate our strengths and underestimate our weaknesses; in short, we seem quite good at deluding ourselves. For example, research shows that we generally think we are better looking than we actually are (the bad news), but we don't always realise this (the good news, if you can call it that). On top of all this "delusion and dishonesty", some people also like to figure out what the purpose of the questionnaire is, and then shape their answers to suit this purpose (even though they might not be consciously aware of doing so).

Thirdly, the language in questionnaires may be misleading, and questions can be loaded in nature. Good human scientists avoid this, but it can be very difficult to compose questions that are both truly neutral and comprehensive. Multiple-choice questionnaires may leave no room for the particular answer you would like to give, which, again, may lead to inaccurate data collection.
ACTIVITY: You are a human scientist tasked with gaining knowledge about the behaviour of students and you will be able to use students in your TOK class to conduct your research. Your teacher will choose which aspect of human behaviour you should focus on. This could be related to group behaviours, decision making, market preferences, happiness or well-being (in times of Covid-19), peer pressure, learning habits etc.
How could you measure the above?
If you were to compose a questionnaire, what kinds of questions could you ask?
What kinds of generalisations could you make based on the (small) sample size?
How might you use statistics to reveal knowledge?
Follow-up discussion:
How might the language used in polls and questionnaires to gather information influence the conclusions that are reached?
How does the use of numbers, statistics, graphs and other quantitative instruments affect the way knowledge in the human sciences is valued?
Are observation and experimentation the only two ways in which human scientists produce knowledge?
Scope
How do we decide whether a particular discipline should be regarded as a human science?
Do the human sciences and literature provide different types of knowledge about human existence and behaviour?
Are predictions in the human sciences inevitably unreliable?
What role does mathematics play within the human sciences?
What kinds of explanations do the human sciences offer?
What are the main difficulties that human scientists encounter when trying to provide explanations of human behaviour?
What constitutes “good evidence” in the human sciences?
Is human behaviour too unpredictable to study scientifically?
Does “big data” make the human sciences more “scientific” as an area of knowledge?
Perspectives
To what extent is it legitimate for a researcher to draw on their own experiences as evidence in their investigations in the human sciences?
Is it possible to eliminate the effect of the observer in the pursuit of knowledge in the human sciences?
To what extent are personal factors such as gender and age important in the human sciences?
How might the emotions of the investigator affect the result of an investigation in the human sciences?
In what ways might the beliefs and interests of human scientists influence their conclusions?
Methods and tools
What role do models play in the acquisition of knowledge in the human sciences?
Are observation and experimentation the only two ways in which human scientists produce knowledge?
What assumptions underlie the methods used in the human sciences?
Are the methods used to gain knowledge in the human sciences “scientific”?
How can we know when we have sufficient evidence to accept a claim in the human sciences?
How might the language used in polls and questionnaires to gather information influence the conclusions that are reached?
What kinds of explanations do the human sciences offer, and how do these explanations compare with those in other areas of knowledge?
How does the use of numbers, statistics, graphs and other quantitative instruments affect the way knowledge in the human sciences is valued?
Is “big data” changing the methodologies of the human sciences?
Ethics
To what extent are the methods used in the human sciences limited by the ethical considerations involved in studying human beings?
Do researchers have different ethical responsibilities when they are working with human subjects compared to when they are working with animals?
What are the moral implications of possessing knowledge about human behaviour?
Should key events in the historical development of the human sciences always be judged by the standards of their time?
What values determine what counts as legitimate inquiry in the human sciences?