Interview research has some similarities to and some differences from survey research. At the core, you are still assessing the views of a population, and you need to pay close attention to how you sample and how you design your questions. But interviews have different strengths and weaknesses than surveys: you have greater flexibility in the questions you ask, but more biased samples. This video will review those strengths and weaknesses and discuss some methodological issues related to interviews, including the ethics of research on people, bias introduced by interviewers themselves, and reporting results.
We tend to choose interviews as our research method when we are seeking information about a difficult-to-survey group. For example, on one end of the power spectrum, elites (such as politicians or business leaders) rarely fill out surveys. On the other end of the spectrum, vulnerable or abused populations may not have access to survey technology or may need special care when being asked sensitive questions. We also conduct interviews when we are conducting exploratory research – interviews allow us to ask broad, open-ended questions, while surveys limit responses.
Interviews, therefore, have advantages over surveys in their internal validity. Open-ended questions often have less measurement bias than closed-ended questions because you don’t have respondents select from a few preset answers. You can allow respondents to define concepts in their own terms. This can often help you identify the processes that you are studying or new areas for research you hadn’t thought of.
But interviews share the potential sources of bias we see in survey research. Open-ended questions may elicit more detailed responses, but that doesn’t mean they will be more truthful. In fact, people often feel more pressure to hide the truth in face-to-face contexts as opposed to more anonymous surveys. And because interview research involves smaller sample sizes, opportunities for selection bias are greater.
Let’s talk a little about how to conduct interviews. I’ll start with a discussion of selecting interviewees, then move on to question design, informed consent, and reporting results.
While you can’t conduct interviews with as many people as you can survey, you still need to think about your sampling strategy. You can identify your population of interest, develop a sampling frame, and then decide how you will sample from that list.
For example, in my research, I study why politicians in authoritarian states join political parties – in particular why they join loyal opposition parties. As part of this, I went to Armenia and the Kyrgyz Republic and interviewed politicians. My population of interest was political elites – people who would consider joining a political party. For my sampling frame, I chose the list of candidates who ran in the most recent legislative elections – and who I could find contact information for. I then sought to interview at least one person from every party or faction that ran.
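The population-frame-sample logic above can be sketched in code. This is a hypothetical illustration, not data from the actual study – all names and parties below are invented:

```python
import random

# The population of interest is political elites; the sampling frame is
# the concrete list you can actually draw from -- here, candidates from
# the most recent election with known contact information (invented data).
sampling_frame = [
    {"name": "Candidate A", "party": "Party 1"},
    {"name": "Candidate B", "party": "Party 1"},
    {"name": "Candidate C", "party": "Party 2"},
    {"name": "Candidate D", "party": "Party 3"},
]

random.seed(42)  # fixed seed so the draw is reproducible

# A simple random sample of two interviewees from the frame.
sample = random.sample(sampling_frame, k=2)

# A stratified alternative: one interviewee per party, mirroring the
# "at least one person from every party" strategy described above.
by_party = {}
for person in sampling_frame:
    by_party.setdefault(person["party"], []).append(person)
stratified = [random.choice(members) for members in by_party.values()]
print([p["party"] for p in stratified])  # one pick per party
```

Note that either draw is only as good as the frame itself: anyone missing from the list has no chance of being selected, which is exactly where the biases discussed next enter.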
This strategy introduced bias at several points. While I was interested in studying any elite who might join a party, my sampling frame consisted of people who did choose to engage in the political system. Not everyone on an electoral list is actually a member of that party, but I still selected for people who chose to participate rather than elites who decided participation was the wrong choice for them, biasing my results in favor of joining a loyal opposition. Similarly, the practical need to find contact information biased my sample toward more organized, younger parties. It was hard to find information on Communist Party members in Armenia, for example, because most of them were in their 70s or older – few of them had mobile phones or internet access. And it was impossible to find contact information for parties that did not have central offices or keep contact information for their list members (which is more common than you might think). These are acceptable sources of bias, both for practical reasons (there aren’t many ways to get lists of political elites) and because the organized loyal opposition was the group I was most interested in getting information on. But I have to acknowledge the limitations of this research as well – there are limits to the overall conclusions I can draw about joining parties.
Sampling also introduces bias. It is nice if you can conduct a simple random sample or stratified sample, but often with interviews you have to use convenience sampling – you end up speaking with the people who are available to you. I dealt with this by incorporating elements of stratification – I didn’t stop interviewing until I had some kind of balance in the sample. But another common tactic is to do what is called snowball sampling. This is when you ask each person you interview to recommend 1-3 more people to talk with. This can imbalance your sample – you might get more interviews from one group than another – but is useful because it provides insight into social networks. By learning who recommends talking with whom, you can get a sense of how people relate to each other in the organization you are studying.
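Snowball sampling is essentially a traversal of the referral network: each interview adds its respondent's recommendations to the queue of people to contact. A minimal sketch, with an invented referral network:

```python
from collections import deque

# Hypothetical referral network: who each interviewee recommended.
# All names here are invented for illustration.
referrals = {
    "Ana": ["Boris", "Carmen"],
    "Boris": ["Carmen", "Dmitri"],
    "Carmen": ["Ana"],
    "Dmitri": ["Elena"],
    "Elena": [],
}

def snowball_sample(seed, max_interviews):
    """Interview people breadth-first along referral chains."""
    interviewed = []
    queue = deque([seed])
    seen = {seed}
    while queue and len(interviewed) < max_interviews:
        person = queue.popleft()
        interviewed.append(person)           # conduct the interview
        for contact in referrals.get(person, []):
            if contact not in seen:          # avoid repeat interviews
                seen.add(contact)
                queue.append(contact)
    return interviewed

sample = snowball_sample("Ana", max_interviews=4)
print(sample)  # → ['Ana', 'Boris', 'Carmen', 'Dmitri']
```

The sketch also shows why the method both skews and informs: the sample can only reach people connected to the seed, but the referral edges themselves (who named whom) are data about the social network you are studying.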
For the interviews themselves, you have to consider all the factors related to question design we discussed in class, but also how structured you want the interviews to be. The advantage of interview research is that you can ask open-ended questions – questions like, “Why did you join this political party?” – without presuming what the respondent’s answers will be. It is great for exploring new topics and identifying new areas for research. But you do need to consider how flexible you want to be. Unstructured interviews allow the conversation to be directed by the respondent – it flows naturally to the topics that come up during the interview itself. Again, this is great for exploration, but it makes it hard to compare interviews or responses to one another. Structured interviews involve asking all subjects the same set of open-ended questions, allowing you to see how different people respond to the same questions.
Another key element of an interview is fairly obvious – you are talking to someone directly. This adds additional layers of bias based on what are called “interviewer effects.” People react – and often respond – differently based on the gender and race of the person conducting the interview. Interviewers need to think about how they want to present themselves and about other practical choices, like whether to use a translator (if needed). And interviews require greater attention to informed consent. As we’ll discuss when we get to the ethics unit of the class, any research involving people can only be done with those individuals’ consent. There is rarely any risk involved in answering a survey, but interviews might bring consequences upon the people a researcher talks to. In a dictatorship, respondents discussing sensitive issues might be questioned later by the secret police. Or if you are talking to victims of war, the process of interviewing itself might reopen traumatic memories. Every interview needs to begin by obtaining informed consent from your respondents – they have to agree to participate in your research after you have explained what it is, and any risks, to them in a way they can understand.
I will conclude this video with how we record information from interviews. Qualitative data (like interview transcripts) is difficult to replicate. If someone else were to try to repeat the interviews I did, they would get different results. There are many reasons for this – time has passed since I did my interviews, and the political situation in both countries has changed a lot, so the context is different. But it’s also true that interviewees will respond differently to different people. So replication – one of the great strengths of statistical research – is almost impossible in much qualitative research. This limits the external validity of my research, but we can make up for it a little by strengthening internal validity – specifically, by being transparent in how we report information from interviews. Best practice is to publish information on the context of interviews when we publish our results – and, if possible, to provide de-personalized information on the content of the interviews themselves. For example, this sample describes the party of an interviewee, when and where the meeting took place, and some context like the length of the interview and whether a transcript is available. This allows readers to judge for themselves the honesty and value of the interviews.
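A de-personalized interview record of the kind described above might look like the following sketch. The field names and values are invented for illustration, not taken from the actual study:

```python
# Hypothetical de-personalized interview record for transparent
# reporting: context without anything that identifies the respondent.
interview_record = {
    "id": "ARM-07",                 # anonymous identifier, not a name
    "party": "loyal opposition party (Armenia)",
    "date": "2015-06",              # month only, to protect the respondent
    "location": "party office, capital city",
    "duration_minutes": 45,
    "transcript_available": False,  # notes only; no recording was made
}
```

Publishing one such record per interview lets readers assess the sample's coverage and context without exposing anyone who spoke to you.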
To summarize, interviews allow you to go more in depth than surveys and to reach inaccessible populations. You need to approach participant selection as you would for a survey – identifying a population and a sampling frame. And you need to account for the bias introduced by your sampling strategy and by interviewer effects.