To start putting methodological theory into practice, I interviewed four of my friends, one work colleague, and my supervisor’s partner(!) as part of a pilot interview activity. It makes sense to practise all that interviewing theory on some willing folk, and while I’ve interviewed people before in my official role of Learning Technology Manager (job interviews), and in my unofficial role as a podcaster (think Alan Partridge), I have never carried out interviews based around Constructivist Grounded Theory methods, so I tried to leave my preconceived notions – and hopefully bad habits – at the door.
Four interviews were carried out face to face, and two via Zoom, and each method has its own advantages and disadvantages. An online interview is automatically transcribed and can be replayed as often as required after the event. However, it does lack the closeness or warmth of being in the same physical space. Meanwhile, face-to-face interviews can’t be replayed, but may lead to more natural conversation. Transcribing them is a slightly more time-consuming process too, as audio recordings of the interviews need to be uploaded to Panopto in order to generate a transcript, something done automatically in Zoom.
These pilot interviews with friends and colleagues lasted, on average, for 90 minutes or more, and veered too often into the realms of chatting about games…or films, books or television shows. This meant that transcribing them and cutting out the ‘detritus’ took a long time. I’m looking to recruit a healthy number of participants at the start of 2025 (25 sounds reasonable), and if each interview lasts for over an hour, I'll have generated a lot of work linked to editing transcriptions for accuracy, let alone initial coding. I need to find the sweet spot for interview length: brief enough to be manageable from a workload standpoint, but not so brief as to be of no use. Instinct tells me that 30–45 minutes per interview seems appropriate and manageable, but I must not let the conversation wander…unless the direction in which it’s wandering could prove fruitful.
One thing I noticed across all interviews was the mood of my participants. They were animated, they smiled and laughed, and they were all refreshingly open and honest about their habits, behaviours and attitudes around gaming. This can in part be put down to the fact that I knew most of my participants (one of them for almost 20 years, another for over 30), so there was an established relationship between us. However, when interviewing the one volunteer I had never met before, there was still much in the way of open and positive body language, maintained eye contact, enthusiastic gesticulation, smiling and laughing, and a real honesty when answering my questions. I wonder whether this will be the case when I come to interview a whole cohort of people I have never met?
I'll be honest here. It's hard to manage everything that is going on in my head AND in the interview, and I hope the constant barrage of questions I'm asking myself starts to recede as I become more familiar with the practices I'm...practising. Do I look like I'm listening? Is my body language mirroring the participant so I look friendly and interested? Do I need to prompt them? What do I say as a prompt anyway?! Are my questions relevant? Does this conversational diversion need to be investigated further, or is it going to knock us off track? Learning to juggle my inner thoughts with managing the conversation – stealing discreet glances at the time and stealthily scribbling memos without losing eye contact – is hard, but easier in online interviews. My handwriting is appalling at the best of times, but if I’m writing without looking at what I’m doing, it’s virtually illegible. I need to work on using upper case and slowing down a bit to aid readability.
I also need to learn that, upsetting as it is to realise it, it’s not all about me(!) I'm quite hyperactive by nature (something else I need to be mindful of when interviewing), so I do get excited when my interviewee reminds me of a really great game, and I end up sharing my experiences with them as much as they do with me. I’m really not a self-absorbed narcissist, but this is something I do need to remain mindful of. A lot of the detritus I mentioned earlier that needed to be cut from transcripts was just me prattling on about how brilliant Starfield is for ages.
One very useful process was receiving feedback from participants immediately after the interviews. I’d decided to add the same question at the end of all interviews: “have you experienced any unintended consequences when playing you’d be happy to share with me?” This was met either with a drawn-out silence and the need to prompt the participant for an answer, or with a response that mentioned only negative unintended consequences, presumably because of the unconscious link between the phrase 'unintended consequences' and 'bad things happening'. Interviewees also told me that they felt they’d been put on the spot when asked the question, and I admit that it did make the end of the interviews feel sort of…flat. I decided to drop this question, instead using prompts like “was that expected?” and “was that your intention at the time?” if required during the interview.
On reflection, this has been a practical way to learn how to organise and manage the interview process, to make some adjustments to questions, methods, and behaviours before the interviews proper, and to have participants act as critical friends. It’s also made me a little more confident, and ready to send out my Call for Participants request and questionnaire in January 2025. I know that I still don’t really know what I’m doing, but every interview will be part of a learning process for me, and I'm hoping my confidence grows as I progress. I know I will make errors in organisation, delivery, and interpreting the emerging data. I’m sure I will wish that I had managed certain interviews better, but I will refine and improve the process with every interview. A lot of interesting data was generated in these six pilot interviews alone, which is heartening, as it suggests that an improved process with a larger cohort of interviewees will generate even more interesting data.
I hope…