Now that I've finished coding, it feels right to carry out some follow-up interviews with my participants to add weight to the theories that have emerged. To get the ball rolling, I decided to write a bank of questions linked explicitly to each of the four unintended consequences I've discovered:
Playing OWRPGs is a common strategy to manage mood, mental, and physical health
(Linked to this) Recognising and managing neurodiversity through play is common
Accidental learning happens when HE students play OWRPGs
Gaming can negatively impact physical and mental health
My supervisors reviewed the questions and agreed that they were fit for purpose, so I powered up NVivo, went through my codes to identify who was best placed to answer each question based on their interview responses, and started allocating pertinent questions to relevant participants. After a whole day of mapping codes to potential participants and copying and pasting questions from the bank for each one, I realised it would be a much better idea to ask all participants all the questions, not in face-to-face interviews but via an online questionnaire. This is because:
It's been a year since I interviewed my participants, so it's likely that some of them have left university or no longer use the email addresses they provided me with.
That being the case, if I decide to re-interview only a select number of participants, there's every chance responses will be few and far between.
If people do agree to be interviewed again, I can't offer them any payment, which could be a deal-breaker for some, leading to a further reduction in the number of possible responses.
It made sense to reframe some of my questions due to their potential sensitivity, and to make clear from the start that all questions were optional and that the questionnaire was anonymous, so I wouldn't know who had completed it. A questionnaire also saves time: organising another round of interviews and then transcribing each recording before coding is a painstaking, lengthy process.
Another consideration is that responses to an asynchronous, anonymous questionnaire could be more authentic; being interviewed on camera could lead participants to gauge their answers, feel unable to open up, and feel under pressure, especially as some of the questions could be a little sensitive.
I also wanted to start the questionnaire with a gender identification question, as I was curious to see whether gender influenced which questions were answered and the nature of the responses.
The next step is to send a link to the questionnaire to all 35 participants (both interviewees and journal-keepers), making it clear that completing any or all of the questions is optional, that some questions won't be relevant or can be skipped if they feel too sensitive, that all responses are totally anonymous, BUT that I cannot afford to pay anyone for their time.
I may not receive any responses, and that's fine; I've finished data analysis, I have three positive and one negative unintended consequence, and I'm happy with what I've got. This would be the data analysis cherry on top of my research cake. However, any responses I do receive may generate further data that adds weight to my 'arguments', and that could be rather exciting.