Dr. Ocumpaugh's work bridges several disciplines, merging traditionally qualitative work with machine learning techniques that facilitate scalability. These include both methods for obtaining ground truth labels for affect detection (i.e., the Baker Rodrigo Ocumpaugh Monitoring Protocol; BROMP) and a new method for interviewing students when important but rare events occur (i.e., Data Driven Classroom Interviewing; DDCI).
BROMP is a protocol for Quantitative Field Observations (QFOs) that facilitates the simultaneous use of up to three coding schemes: (1) the student's behavior (e.g., on-task solitary, on-task conversation, off-task, gaming-the-system, etc.), (2) the student's epistemic emotions/affect (e.g., boredom, confusion, delight, engaged concentration, frustration, etc.), and, in some cases, (3) classroom conditions (e.g., solitary work, paired work, teacher-led instruction, group work, games, etc.).
BROMP is now a well-established classroom-based observation method. It has been used to develop machine-learned detectors of student affect and behavior in numerous online-learning systems and several countries (see review in Baker et al., 2020). You can read more about it in the BROMP manual (Ocumpaugh et al., 2014), on the BROMP Wikipedia page, and on the BROMP website, where you can also find out about BROMP training and the BROMPository.
BROMP is facilitated by an open-source app called the Human Affect Recording Tool (HART). HART implements the momentary time-sampling technique and records the time stamps and other metadata required to build engagement detectors for online learning systems. Observers start by entering information about the fieldwork location and then enter unique student IDs that can be matched to student logfiles. They then select coding schemes, which can be customized before entering the field. Once setup is complete, students are presented back to the observer in the same order in which their IDs were entered. All data are saved immediately to protect against data loss.
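To make that logging behavior concrete, here is a minimal Python sketch of a HART-style observation loop, assuming a simple CSV output and a callable that supplies the observer's behavior and affect codes. The class and field names are illustrative assumptions, not HART's actual implementation.

```python
import csv
import time
from dataclasses import dataclass

# Hypothetical sketch of a HART-style round-robin observation loop;
# names and structure are illustrative, not the actual HART code.

@dataclass
class Observation:
    timestamp: float   # Unix time at which the code was recorded
    student_id: str    # ID that can later be matched to system logfiles
    behavior: str      # e.g., "on-task solitary", "off-task"
    affect: str        # e.g., "engaged concentration", "boredom"

def observe_round_robin(student_ids, code_student, out_path="observations.csv"):
    """Cycle through students in the order their IDs were entered,
    saving each coded observation immediately to guard against data loss."""
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for student_id in student_ids:                   # same order as entry
            behavior, affect = code_student(student_id)  # observer's judgment
            obs = Observation(time.time(), student_id, behavior, affect)
            writer.writerow([obs.timestamp, obs.student_id, obs.behavior, obs.affect])
            f.flush()                                    # persist right away
```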
You can find the HART download and installation instructions here: https://learninganalytics.upenn.edu/HART/install.html
The precursor to BROMP was developed by Maria Mercedes ("Didith") Rodrigo and Ryan Baker, who created coding schemes for both the Philippines and the United States. Since then, it has been adapted for use in India, England, China, the United Arab Emirates, and Norway. This process involves establishing a new coding scheme through an interrater reliability process that includes two observers who are native to that country.
Select research publications using BROMP in other countries or contexts include:
Grawemeyer, B., Mavrikis, M., Mazziotti, C., van Leeuwen, A., & Rummel, N. (2018). The impact of affect-aware support on learning tasks that differ in their cognitive demands. In Artificial Intelligence in Education: 19th International Conference, AIED 2018, London, UK, June 27–30, 2018, Proceedings, Part II 19 (pp. 114-118). Springer International Publishing.
Hymavathy, C., Krishnamani, V. R., & Sumathi, C. P. (2014, December). Analyzing learner engagement to enhance the teaching-learning experience. In 2014 IEEE International Conference on MOOC, Innovation and Technology in Education (MITE) (pp. 67-70). IEEE.
Kirkwood, K. (2023). The pedagogical potential of digital games to enhance the learning of spelling for English second language learners in a Persian Gulf state (Doctoral dissertation, Deakin University).
BROMP has been compared to several other methods for collecting affect data, including those using sensors and those using experience sampling techniques. While sensor data can offer more continuous data streams, BROMP has an advantage in that the people who provide subsequent labels for sensor data often have less access to the contextual information in the classroom.
Research comparing BROMP to other methods includes:
Booth, B. M., Bosch, N., & D’Mello, S. K. (2023). Engagement Detection and Its Applications in Learning: A Tutorial and Selective Review. Proceedings of the IEEE.
Bosch, N., Chen, H., D'Mello, S., Baker, R., & Shute, V. (2015). Accuracy vs. availability heuristic in multimodal affect detection in the wild. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction (pp. 267-274).
Bosch, N., D'Mello, S. K., Ocumpaugh, J., Baker, R. S., & Shute, V. (2016). Using video to automatically detect learner affect in computer-enabled classrooms. ACM Transactions on Interactive Intelligent Systems (TiiS), 6(2), 1-26.
Bosch, N., D’Mello, S., Baker, R., Ocumpaugh, J., & Shute, V. (2015). Temporal generalizability of face-based affect detection in noisy classroom environments. In Artificial Intelligence in Education: 17th International Conference, AIED 2015, Madrid, Spain, June 22-26, 2015. Proceedings 17 (pp. 44-53). Springer International Publishing.
DiSalvo, B., Bandaru, D., Wang, Q., Li, H., & Plötz, T. (2022). Reading the Room: Automated, Momentary Assessment of Student Engagement in the Classroom: Are We There Yet?. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(3), 1-26.
D'Mello, S., Dieterle, E., & Duckworth, A. (2017). Advanced, analytic, automated (AAA) measurement of engagement during learning. Educational Psychologist, 52(2), 104-123.
Gasevic, D., Tsai, Y. S., Dawson, S., & Pardo, A. (2019). How do we start? An approach to learning analytics adoption in higher education. The International Journal of Information and Learning Technology, 36(4), 342-353.
Henderson, N., Emerson, A., Rowe, J., & Lester, J. (2019). Improving sensor-based affect detection with multimodal data imputation. In 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 669-675). IEEE.
Paquette, L., Rowe, J., Baker, R., Mott, B., Lester, J., DeFalco, J., Brawner, K., Sottilare, R. & Georgoulas, V. (2016). Sensor-Free or Sensor-Full: A Comparison of Data Modalities in Multi-Channel Affect Detection. International Educational Data Mining Society.
Zambrano, A. F., Nasiar, N., Ocumpaugh, J., Goslen, A., Zhang, J., Rowe, J., Esiason, J., Vandenberg, J., & Hutt, S. (2024). Says Who? How different ground truth measures of emotion impact student affective modeling. In Proceedings of the 17th International Conference on Educational Data Mining (pp. 211-223).
Dr. Ocumpaugh has been using her background in sociolinguistics to develop another new method for understanding students' learning experiences, Data Driven Classroom Interviews (DDCIs). This approach uses machine-learning techniques to trigger in situ interviews at moments of interest that might be otherwise difficult for a classroom observer to reliably find. You can read more about the NSF-sponsored research that helped us to initially develop DDCI here.
DDCIs are facilitated by three layers of tech: (1) machine-learned models of learning and engagement constructs, (2) a listening server with a prioritization algorithm, and (3) an Android-based app called the Quick Red Fox (QRF).
Layer 1: Detectors/Models of Learning and Engagement. Before conducting DDCIs, detectors (aka models) of the relevant learning constructs have to be built for the software being studied. These could be complex constructs that require machine-learning algorithms to model, such as BROMP-based detectors of confusion, or simpler constructs that need only basic rule-based detection, such as flagging when a student has entered a new (virtual) space within the software.
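At that simpler end of the spectrum, a detector can be little more than a rule applied to the incoming log stream. The Python sketch below, with hypothetical event field names, flags when a student enters a virtual space they have not visited before:

```python
def detect_new_space(event, seen_spaces):
    """Rule-based detector sketch: fire when a student's logged location is one
    they have not visited before. Event field names here are hypothetical."""
    student, space = event["student_id"], event["location"]
    visited = seen_spaces.setdefault(student, set())
    if space not in visited:
        visited.add(space)
        return {"student_id": student, "label": "entered_new_space"}
    return None
```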
Layer 2: Listening Server with Prioritization Algorithm. In order to trigger a DDCI, a listening server is first installed inside an online learning environment (e.g., Betty's Brain). The server "listens" to the patterns of interactions being logged by the learning system, matching them against the detectors described above. When an event of interest is detected, the listening server immediately passes it to a prioritization algorithm. The prioritization algorithm expedites rare and more important events (as pre-specified), pushing them to the top of the queue. It can also de-prioritize events if the student who produced them has just been interviewed. After a specified amount of time has passed (e.g., 3 minutes), events expire and are removed from the queue.
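The scoring and timing rules below are illustrative assumptions rather than the system's actual parameters, but they sketch how such a prioritization queue can work: pre-specified important events get low rank values, recently interviewed students are penalized, and stale events are dropped.

```python
import heapq
import time
from itertools import count

EXPIRY_SECONDS = 180            # e.g., events expire after 3 minutes
RECENT_INTERVIEW_PENALTY = 10   # pushes just-interviewed students down the queue

class EventQueue:
    """Sketch of the prioritization idea: rarer/more important events (lower
    rank) rise to the top, stale events expire, and students who were just
    interviewed are de-prioritized. Thresholds are illustrative."""

    def __init__(self, importance):        # importance: event label -> rank (1 = highest)
        self.importance = importance
        self._heap = []
        self._tiebreak = count()
        self._last_interviewed = {}        # student_id -> time of last interview

    def push(self, event):
        rank = self.importance.get(event["label"], 99)
        if time.time() - self._last_interviewed.get(event["student_id"], 0) < 600:
            rank += RECENT_INTERVIEW_PENALTY
        heapq.heappush(self._heap, (rank, next(self._tiebreak), time.time(), event))

    def pop_next(self):
        while self._heap:
            rank, _, created, event = heapq.heappop(self._heap)
            if time.time() - created <= EXPIRY_SECONDS:
                return event               # most important unexpired event
        return None                        # everything left had expired

    def mark_interviewed(self, student_id):
        self._last_interviewed[student_id] = time.time()
```

In this sketch, a label-to-rank table (e.g., {"confusion": 1, "entered_new_space": 5}) stands in for whatever the research team pre-specifies as rare and important.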
Layer 3: The Quick Red Fox (QRF). QRF is a client-side, open-source app that the interviewer uses in the classroom. When the interviewer logs into QRF, the app connects with the listening server, signaling that the interviewer is ready for the first event to be triggered. In turn, the listening server pushes an event to the app, which displays student information (i.e., a user ID for the learning software) and the label for the event that was just detected. If the interviewer chooses to interview this student, the interview is recorded directly within the app. Recordings are stored locally on the Android device and can be downloaded at the end of each day for further analysis.
Screenshot of the Quick Red Fox (QRF) app, which is used to trigger and record Data Driven Classroom Interviews (DDCI)
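For completeness, here is a hypothetical client-side sketch of how an interviewer's device might request the next prioritized event. It is written in Python for consistency with the other sketches (the real QRF app is Android-based), and the endpoint URL and payload fields are assumptions, not QRF's actual protocol.

```python
import requests  # hypothetical transport; the real QRF app is Android-based

SERVER_URL = "http://listening-server.local/next-event"   # hypothetical endpoint

def next_interview_prompt(interviewer_id):
    """Ask the listening server for the highest-priority pending event and
    return the student ID and event label to display to the interviewer."""
    resp = requests.get(SERVER_URL, params={"interviewer": interviewer_id}, timeout=5)
    resp.raise_for_status()
    event = resp.json()   # e.g., {"student_id": "S042", "label": "confusion"}
    return event["student_id"], event["label"]
```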
DDCIs allow us to ask students about their experience in the moment. Other techniques, such as think-aloud protocols, attempt to do this, but think-alouds require a significant amount of metacognition that may distract from the task. In some cases, they may be above the students' skill levels. Young children, for example, may especially struggle to describe both what they are doing and what they are thinking, and even adults need regular reminders to keep talking throughout an extended think-aloud procedure. By targeting the specific moment of interest, DDCIs limit the added cognitive load to short interviews (roughly 2-5 minutes) that still capture the contextualization needed to make sense of the data.
Interviewing techniques should be tailored to the audience, using the following approach:
Axiom 1: Tell the students about the process before you conduct any interviews. You should introduce yourself to the students and make it clear that they are not going to be in trouble for anything they tell you about their learning experience. Your goal is to understand how they are learning, and anything they tell you will be used to help other kids learn better.
Axiom 2: Take a helpful but non-authoritative approach as an interviewer. This is important because students expect information and support from people with power and authority. Your goal is to get them to explain their experience, and this cannot happen if you are answering their questions. It is also important because as students get older, self-presentation effects will start to become more prominent. Students who think they are being judged will not provide you with as much information about how and why they are struggling.
Axiom 3: Obtain assent for each interview. This does not need to be a formal procedure, but ask each student if they are okay talking with you right now. It is normal for students to get frustrated during learning, and finding out why is often an important part of our research. However, if the student is overly upset, send a teacher to check on them, but do not proceed with an interview at that time. You are a guest in this classroom, and the students' autonomy and privacy should always be treated with respect.
Axiom 4: Approach each interview in a friendly but academically neutral manner. Once the student has assented, ask them how they are doing and use other open-ended questions (e.g., "What strategies are you using?" or "What progress have you made?") to allow them to contextualize both their feelings and the learning context. You can follow with more specific questions after that, but if you start with leading questions about their performance, you will have primed their other responses. For example, if you start by saying, "Hey, I see you just ignored the instructions from the NPC in this game. Why did you do that?," you are likely to make the student defensive. Any subsequent interview data could then be colored by this interaction.
Axiom 5: You may only get what you ask about. As much as you want to let students lead these conversations, some students will not have the metacognitive skills to bring up certain parts of their experience. A child who is distressed about their current performance, for example, may struggle to tell you about past successes in a positive fashion. If you need to know how their current situation contrasts with those successes, you should work to prompt them with specific questions as the interview (or series of interviews) goes on.