Research

This page highlights select research projects I'm involved with. See my publications and CV for the full scope of the research and scholarship I do.

Title: Stakeholder Perceptions of Duolingo English Test Speaking Performances

This project sought evidence for the relevance of the DET in academic settings by examining the perceptions of four groups of university-based listeners: undergraduates, graduate students, university staff, and faculty members. We asked them to rate the comprehensibility and academic acceptability of DET spoken performances, which are elicited with low-context prompts and scored by machine in operational settings. Encouragingly, we found that listener judgments of comprehensibility and acceptability were highly correlated, and that judgments of acceptability were more generous than judgments of comprehensibility, indicating that listeners are willing to tolerate speech in academic settings that is occasionally difficult to understand. Additionally, we found strong correlations between these listener perceptions and DET Overall and Conversation scores, providing some support for the relevance of the test to academic settings and suggesting that the Conversation subscore could usefully inform admissions decisions. We have also examined the impact of speech stream characteristics (e.g., phonological accuracy, rate of speech) on listener judgments, finding similar influences on comprehensibility and acceptability.
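For illustration, here is a minimal sketch of the kind of correlational analysis described above; the data frame, values, and column names are invented placeholders rather than the project's actual variables or results.

```python
# Minimal sketch of the correlational analyses described above. Every value
# and column name below is an invented placeholder.
import pandas as pd
from scipy.stats import pearsonr

ratings = pd.DataFrame({
    "comprehensibility": [6.2, 4.8, 7.5, 5.1, 6.9, 5.8],  # mean listener rating per speaker
    "acceptability":     [6.8, 5.5, 7.9, 5.9, 7.2, 6.4],  # mean listener rating per speaker
    "det_overall":       [115, 95, 130, 100, 125, 110],
    "det_conversation":  [110, 90, 135, 105, 120, 105],
})

# How closely do the two listener judgments track each other?
r, p = pearsonr(ratings["comprehensibility"], ratings["acceptability"])
print(f"comprehensibility vs. acceptability: r = {r:.2f} (p = {p:.3f})")

# How well do the listener judgments align with DET scores?
for det in ["det_overall", "det_conversation"]:
    for judgment in ["comprehensibility", "acceptability"]:
        r, p = pearsonr(ratings[det], ratings[judgment])
        print(f"{det} vs. {judgment}: r = {r:.2f} (p = {p:.3f})")
```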

Title: Diagnosing Second Language Pronunciation

Bringing together many of my interests, this project explored the potential of diagnostic language assessment (DLA) for informing L2 pronunciation teaching and learning. With Korean as the target language, I developed an instrument I call the Korean Pronunciation Diagnostic (KPD), which pinpoints segmental pronunciation difficulties through two production tasks and two perception tasks. Score reports reveal the major difficulties learners have in producing Korean segments and provide additional insight into whether a learner can accurately distinguish those difficult segments from others in listening. The idea is to help teachers and learners get down to the specifics of pronunciation difficulties, providing concrete targets for subsequent teaching and learning activity.
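To give a flavor of the score-report logic, here is a rough sketch of how item-level results might be rolled up into per-segment production and perception summaries; the segments, responses, and flagging rule are invented for illustration and do not reflect the KPD's actual design or content.

```python
# Rough sketch: aggregate item-level results into a per-segment diagnostic
# summary. Segments, responses, and the flagging rule are illustrative only.
from collections import defaultdict

# (segment, task type, correct?) records from hypothetical production and
# perception items
results = [
    ("eo", "production", 0), ("eo", "production", 1), ("eo", "perception", 1),
    ("tt", "production", 0), ("tt", "production", 0), ("tt", "perception", 0),
    ("s",  "production", 1), ("s",  "production", 1), ("s",  "perception", 1),
]

tally = defaultdict(lambda: {"production": [], "perception": []})
for segment, task, correct in results:
    tally[segment][task].append(correct)

for segment, tasks in tally.items():
    prod = sum(tasks["production"]) / len(tasks["production"])
    perc = sum(tasks["perception"]) / len(tasks["perception"])
    flag = "flag for instruction" if prod < 0.6 else "ok"
    print(f"{segment}: production {prod:.0%}, perception {perc:.0%} -> {flag}")
```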

This research has two components: (1) large(ish)-scale field testing of the KPD alongside a self-assessment, an oral proficiency measure, and a spontaneous speaking task, and (2) interviews with a subset of learners and their teacher(s) to explore how they understand KPD results and whether (and how) they act on them. Data from these two components were analyzed to inform a validity argument for the use of the KPD to support pronunciation learning.

I carried out this research in South Korea, thanks to the generous support of the Fulbright program, ETS, and MSU.

Title: Korean Pronunciation Instruction

This project has looked at how effective a classroom-based pronunciation instruction treatment is for beginner to low-intermediate students of Korean as a foreign language. Participants completed two speaking tasks (a picture description task featuring some linguistic input, and a paragraph-length read-aloud) 11 weeks apart. Half of the students received 8 hours of extracurricular, classroom-based pronunciation instruction that integrated technology with teacher-fronted explanation and common communicative group activities, while the other half pursued alternative, non-pronunciation-focused extracurricular activities. The speaking tasks were rated by 10 native speakers of Korean for comprehensibility and accentedness. Additionally, the tasks are being phonemically transcribed and analyzed in terms of individual phonological features.

This study adds to the field of L2 pronunciation instruction by examining a less commonly taught and researched language (Korean) in the context of an extended classroom treatment. One angle I'm particularly interested in is how sensitive global measures (listener ratings of comprehensibility and accentedness) and local measures (e.g., individual phonological features) are for capturing instructed L2 phonological development.
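One way to frame that sensitivity question is to compare pre/post effect sizes for a global measure against a local one. The sketch below does this with a simple paired-samples Cohen's d; all of the scores are invented placeholders, not data from this study.

```python
# Sketch: compare pre/post gains on a global measure (mean comprehensibility
# rating) and a local measure (accuracy on one target segment) using a
# paired-samples effect size. All values are illustrative placeholders.
import numpy as np

def cohens_d_paired(pre, post):
    """Cohen's d for paired samples: mean gain divided by the SD of gains."""
    gains = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return gains.mean() / gains.std(ddof=1)

comp_pre  = [4.1, 3.8, 5.0, 4.4, 3.9]       # pre-instruction comprehensibility
comp_post = [4.9, 4.2, 5.4, 5.1, 4.6]       # post-instruction comprehensibility
seg_pre   = [0.55, 0.40, 0.70, 0.60, 0.50]  # pre-instruction segment accuracy
seg_post  = [0.80, 0.65, 0.75, 0.85, 0.70]  # post-instruction segment accuracy

print("global (comprehensibility) d =", round(cohens_d_paired(comp_pre, comp_post), 2))
print("local (segment accuracy)   d =", round(cohens_d_paired(seg_pre, seg_post), 2))
```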

Title: Pronunciation across Modalities in Task-Based Learner Interaction

Breaking off a subset of data from a larger project that Shawn headed up (see his chapter with Dominik Wolff in the edited volume Peer Interaction and Second Language Learning), we looked at how often pronunciation was the focus of language-related episodes (LREs) in learner-learner interaction. Interactions involved three tasks (spot the differences, agreement on a scholarship candidate, and conversation) and took place in one of two modalities: face-to-face or audio-only synchronous computer-mediated communication (i.e., voice-only Skype). For pronunciation LREs, we also looked at the type of phonological feature that triggered the episode: segmental or suprasegmental.
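An analysis like this ultimately reduces to a small contingency table (modality by trigger type), along the lines of the sketch below; the counts here are placeholders, not the study's results.

```python
# Sketch of tallying pronunciation-focused LREs by modality and trigger type.
# The counts are placeholders and do not reflect the study's findings.
import pandas as pd
from scipy.stats import chi2_contingency

lre_counts = pd.DataFrame(
    {"segmental": [30, 22], "suprasegmental": [8, 5]},
    index=["face-to-face", "audio SCMC"],
)
chi2, p, dof, expected = chi2_contingency(lre_counts)

print(lre_counts)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```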

Title: Captions for Video Comprehension

This project is the fourth iteration of Paula and Sue's work on captions for L2 video comprehension. In this round, we looked at how L2 English speakers used captions and how their caption use interacted with their level of comprehension and verbal working memory. To examine how participants used captions, we used an eye-tracker (EyeLink 1000), a device that records precisely where a viewer is looking on the screen. Paula and Sue reported on these data at TESOL 2016.

Title: The Duolingo Challenge

I hope I didn't forget any collaborators on this one! We all undertook the challenge of learning Turkish (a language none of us knew) exclusively through Duolingo, a popular online language learning program. We agreed to spend 34 hours studying (the amount of time Duolingo claims is equivalent to one semester of a university foreign language class) and then took a Turkish 101 exam. In addition to a "big picture" presentation and write-up, a number of smaller projects are coming out of this challenge as well. With Hima, Rachelle, and Shawn, I'm working on a narrative analysis, grounded in an ecological perspective, of the experience of learning Turkish on Duolingo.

Title: Learning Korean Informally Online

For this project, I wanted to examine the language learning practices of a community of Korean learners on Reddit, a popular link-sharing and discussion website. As a member of the community myself, I had noticed that the way people went about learning Korean and the tools they used differed from many accounts of online informal language learning I had read, such as those describing participation in video gaming communities or fandom sites. What I found was a well-organized community that focused more on explicit learning of Korean linguistic features than on using the language to communicate, though it must be acknowledged that community members did not restrict their learning activities solely to Reddit.

Title: Academic Definitions Test

Most direct vocabulary testing focuses on existing vocabulary knowledge, and most indirect vocabulary testing looks at lexical diversity in spoken or written output. Motivated by the demands of reading for academic purposes, this project examines vocabulary from a different angle, focusing on how well readers are able to connect academic/technical vocabulary with definitions provided in-text. The Academic Definitions Test (ADT) uses a minimally edited textbook excerpt that features a number of in-text definitions. Technical vocabulary items are swapped out for pseudowords, and test takers must provide appropriate definitions.

The first iteration of this project employed a short-answer response format. Themes that emerged from the incorrect answers facilitated the creation of a multiple-choice form of the test for a second iteration. Generally, the test is reliable, and some evidence exists for its validity. The multiple-choice version was more practical, less reliable, and slightly easier, especially for one of the L1 subgroups in the sample.
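The reliability evidence mentioned here comes down to internal consistency. Below is a small sketch of a Cronbach's alpha calculation over an item-response matrix; the dichotomous scores are invented for illustration and are not ADT data.

```python
# Cronbach's alpha from an item matrix (rows = test takers, columns = items).
# The 0/1 scores below are illustrative placeholders only.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
])
print("alpha =", round(cronbach_alpha(responses), 2))
```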

Title: Equating for Small Language Testing Programs

Testers (of all kinds) use multiple forms as a means of maintaining test security and avoiding retest effects. In large-scale testing programs (e.g., TOEFL, SAT), equating is a widely used statistical technique for accounting for differences in difficulty among forms, and secondary analyses exist for examining the quality of an equating relationship. In smaller testing programs, such as those in intensive language programs, equating is a potentially beneficial technique, but sample sizes are a concern. This project explores the suitability of several equating methods in a small language testing program, based on an investigation of two parallel forms taken by real test takers (small-scale equating research often relies on simulated or subsampled data).
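As a concrete illustration of one of the simpler approaches, here is a sketch of linear (mean-sigma) equating under a random-groups design; the score vectors are simulated placeholders, and this is not meant to represent the program's actual data or the full set of methods under comparison.

```python
# Sketch of linear (mean-sigma) equating under a random-groups design: a raw
# score on the new form is placed on the reference form's scale by matching
# the two forms' means and standard deviations. Scores are simulated.
import numpy as np

def linear_equate(x, new_form_scores, ref_form_scores):
    """Map a new-form raw score x onto the reference form's score scale."""
    mx, sx = np.mean(new_form_scores), np.std(new_form_scores, ddof=1)
    my, sy = np.mean(ref_form_scores), np.std(ref_form_scores, ddof=1)
    return my + (sy / sx) * (x - mx)

rng = np.random.default_rng(0)
form_x = rng.normal(70, 10, size=60)  # new form, small sample
form_y = rng.normal(74, 9, size=60)   # reference form, small sample

print("A raw 70 on Form X is equivalent to about",
      round(linear_equate(70, form_x, form_y), 1), "on Form Y")
```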