Academic research
How does knowing and speaking two languages (bilingualism) contribute to variability in spoken language comprehension and production?
How do individual experience (e.g., language proficiency, use, and history) and social factors (e.g., individual traits, attitudes, identity, and conversational roles) shape language processing behavior?
Academic CV (last updated in 2020)
Recognizing spoken code-switched words
How do bilingual listeners manage switching between two languages?
Does processing spoken code-switched words take longer than processing unilingual (single-language) words? That is, is there a switch cost in auditory language processing?
Mandarin-English code-switching is not necessarily costly. For bilingual speakers in the California Bay Area, infrequent insertions, such as Mandarin words in English sentences, can be costly, but whether frequent code-switches, such as English words in Mandarin sentences, are costly depends on the listener's dominant language (Figure A, left).
LSA 2020 Talk
Susanne Gahl, Keith Johnson, and I have also shown that phonetic patterns preceding a code-switched word can help listeners anticipate Mandarin-English code-switching (see our paper and past presentations below).
Pronunciation of code-switched speech
What are the phonetic consequences of code-switching?
Unilingual Mandarin utterances are characterized by pitch patterns that depend on the tone of the following syllable. Mandarin-English code-switched utterances show pitch patterns similar to those of unilingual Mandarin utterances: the same tone-specific pitch trajectories occur before Mandarin code-switched words, although the pitch of the switched word itself may be reduced (Figure B, right).
PhREND 2019 Poster
Acquiring second language (L2) sounds
How effective are different training methods for learning new sound contrasts?
This collaboration examined how different phonetic training methods improved monolingual English speakers' productions of a Marathi sound contrast. Ultrasound training, in which speakers received visual feedback on their own articulations, led to greater improvement than training with static mid-sagittal diagrams of the vocal tract.
Other projects
Power and phonetic accommodation (LabPhon 2018)
Ultrasounding Tswefap back consonants (PhonLab 2016 Report)
Is perception personal? (ICPhS 2015 paper)
Phonotactic probability (UCB SROP 2013 paper)