Carolyn Quam

Photo captions:

Dr. Quam and Ms. Lauren Clough (co-author and former honors thesis student from the University of Arizona) presenting on “Does Talker Variability Impact Infants’ Discrimination of Easy Versus Difficult Sound Contrasts?” at the Cognitive Development Society meeting in Portland, OR, in 2017.
Dr. Quam presenting on “Implicit and Explicit Learning of Sound Categories by Preschoolers With and Without Specific Language Impairment” at the 2017 Symposium on Research in Child Language.
Dr. Quam enjoying time with family and friends.

I am the director of the Child Language Learning Center (CLLC) and an Associate Professor in the Speech and Hearing Sciences department at Portland State University. I earned my undergraduate degree from Stanford University in 2004 and my Ph.D. from the University of Pennsylvania in 2010, then conducted postdoctoral research at UC San Diego and the University of Arizona before joining the faculty of PSU in 2016. More information on my work at PSU is available on my CV. Originally from the Seattle area, I appreciate my newfound proximity to my hometown and am enjoying discovering all that Portland has to offer. My family and I have been spending a lot of time at Waterfront Park.

Research Interests

My research employs a variety of methods (including eye-tracking, infant habituation, and category-learning/cue-weighting paradigms) with infants, preschoolers, and adults. These investigations all serve a central set of questions: how do children learn the sound structure of a language, or of multiple languages, and how does this learning differ across ages and between typically developing children and children with language disorders? My primary goals are to establish the learning mechanisms at work in typical language development and to apply them to better understand learning difficulties in adult second-language learning and childhood language impairment. My recent R00 grant (from the NIH’s National Institute on Deafness and Other Communication Disorders) explores learning mechanisms that could explain why both adult second-language learners and preschoolers with Developmental Language Disorder struggle with language learning relative to typically developing young children.

My dissertation work with Daniel Swingley at the University of Pennsylvania and my postdoctoral work with LouAnn Gerken and Sarah Creel have led me to a novel perspective on typically developing language learners. Typically developing young children tend to treat a wide variety of dimensions as relevant to a particular language-learning task, even dimensions that are not used in their native language. While this relatively unconstrained attention to dimensions has traditionally been viewed as reflecting phonological immaturity, I argue that it may actually be a crucial asset for young learners, because it allows them to learn new language structure. In my R00 grant, I am applying this perspective to two groups who struggle more with language learning, putatively because they are more rigid about which dimensions they treat as relevant: adult second-language learners and children and adults with language impairment. Protracted phonological immaturity might thus have unexpected advantages for language learning.

Much of my work has revealed a surprisingly slow developmental trajectory for learning to interpret native-language sounds appropriately. The protracted developmental patterns I have found in several studies contrast dramatically with frequent assertions of infant precocity in the literature (see Creel & Quam, 2015, for discussion). My dissertation investigated children’s developing knowledge of how pitch contour does and does not function in English. Using an eye-tracking method, we found that, by 2.5 years of age, children knew English was not a tone language: they did not treat a consistent pitch contour as a relevant dimension of a newly learned word (Quam & Swingley, 2010). It takes children longer, however, to interpret pitch when it is relevant in English. We found that it was not until four years of age that children could attribute pitch patterns to the emotions ‘happy’ vs. ‘sad,’ despite these emotions’ characteristic pitch patterns in speech (Quam & Swingley, 2012). Children took even longer to exploit pitch as a cue to the location of the stressed syllable in words like “BUnny” vs. “baNAna” (Quam & Swingley, 2014), despite pitch being among the dimensions they learn to weight heavily for word learning.

In more recent work, I have further explored infants’ willingness to attend to dimensions that are not used in their native language (Quam, Knight, & Gerken, 2017). Compared with older learners, infants are less constrained by the sound structure of their native language when they are learning to differentiate new words. As a result, they need more “help” than older word learners to zero in on phonologically contrastive dimensions (see also Gerken & Quam, 2016). One way to provide this scaffolding is to present similar-sounding words to infants in multiple voices. When multiple talkers say a word, they introduce high variability on many phonologically irrelevant dimensions, such as pitch contour. Unstructured high variability appears to help infants rule out these irrelevant dimensions and identify the comparatively consistent relevant dimensions that actually differentiate words (e.g., the difference between the initial consonants of novel words like “bim” vs. “pim”). We tested this explanation by presenting infants with structured, rather than unstructured, talker variability. Whereas previous work had found that unstructured talker variability helped infants rule out dimensions that vary with talker voice, like pitch contour, structured variability should instead attract infants’ attention. This should hold even if the structured variability occurs on a non-phonological dimension, because infants are still fairly unconstrained about which acoustic dimensions are potentially relevant to word learning. Specifically, we taught infants words in the presence of a bimodal distribution of talker gender: male talkers said one word (e.g., “bewk”) and female talkers said the other word (e.g., “pewk”). Importantly, the amount of talker variability was the same as before (18 talkers overall), but it was now correlated with the words.
When talker gender was perfectly correlated with the words to be learned, infants appeared to pay attention to this irrelevant information, and it consequently impaired their word learning, even when they were not forced to generalize beyond the trained pairings of gender and word. This suggests that at 14 months, infants are still willing to treat talker gender as potentially relevant to word learning.
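The contrast between the two training designs can be sketched in code. This is purely illustrative, not the actual stimulus-preparation code: it assumes, for example, an even 9/9 split of male and female talkers, and all helper names are my own.

```python
import random

# Illustrative sketch only: assumes an even split of 9 male and 9 female
# talkers (18 total, as in the study); helper names are hypothetical.
TALKERS = [("m%d" % i, "male") for i in range(9)] + \
          [("f%d" % i, "female") for i in range(9)]
WORDS = ("bewk", "pewk")

def structured_assignment(talkers=TALKERS):
    """Structured condition: gender perfectly predicts the word
    (male talkers say 'bewk', female talkers say 'pewk')."""
    return [(tid, "bewk" if gender == "male" else "pewk")
            for tid, gender in talkers]

def unstructured_assignment(talkers=TALKERS, seed=0):
    """Unstructured condition: the same 18 talkers, but each word is
    spoken by a mix of male and female voices, so gender carries no
    information about the word."""
    rng = random.Random(seed)
    assignment = []
    for target_gender in ("male", "female"):
        group = [t for t in talkers if t[1] == target_gender]
        rng.shuffle(group)
        half = len(group) // 2
        assignment += [(tid, WORDS[0]) for tid, _ in group[:half]]
        assignment += [(tid, WORDS[1]) for tid, _ in group[half:]]
    return assignment

structured = structured_assignment()
unstructured = unstructured_assignment()
```

Both conditions use all 18 talkers, so overall variability is matched; only the correlation between talker gender and word differs.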

Adults learning second languages and children with Developmental Language Disorder may be more rigid learners than typically developing children

Building on my prior work on typical development, I propose that immaturity in infants’ explicit-learning abilities causes them to rely on implicit-learning mechanisms, with which they take in the structure of the linguistic input without filtering or distorting it. In contrast, children and adults with language impairment and adult second-language (L2) learners may over-rely on explicit learning, making them more rigid learners. My long-term goal is to understand commonalities and differences between the sources of language-learning difficulties in language impairment vs. adult L2 learning, informing the development of improved language therapy and instruction.

I have recently been investigating whether adults who struggle to learn second languages are more successful at integrating multiple dimensions to learn new sound categories if they have strong procedural-memory skills. I taught adults two artificial sound categories that differed on two acoustic dimensions: vowel quality (second-formant frequency) and pitch. I then evaluated participants’ cue-weighting strategies after training to determine their reliance on the two dimensions for differentiating the categories. We are now relating adults’ learning outcomes to their memory skills and planning to extend these investigations to adults with a history of developmental language disorders. We are also conducting similar experiments with preschoolers with Developmental Language Disorder (DLD), to determine whether they have particular difficulty learning phonetic categories implicitly but not explicitly. Our results thus far have revealed difficulty with both implicit and explicit learning in children with DLD, though pre-test difficulties with sound discrimination could be affecting learning outcomes in our tasks. In future work, we hope to disentangle the relative roles of learning difficulties vs. discrimination difficulties in causing language impairments.
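Cue weighting in tasks like this is often quantified by modeling a listener’s category responses as a function of standardized cue values, with the relative magnitudes of the fitted coefficients indexing reliance on each dimension. The sketch below is my own illustration with simulated data (not the study’s analysis code): it generates responses from a synthetic listener who weights F2 more heavily than pitch, then recovers the cue weights with a simple logistic regression.

```python
import math
import random

def simulate_listener(n=400, f2_weight=3.0, pitch_weight=1.0, seed=1):
    """Generate synthetic categorization trials for a listener who relies
    on F2 (vowel quality) three times as heavily as on pitch.
    Each trial is (f2, pitch, response), with both cues z-scored."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        f2, pitch = rng.gauss(0, 1), rng.gauss(0, 1)
        p = 1.0 / (1.0 + math.exp(-(f2_weight * f2 + pitch_weight * pitch)))
        trials.append((f2, pitch, 1 if rng.random() < p else 0))
    return trials

def fit_cue_weights(trials, lr=0.5, epochs=300):
    """Fit a logistic regression by batch gradient descent; the fitted
    coefficients on the two cues index reliance on each dimension."""
    w_f2 = w_pitch = bias = 0.0
    n = len(trials)
    for _ in range(epochs):
        g_f2 = g_pitch = g_bias = 0.0
        for f2, pitch, resp in trials:
            p = 1.0 / (1.0 + math.exp(-(w_f2 * f2 + w_pitch * pitch + bias)))
            err = p - resp
            g_f2 += err * f2
            g_pitch += err * pitch
            g_bias += err
        w_f2 -= lr * g_f2 / n
        w_pitch -= lr * g_pitch / n
        bias -= lr * g_bias / n
    return w_f2, w_pitch

trials = simulate_listener()
w_f2, w_pitch = fit_cue_weights(trials)
rel_f2 = abs(w_f2) / (abs(w_f2) + abs(w_pitch))  # relative reliance on F2
```

For this simulated listener, the recovered relative F2 weight exceeds the relative pitch weight, mirroring how empirical cue-weighting strategies can be summarized as a single reliance score per dimension.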

Bilingual sound processing

Another line of my research considers language acquisition and sound processing in bilinguals. Proficient bilinguals should, in principle, process acoustic dimensions differently in each language, because the languages’ sound categories differ. Prior evidence from the processing of consonants like /b/ vs. /p/ (which differ between English and Spanish, for example) indicates that bilinguals can match their sound processing to the language context. To my knowledge, my work is the first to extend this question to pitch/tone processing. In several eye-tracking studies, we investigated Mandarin-English bilinguals’ lexical-tone processing. In one study (Quam & Creel, 2017, PLOS-ONE), we asked whether bilingual adults would attend more to pitch/tone information when hearing Mandarin-like words than when hearing English-like words; monolingual English speakers were included as a baseline. Even though tone contours were identical across the word sets, bilinguals’ responses differed. For Mandarin-like words, Mandarin-English bilinguals were more efficient (measured via gaze) and more accurate at using tones to identify words than monolingual English speakers; for English-like words, bilinguals’ efficiency and accuracy did not exceed monolinguals’. This indicates that bilinguals can match their linguistic pitch processing to the language they are hearing. Another collaboration has revealed that Mandarin-English bilingual preschoolers can likewise process lexical tones in accordance with the language context (Singh & Quam, 2016).

While I have found evidence for language-specific tone processing in novel words, a study investigating the same bilingual population’s processing of familiar Mandarin words (Quam & Creel, 2017, JSLHR) has revealed potential limits on bilinguals’ ability to maintain separate phonetic systems for processing pitch in each language. Even though all listeners had learned Mandarin from birth and knew the words well, we found gradient effects of English-vs.-Mandarin dominance on listeners’ ability to exploit tones, but not vowels, in those words. Thus, lengthy experience with English weakened tone sensitivity even for native Mandarin speakers who were using Mandarin regularly.
