To know a language is to use one’s past linguistic experience to form expectations about future linguistic experience. This process is mediated both by speakers’ stored representations of their previous experience and by the online procedures used to process new stimuli in light of those representations. My research thus asks what form these representations take, and how the language processing system integrates them with incoming stimuli to form online expectations during comprehension. For example, when one encounters a highly frequent phrase such as “bread and butter”, is this phrase represented and processed holistically, as a single unit, or compositionally, as a conjunction of nouns? Is the form of this representation influenced by the frequency of the expression (compared to a less frequent expression like “facts and techniques”) or by its frozenness in a given order (compared to a more flexible expression like “boys and girls”/“girls and boys”)? To answer these questions, I combine experimental psycho- and neurolinguistic methods, such as eye-tracking and ERPs, with probabilistic computational modeling.
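The two properties contrasted above, frequency and frozenness of order, can both be estimated from corpus counts. A minimal sketch in Python, using made-up counts (not real corpus data) for the example phrases:

```python
from collections import Counter

# Hypothetical phrase counts standing in for real corpus data
phrases = (
    [("bread", "and", "butter")] * 30      # frequent and frozen
    + [("facts", "and", "techniques")] * 2  # infrequent
    + [("boys", "and", "girls")] * 10       # frequent but flexible...
    + [("girls", "and", "boys")] * 8        # ...appearing in both orders
)

counts = Counter(phrases)  # raw frequency of each "A and B" expression

def order_preference(a, b):
    """Proportion of tokens in the order 'a and b' vs. 'b and a':
    1.0 means fully frozen, values near 0.5 mean freely ordered."""
    forward = counts[(a, "and", b)]
    reverse = counts[(b, "and", a)]
    return forward / (forward + reverse)
```

On these toy counts, `order_preference("bread", "butter")` is 1.0 (frozen), while `order_preference("boys", "girls")` is about 0.56 (flexible), so the two dimensions can be varied independently across stimuli.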
I also ask comparable questions in the domain of music: how is our previous musical experience represented and processed to form expectations for future experience? For example, to what extent does the processing of melodies rely upon language-like hierarchical structure versus surface statistics (e.g., note-to-note transition probabilities)?
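The surface-statistics baseline in that question can be made concrete as a bigram model over notes. A minimal sketch, using made-up melodies rather than real stimuli:

```python
import math
from collections import Counter

# Made-up melodies as note sequences (illustrative, not real stimuli)
melodies = [
    ["C", "D", "E", "C"],
    ["C", "D", "E", "G"],
    ["E", "D", "C", "D"],
]

# Count note-to-note transitions within each melody
transitions = Counter(
    (a, b) for mel in melodies for a, b in zip(mel, mel[1:])
)
# How often each note occurs as a transition context
context_totals = Counter(a for a, _ in transitions.elements())

def transition_prob(a, b):
    """P(next note = b | current note = a), estimated from the melodies."""
    return transitions[(a, b)] / context_totals[a]

def surprisal(melody):
    """Total surprisal (in bits) of a melody under the bigram model;
    higher values mean the melody is less expected."""
    return -sum(
        math.log2(transition_prob(a, b)) for a, b in zip(melody, melody[1:])
    )
```

Under such a model, expectedness is purely local; a hierarchical account would instead condition predictions on higher-level structure, such as phrase boundaries or key.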
2019 LSA Institute
I am teaching Computational Psycholinguistics at the 2019 LSA Linguistics Institute at UC Davis. We're thrilled to welcome linguists from around the world to Davis this summer!
Emily @ CUNY 2017
You can watch my keynote talk from the 2017 CUNY Conference on Human Sentence Processing on YouTube.