Research

The lab is broadly interested in how real-time processing of language turns into cognitive representations over time (learning), how learned representations in turn shape real-time processing, and what mechanisms can characterize these complex processes. People use very simple streams of information to build meaning, such as word co-occurrence (e.g., "dog" is often heard close in time to "cat", so the two may become associated over time). We also use various "high-level" contexts such as communicative goals, social constraints, and overt behavior from ourselves and others. How do we integrate such different kinds of information to build meaning in the moment?
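
As a rough illustration of the co-occurrence idea (a toy sketch, not a model the lab uses), simply counting how often words appear near one another across utterances is enough to make "dog" and "cat" statistically associated:

```python
from collections import Counter

def cooccurrence_counts(utterances, window=2):
    """Count how often pairs of words occur within `window` positions
    of each other across a list of tokenized utterances."""
    counts = Counter()
    for tokens in utterances:
        for i, w1 in enumerate(tokens):
            for w2 in tokens[i + 1 : i + 1 + window]:
                counts[tuple(sorted((w1, w2)))] += 1
    return counts

utterances = [
    ["the", "dog", "chased", "the", "cat"],
    ["a", "cat", "and", "a", "dog", "played"],
    ["the", "dog", "barked"],
]
counts = cooccurrence_counts(utterances, window=3)
print(counts[("cat", "dog")])  # 2: "dog" and "cat" keep turning up together
```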

How much is salience and how much is language?

We are using language as a context that mediates visual salience. It is well established that viewing behavior on natural visual scenes reflects low-level visual characteristics such as orientation and contrast, category-level information, and task-driven differences in eye movements. However, for realistic, complex visual stimuli there is currently no standard way to determine how much low-level salience contributes to the results. We have developed a computational tool to quantify this contribution, and I would be happy to share it on request (Huette, forthcoming).
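
As a hedged, purely illustrative sketch of the general logic (this is not the tool described above), one way to ask how much low-level salience accounts for viewing behavior is to compare a salience map's predictions against where people actually fixate:

```python
import numpy as np

def salience_fixation_correlation(salience_map, fixations, shape):
    """Correlate a low-level salience map with an empirical fixation
    density map built from (row, col) fixation coordinates."""
    density = np.zeros(shape)
    for r, c in fixations:
        density[r, c] += 1
    # Higher correlation = low-level salience accounts for more of the viewing.
    return np.corrcoef(salience_map.ravel(), density.ravel())[0, 1]

rng = np.random.default_rng(0)
shape = (48, 64)
salience = rng.random(shape)  # stand-in salience map for one scene
fixations = [(rng.integers(48), rng.integers(64)) for _ in range(200)]  # fake fixations
print(salience_fixation_correlation(salience, fixations, shape))
```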

The role of visuospatial cues in categorizing prepositions

In this project, led by Master's graduate Ariel Mathis, we examined the dominance of pre-learned prepositional distinctions native to English (on/off) as well as non-native distinctions made in other languages (e.g., the German distinction between horizontally on top of and vertically on top of). Using a statistical learning paradigm, participants in one condition received "messy" input, in which the non-words labeled pictures with less than perfect accuracy, while participants in the other condition received perfect training, in which each picture was labeled with only one non-word.
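
Below is a minimal, purely illustrative sketch of the two training regimes; the non-words ("blicket", "dax") and the 75% labeling accuracy are placeholders, not the materials or parameters used in the study:

```python
import random

PICTURES = ["horizontal_support", "vertical_support"]
NONWORDS = {"horizontal_support": "blicket", "vertical_support": "dax"}  # placeholder non-words

def make_trials(n_trials, condition, label_accuracy=0.75, seed=0):
    """Build (picture, non-word) training pairs.
    'perfect': every picture is always labeled with its own non-word.
    'messy'  : the label matches the picture only `label_accuracy` of the time."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        picture = rng.choice(PICTURES)
        if condition == "perfect" or rng.random() < label_accuracy:
            label = NONWORDS[picture]
        else:
            other = next(p for p in PICTURES if p != picture)
            label = NONWORDS[other]
        trials.append((picture, label))
    return trials

print(make_trials(5, "messy"))    # occasionally mislabeled
print(make_trials(5, "perfect"))  # always labeled consistently
```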