Research

Current Research Projects

If You See Something, Say Something

with Taylor R. Hayes, John Henderson, and Fernanda Ferreira

Speakers often use language to describe the world around them, yet very little is known about how the visual and language systems interact when speakers describe real-world environments. To talk about a scene, for example, speakers must convert a static image into a dynamic description of that image using language (the linearization problem; Levelt, 1993). It is possible that the linguistic system takes advantage of visual attention to solve this problem. An innovative approach to scene processing that precisely quantifies visual salience and meaning has demonstrated that attention in scenes is controlled more strongly by semantics than by visual salience, as revealed by eye movements recorded during two offline judgment tasks (Henderson & Hayes, 2017). The goal of this project is to understand the relationship between scene-level information available to the visual system (visual salience, meaning) and speech.

In this study, we applied the Henderson & Hayes (2017) paradigm to investigate seeing for speaking. We showed participants real-world scenes and asked them either to (1) describe the scene or (2) describe the actions that could be carried out in the scene. During the viewing period we recorded both eye movements and speech. We found that scene meaning, not visual salience, guided visual attention in both tasks, and the advantage held even when speakers were disfluent. When speakers described the color of objects in the scene, meaning and salience were equally poor predictors of visual attention. Speech planning also differed across tasks, as measured by speech onset, offset, and utterance duration: speech began later and ended earlier when speakers described actions, suggesting that this task was more difficult. Additional work is underway to understand the interaction of these two systems.

Manuscripts related to this project: Henderson et al. (2018, under review) and Ferreira & Rehrig (in preparation).


Auditory Sentence Comprehension Using Manipulated Acoustic Cues

with Sten Knutsen, Nicolaus Schrum, and Karin Stromswold

Stromswold et al. (under review) and Rehrig et al. (2015) showed that native English speakers consistently produce acoustic cues to progressive active and passive sentence structure. Stromswold et al. (under review) and Stromswold et al. (2016) found that listeners are able to use acoustic information to predict the syntactic structure of an utterance (active or passive) even before hearing the suffix on the verb.

In an eye-tracking comprehension study, we used the visual world paradigm to investigate what happens to listeners' predictions when crucial acoustic information is digitally manipulated in a way that contradicts those predictions. The findings suggest that listeners do not use verb stem vowel duration to infer the syntax of the sentence during online comprehension. A gating study (similar to Stromswold et al., 2016) is underway to test whether listeners use this information differently depending on the task.


Previous Research Projects 

Uncertainty in Semantic Search

with Michelle Cheng, Brian McMahan, and Rahul Shome

Imagine that you have misplaced your favorite coffee mug somewhere in your home and are in dire need of coffee. Where do you search for your mug? How do you use the knowledge you have about the environment to help you search? Do you find yourself searching in the same location over and over? Previous research suggests that searchers rely heavily on prior knowledge when making these search decisions, even when recent experience (e.g., finding your coffee mug in the trash can) contradicts it.

In this experiment we empirically estimated prior knowledge about indoor scenes (a kitchen and a living room) and used these estimates to place common household objects in an interactive scene. Participants then searched for the objects under three probabilistic conditions: object locations were either likely, unlikely, or completely random. We found individual differences in searchers' ability to override their prior knowledge when object locations were unlikely (contradicting prior knowledge). Specifically, some searchers were able to learn from recent experience and discount prior knowledge when it was not useful, while others failed to do so. This suggests that some, but not all, searchers rely more strongly on their prior knowledge than on recent experience, even when doing so hurts search performance.
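To give a concrete sense of the three placement conditions, the minimal sketch below (hypothetical Python, not the study's actual materials; the location names, probabilities, and the flipping scheme for the unlikely condition are illustrative assumptions) shows one way an object's location could be drawn from an empirically estimated prior.

import random

# Hypothetical illustration only: drawing an object's placement from an
# empirically estimated prior over candidate locations in a scene.
def place_object(prior, condition, rng=random):
    locations = list(prior)
    weights = [prior[loc] for loc in locations]
    if condition == "likely":
        # Favor locations in proportion to the estimated prior.
        return rng.choices(locations, weights=weights, k=1)[0]
    if condition == "unlikely":
        # Flip the prior so low-probability locations are favored instead.
        flipped = [max(weights) + min(weights) - w for w in weights]
        return rng.choices(locations, weights=flipped, k=1)[0]
    if condition == "random":
        # Ignore the prior entirely.
        return rng.choice(locations)
    raise ValueError(f"unknown condition: {condition}")

# Toy prior for a coffee mug in a kitchen scene (illustrative numbers).
mug_prior = {"cabinet": 0.50, "counter": 0.30, "dishwasher": 0.15, "trash can": 0.05}
print(place_object(mug_prior, "unlikely"))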

Poster related to this project: Why are the batteries in the microwave?: Use of semantic information under uncertainty in a search task.

Acoustic Correlates of Syntax

with Eleonora Beier, Elizabeth Chalmers, Nicolaus Schrum, and Karin Stromswold

When native English speakers read sentences out loud, they leave subtle cues to what the rest of the sentence will be like, even before they reach the parts of the words that would otherwise give it away. For example, the sentence fragment The pig was kiss is ambiguous: it could continue as an active sentence (The pig was kissing ...) or as a passive one (The pig was kissed ...).

Stromswold et al. (under review) showed that a native English-speaking adult with linguistics training produced the verb stem (e.g., kiss) with a longer duration when it occurred in a passive sentence, and that other native English speakers who listened to sentence fragments were able to use this verb stem lengthening cue to correctly predict how the sentence would end. We extended this work by asking seven naïve participants to read active and passive sentences aloud. All of the participants showed passive verb stem lengthening, though the size of the effect varied across verbs.

Poster related to this project: Robust acoustic cues indicate upcoming structure in active and passive sentences.

Drawing Comparisons between Drawing Performance and Developmental Assessments

with Carine Abraham, Chandni Patel, and Karin Stromswold

For over a century, researchers have used human figure drawing tasks to assess children's intelligence. Among these are variations of the Draw-A-Person task, such as the DAP:IQ, in which children are asked to draw pictures of themselves. The drawings are then scored based on the presence or absence of certain key features; for example, an eye drawn with eyelashes earns more points than one without. The DAP:IQ has been validated against standardized IQ tests, but critics argue that the relationships between DAP:IQ scores and IQ scores derived from other methods are weak.

In this study we asked what skills the DAP:IQ task taps and what other factors might contribute to performance on human figure drawing tasks. Four- and five-year-old children completed the DAP:IQ along with a battery of developmental assessments. Our findings suggest that the DAP:IQ primarily taps fine motor skills, not the cognitive ability one would expect an intelligence task to measure. Previous studies may have found an illusory relationship between intelligence and DAP:IQ performance because drawing ability and intelligence may develop in parallel. Had assessments of fine motor skills been included in those comparisons, the apparent relationship between intelligence and drawing ability would most likely have fallen apart.

Publication related to this project: What does the DAP:IQ measure?: Drawing comparisons between drawing performance and developmental assessments.

Categorical and Noncategorical Perception of Motion

with Padraig O'Seaghdha and Barbara Malt

When watching a person use a treadmill at incrementally increasing speeds, people switch abruptly from labeling the motion as walking to labeling it as running, and they make that switch at roughly the same point. This categorical switch from one label to the other occurs even for speakers of different languages using the equivalent terms in their respective languages (Malt et al., 2008).

To test whether the speed of the motion or its biomechanics (the way the limbs move through space while walking or running) is responsible for the abrupt naming transition, we showed participants videos of a person using a treadmill or an elliptical trainer at equivalent speeds and asked them to label the motion as walking or running. When the person was using a treadmill, the labeling transition was categorical, as previous studies had found. However, when the person was using an elliptical trainer, the transition in labeling was gradual and skewed toward the label running. This suggests that the biomechanics of the gait transition, not the speed of the motion, are responsible for the categorical naming pattern.

Awards and grants related to this project: Williams Prize for Excellence in Writing, Lehigh University College of Arts and Sciences Undergraduate Research Grant, Lehigh University Forum Student Research Grant