Here are brief summaries of some of the research projects we're pursuing:


Acoustic context effects

Each sound has its own acoustic properties (its frequencies, amplitude, and duration), but these properties alone do not always produce clear perception. Fortunately, perception also draws on the acoustic properties of the sounds heard before and after a given sound. Whenever acoustic properties change across sounds, the auditory system magnifies these differences to help us hear them. These are known as acoustic context effects. We study how changes in sound frequencies (spectral contrast effects, auditory enhancement effects) and changes in sound timing properties (temporal contrast effects, or speaking rate normalization) affect our hearing. We have shown that these context effects are very general, shaping our perception of a wide range of speech and nonspeech (musical instrument) sounds, from very small to very large changes, and in both healthy and impaired hearing. We continue to explore how these context effects are related to each other and where they arise in the auditory system, to better understand how, when, and by how much they shape everyday speech perception.


Overcoming variability

Speech sounds have a tremendous amount of acoustic variability, posing a constant challenge for speech perception. For example, recognizing words spoken by several different talkers is more difficult than recognizing the same words spoken by a single talker; adjusting to each talker's voice is known as talker adaptation. We have shown that this perceptual difficulty decreases as the amount of acoustic variability across talkers’ voices decreases. We have revealed the same pattern in perception of musical instrument sounds: hearing tones played by several different instruments is more difficult than hearing tones played by a single instrument, and, as with speech, the difficulty decreases as acoustic variability among the instruments decreases. These findings reveal that overcoming acoustic variability is a general process in auditory perception.


Expertise in speech perception and music perception 

We are experts at hearing the sounds of our native language(s), in large part because we hear them more than any other sounds. How did our perceptual abilities improve as this expertise grew? This is difficult to study in adults because their sensory systems were growing and maturing at the same time as they were gathering this linguistic experience. Instead, we can ask these questions by investigating musical background, which can accumulate while adult sensory systems are already mature. Experienced musicians have heard and practiced music far more than less-experienced musicians or nonmusicians. Many studies have shown that musical training can improve pitch perception, but there is great debate about which other perceptual abilities might improve along with it (the so-called “musician advantages”). So far, we have learned that musical background lessens the challenges posed by acoustic variability and improves perception of relatively rare sound properties in musical instruments, but does not improve or alter spectral context effects in musical instrument perception. We are also interested in how listeners use lower-level (acoustic) and higher-level (linguistic, particularly semantic) context when perceiving lower-level sounds (individual speech sounds, individual musical tones) and higher-level sounds (word pairs and sentences, harmonies and melodies), comparing expertise effects within and across these domains.


Natural sound statistics and perception 

Speech and music sounds are incredibly complex, varying along many different acoustic properties. This variability is not random, however; these sounds possess many kinds of predictability and structure. Which of these matter for our perception? We could run endless analyses of sound properties, but our interest lies in the properties that matter for perception. Recently, we learned when and how often sound statistics produce spectral context effects in speech perception and music perception. We discovered important similarities and differences in perception when we used carefully constructed “lab” sounds in experiments versus more naturalistic “everyday” sounds. These approaches bring our experiments much closer to modeling everyday listening conditions, supporting the important role that these context effects play in our daily listening.


Perception amidst hearing impairment

Millions of individuals have hearing loss, which profoundly affects quality of life. Researchers continue to explore ways to better understand and ultimately lessen the consequences of hearing impairment. In our research, we explore how listeners with impaired or atypical hearing use surrounding sounds to aid their perception of speech and music. We use acoustic simulations of hearing loss or cochlear implant processing to reveal how perception changes for healthy-hearing listeners; we then test listeners with impaired hearing directly to reveal similarities and differences across healthy and impaired auditory systems. Our hope is that this research will inform new signal processing techniques for digital hearing aids and cochlear implants, improving how these listeners use surrounding sounds to aid their perception.