Faculty and Labs

Samira Anderson


The Hearing Brain Lab investigates how the brain processes speech, using behavioral and electrophysiological testing. Through these studies, we aim to answer the following broad questions:

1. Why do older listeners have difficulty understanding speech in challenging listening environments?

2. Can auditory training improve the brain’s ability to process speech?

3. Can objective assessment methods be used to maximize performance with hearing aids and cochlear implants?

Learn more here.

Luke Butler


Research in the Cognition & Development Lab focuses on the development of children’s empirical reasoning. Students in the Cognition & Development Lab are directly involved in all aspects of the research process, including stimuli creation, data collection and coding, and reading scientific papers.

Learn more here.

Catherine Carr

The brain uses interaural time differences (ITDs) between the two ears to localize sound. The Carr lab studies the neural circuits underlying the computation of ITD in barn owls and other reptiles. In barn owls, we have shown that ITDs are translated into location in space in the brainstem. Detection of these time differences depends upon two mechanisms of general significance to neurobiology: delay lines and coincidence detection. Incoming axons form delay lines to create maps of ITD in nucleus laminaris. Their postsynaptic targets act as coincidence detectors and fire maximally when the interaural time difference is equal but opposite to the delay imposed by the afferent axons. Similar principles guide sound localization circuits in other reptiles.

Current research focuses on models of the delay line-coincidence detector circuit, on the assembly of the map of sound localization during development, and on how such circuits evolve. All projects develop from initial behavioral observations into systems, cellular, and molecular levels of analysis. Learn more here.



Melissa Caras

Practice can improve our ability to detect, discriminate, and identify sounds. The goal of the Caras lab is to understand how this transformation, from auditory novice to auditory expert, is implemented in the brain. Our primary objectives are to (1) reveal the neural circuits that support the emergence and maintenance of auditory expertise, (2) determine how the organization and function of these circuits change with age or as a result of hearing loss, and (3) use this information to develop or optimize approaches for improving hearing in both health and disease. Learn more here.

Carol Espy-Wilson

Dr. Espy-Wilson's research is in speech communication. She combines knowledge of digital signal processing, speech science, linguistics, acoustic phonetics, and machine learning to conduct interdisciplinary research in several speech-related areas, including speech and speaker recognition, speech production, speech enhancement, and single-channel speech segregation. She also analyzes speech as a behavioral signal for emotion recognition, sentiment analysis, and the detection and monitoring of mental health. Learn more here.

Yasmeen Faroqi-Shah

We focus on understanding the neural underpinnings of language, with a particular focus on language breakdown following brain injury (aphasia) and bilingualism. This includes improving assessment and speech-therapy outcomes for persons with aphasia, examining the relationship between language and cognitive abilities in persons with aphasia, training-induced neural plasticity, and language in bilingual speakers, particularly with reference to language mixing and word retrieval. This research uses a variety of experimental techniques such as language sample analysis, behavioral response times, and neuroimaging. Learn more here and here.

Nik Francis


How do we listen? Asked another way, which neural mechanisms underlie how we perceive, remember, and attend to sound?  By combining methods in brain imaging, electrophysiology, animal behavior, and data analysis, the Francis lab aims to clarify the neural mechanisms of listening and advance our understanding of how brain function relates to behavior. Learn more here.

Sandra Gordon-Salant

Speech perception difficulties of older adults in complex listening situations are ubiquitous.  The focus of the Hearing Research Lab is to understand the relative contributions of peripheral and central auditory abilities, cognitive capacity, and stimulus characteristics to age-related speech understanding problems.  Our recent studies investigate the benefit of auditory-cognitive training on older listeners' ability to understand challenging speech signals, including rapid speech, foreign-accented speech, and speech in noise. Learn more here.

Matt Goupell

Our lab studies how to improve hearing with a bionic auditory prosthesis called a cochlear implant. We investigate practical problems, such as how to improve speech understanding in noise using spatial hearing. We aim to understand hearing deficits with cochlear implants caused by the device, the initial peripheral encoding, or central neural processing. Students are involved in a range of activities, including human subjects testing (audiometric hearing evaluations; behavioral, cognitive, and electrophysiological testing) and data analysis. Opportunities for computational modeling are available to those interested. Learn more here.


Eric Hoover

In the Hearing Technology Lab, our goal is to improve the diagnosis and treatment of hearing loss by evaluating how hearing healthcare can be provided more efficiently and more consistently with the values of the patient population. Projects in our lab use qualitative and quantitative methods, including patient interviews, content analysis, and behavioral hearing assessment, and we focus on adults. Learn more here.



Yi Ting Huang

We study how people speak and listen during communication and how this varies with their experiences with languages, social groups, and topics. Three projects that illustrate these themes include: 1) how family roles, routines, and responsibilities contribute to class differences in language use, 2) how large language models like ChatGPT assess a person’s expertise based on how they talk, and 3) how video-calling technology can support conversational dynamics between neurotypical and autistic adults. Learn more here.

Bill Idsardi

Bill Idsardi's research is on speech sound systems and how they function as a "mental address system" for words. He and his students do analyses on various languages and investigate speech perception using behavioral experiments and brain imaging (MEG).

Veronica Kang

We are interested in studying how language and social communication interventions can be implemented by natural agents such as caregivers, siblings, and peers using play, daily routines, and other child-preferred activities. We study how evidence-based practices can be culturally adapted and disseminated within immigrant communities. Current projects include (a) the Program for Meaningful Interaction and Social Engagement for young children and youth (PROMISE), a summer program for Asian American Autistic children and youth, and (b) Korean Autism Focused Intervention Resources & Modules (K-AFIRM), a culturally adapted caregiver training in Naturalistic Developmental Behavioral Intervention (NDBI) strategies targeting early social communication and language skills for Korean toddlers and preschoolers with a recent diagnosis of autism or those on the waitlist for diagnosis. Learn more here.


Rochelle Newman

We are interested in better understanding how infants acquire spoken language, and how perception of language changes with development. Learn more here. We are particularly interested in how listeners deal with "difficult" listening situations, such as when there is noise in the background, or when the speaker has an unfamiliar accent.  In addition to this primary research, our lab also examines how concussion impacts language (learn more here), and how our canine companions understand spoken commands. Learn more here.


Jared Novick

We study how adults (and sometimes children) process and understand language in real time, as it unfolds moment by moment. We aim to answer these sorts of questions, using a range of methods such as eye-tracking and EEG: (1) How do non-language cognitive abilities like memory and attention contribute to how we interpret language input? (2) How do bilinguals represent two languages in one mind, and how do bilingual behaviors (like hearing a code-switch) affect attention to and memory for information? (3) How do normal differences between people (like a good memory vs. a better memory), and even differences within people (like whether someone is feeling alert or not), impact how accurately they perceive and comprehend language? Learn more here.

José Ortiz

We study issues related to the identification of communication disorders in bilingual children. Part of our work focuses on the measurement of disability identification trends for heterogeneous groups of bilingual children in US schools. We also study how technology-enhanced assessment tools can be used to assist in the identification of communication disorders. Our goal is to conduct research that will have a meaningful impact on clinical practice.

Courtney Overton

Courtney Overton is one of our REACH career mentors; she is an SLP and Assistant Clinical Professor in the Dept. of Hearing and Speech Sciences and serves as the Director of the Language-Learning Early Advantage Program (LEAP). She is also the Founder & CEO of Speech of Cake, a private practice in Alexandria, Virginia that specializes in treating speech sound disorders and dyslexia. Her dissertation research focused on text selection practices for secondary students with learning disabilities and the importance of representational texts.

Danielle Powell

Our research is at the intersection of audiology and hearing care, gerontology and older adults, dementia, caregiving, health services, and public health. This research provides experience with data already collected (i.e., epidemiologic studies, electronic medical record data) to understand how hearing impacts overall health and function at a population level, or through structured interviews with target groups of people. Our goal is to provide a person-centered perspective to guide research and implement findings at a public health level through interventions or programs that strive to improve our understanding of how to provide hearing care that meets the specific needs of more vulnerable groups. Learn more here.

Nan Ratner

We study typical and disordered child speech/language development using computer-assisted sample analysis. Our current work explores recovery from late talking and stuttering in toddlers and preschoolers, the role of parent input in communication development, and how to create clinical assessments that remove bias when working with families who speak diverse varieties of English. Learn more here.

Rachel Romeo

We investigate how children’s early experiences—both favorable and adverse—influence their neural, cognitive, and academic development. Most of our work is focused on understanding individual differences in typical/atypical language and literacy development, but we are also interested in how language relates to executive functioning, socioemotional cognition, and mental health across development. We use a variety of methods including cognitive assessments, brain imaging (fMRI and fNIRS), and observational measures of real-world language environments. RAs assist with collecting behavioral and neural data from children and families, processing/analyzing data, and sharing findings and best practices with our local community partners. Learn more here.

Jonathan Simon

Our interdisciplinary lab studies how the brain processes speech, and how different aspects of speech (sounds, language, meaning) are processed in different parts of the brain. We use magnetoencephalography (MEG) to silently and non-invasively scan subjects while they listen to people talking. We are especially interested in how the brain processes speech in noise, especially when other people are talking at the same time. Learn more here.



Bob Slevc

Our lab studies how we process the complex sound and structure characteristic of both language and music. This includes studies focusing on language processing (including talking, i.e., word and sentence production), studies focusing on sound and music perception, and studies directly comparing linguistic and musical processing. We investigate these questions with behavioral experiments, cognitive neuroscientific methods (mostly EEG and MEG), and neuropsychological studies (investigations of linguistic and musical perception in individuals with brain damage). Learn more here.

Ana Taboada Barber

At the READ Lab we study how bilingual and multilingual children engage their growing minds in reading tasks. We focus on literacy development in linguistically diverse students from cognitive and motivational perspectives. From a cognitive perspective, we study how linguistically diverse students’ literacy development is affected by language and domain-general cognitive variables such as Executive Function (EF) skills. From a motivational perspective, we study how variables that affect reading engagement in the general population (such as autonomy support or self-efficacy) exert their influence on the reading development of students who speak more than one language. Learn more here.

Eliza Thompson


Eliza Thompson is one of our REACH career mentors; she is an SLP and Assistant Clinical Professor in the Dept. of Hearing and Speech Sciences. She has a longstanding interest in the processes of communication, with a focus on child language development and emergent literacy intervention.