Research Areas

ONGOING RESEARCH

Sound Events

How is high-level auditory information about our environment organized? There is a strong theoretical basis for connecting auditory perception with events rather than objects. It is a "tree falling in the forest" that is heard, not just the tree. Sound is generated by the physical interactions of objects, surfaces, and substances – in other words, by events. The sound waveform contains a great deal of potential information about its source's properties. However, no single acoustic feature specifies a particular object or action. Information about sound sources is complex and time-varying, and it is not known to what degree, or in what form, it is exploited by human listeners.

My research examines the human ability to understand, through sound, what events are happening in the environment. Perceptual experiments address whether there is an auditory organization of events that can be used to predict psychological phenomena such as prototypes or exaggerations, and whether audition plays a significant role in the perception of multi-modal events. This basic research (some of which was funded by the NSF) relates psychological performance to acoustic properties and high-level auditory information. The results have the potential to enhance processing for hearing aids and to improve auditory displays, both for virtual reality and for visually impaired computer users. I believe that immersive and interactive human/machine interfaces of the future will need to advance auditory interfaces as well as address the interaction between audition and vision.


Unwanted Sounds

Ongoing work involves the perception of sound categories and the effects of unwanted sounds. Some of this research, on the auditory disorder misophonia, is funded by the REAM Foundation. Collaborative research on the effects of traffic noise has been funded by the NSF.


Sound Classification

Collaborative work is developing applications to improve the performance of a machine learning system for sound event classification.
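As a loose illustration of what a sound event classification pipeline involves, the sketch below trains an off-the-shelf classifier on fixed-length per-clip feature vectors and then labels a new clip. The feature values, the two event classes, and the use of scikit-learn's RandomForestClassifier are assumptions chosen for illustration, not the collaborators' actual system.

# Illustrative sketch only: a generic sound event classifier built from
# fixed-length feature vectors (e.g., averaged spectral features per clip).
# All data below are synthetic placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Placeholder "feature vectors" for clips of two hypothetical event classes.
glass_breaking = rng.normal(loc=1.0, scale=0.3, size=(20, 8))
door_slamming = rng.normal(loc=-1.0, scale=0.3, size=(20, 8))

X = np.vstack([glass_breaking, door_slamming])
y = np.array(["glass_breaking"] * 20 + ["door_slamming"] * 20)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify the feature vector of a new, unseen clip.
new_clip = rng.normal(loc=1.0, scale=0.3, size=(1, 8))
print(clf.predict(new_clip))  # likely ["glass_breaking"]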


Hearing Impairment

Collaborative research is establishing a baseline for action identification in everyday sounds in patients undergoing cochlear implant surgery.


Auditory-Visual Interactions

Current studies are investigating the cognitive parameters that affect the integration of auditory and visual events. For example, sometimes visual and auditory stimuli are simultaneous even though they don't arise from the same event: how do we figure this out? Conversely, sometimes the sights and sounds do belong together even though they are not strictly simultaneous: how do we know to glue them together across time, and what are the limits?
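As a toy illustration of the simultaneity question, the sketch below applies a simple temporal-binding-window rule to decide whether a sight and a sound should be bound together. The should_bind helper, the window widths, and the asymmetry toward audio-lagging pairs are hypothetical values for illustration, not findings from these studies.

# Illustrative sketch only: a toy "temporal binding window" rule for deciding
# whether an auditory and a visual event should be perceptually bound.
# The window values below are invented for illustration.

def should_bind(visual_onset_ms: float, audio_onset_ms: float,
                window_audio_lead_ms: float = 50.0,
                window_audio_lag_ms: float = 150.0) -> bool:
    """Return True if the audio-visual asynchrony falls inside the window.

    Positive lag means the sound arrives after the sight, which observers
    typically tolerate more readily (light travels faster than sound).
    """
    lag = audio_onset_ms - visual_onset_ms
    return -window_audio_lead_ms <= lag <= window_audio_lag_ms


if __name__ == "__main__":
    # A hand clap seen at t=0 ms and heard at t=80 ms: bound together.
    print(should_bind(0.0, 80.0))   # True
    # A sound arriving 400 ms late is unlikely to be fused with the sight.
    print(should_bind(0.0, 400.0))  # False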


PREVIOUS RESEARCH

IMPROVING SPATIAL NAVIGATION USING SOUND

We collaborated with professors in Electrical and Computer Engineering at Carnegie Mellon with the aim of improving human spatial navigation using sound. Echoes provide important acoustic information about the environment and are extremely effective for the navigation of certain animals (e.g., bats and dolphins). Because echoes are complex, humans do not normally use echolocation; however, echo information is in fact exploited by some blind people. We harnessed technology to make echo information accessible to blind users and to help them learn to use echoes. Our approach was to offer a free smartphone game that gives people experience navigating a virtual maze by using echoes. In the Auditory Lab we focused on measuring human sensitivity to echo information and on how those findings can inform the design and improvement of training programs and devices.
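To make concrete the kind of information an echo carries, the sketch below computes the round-trip delay of an echo from a reflecting surface at a given distance. The echo_delay_ms helper and the 343 m/s speed of sound in air are illustrative assumptions, not code from the smartphone game.

# Minimal sketch of the physics a virtual echo maze has to render:
# the round-trip delay of an echo encodes the distance to a reflecting wall.

SPEED_OF_SOUND_M_PER_S = 343.0  # approximate speed of sound in air at ~20 C

def echo_delay_ms(distance_m: float) -> float:
    """Round-trip delay, in milliseconds, for an echo off a wall distance_m away."""
    return 2.0 * distance_m / SPEED_OF_SOUND_M_PER_S * 1000.0


if __name__ == "__main__":
    for d in (0.5, 2.0, 5.0):
        print(f"wall at {d:4.1f} m -> echo after {echo_delay_ms(d):5.1f} ms")
    # wall at  0.5 m -> echo after   2.9 ms
    # wall at  2.0 m -> echo after  11.7 ms
    # wall at  5.0 m -> echo after  29.2 ms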

Collaborators: Prof. Pulkit Grover and Prof. Bruno Sinopoli, Electrical and Computer Engineering, CMU.

Funding: Google, CMU undergraduate research training award.

We made the training game available as a smartphone app.

Android Users:

Android users should email The Auditory Lab at CMU to receive the Android app by email. Email: info@auditorylab.org

iPhone Users:

iPhone users should email The Auditory Lab at CMU for details. You will need to provide a valid Apple ID to receive an invitation to install the app on your phone. Email: info@auditorylab.org



AUDIO-MOTOR PRIMING

We explored a new form of auditory-motor priming. Motor priming exists if an action is performed more rapidly after the presentation of facilitating cues than after the presentation of interfering cues. We hypothesized that environmental sounds could be used as cues to create motor priming. To create facilitation, we devised a congruent priming sound similar to the sound that would be made by the gesture about to be performed. To create interference, we devised an incongruent sound that would not normally be made by that gesture. Using this paradigm, we found evidence of auditory-motor priming between environmental sounds and simple gestures, and the effect held over a range of conditions.
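The sketch below shows how a priming effect in a paradigm like this is typically scored: as the difference in mean reaction time between incongruent and congruent trials. The reaction times and the sound/gesture pairings in the comments are invented for illustration and are not data from the study.

# Toy sketch of scoring a motor-priming effect: priming is present if
# responses are faster after congruent (facilitating) sound cues than
# after incongruent (interfering) ones. All numbers are made up.

from statistics import mean

def priming_effect_ms(congruent_rts_ms, incongruent_rts_ms) -> float:
    """Positive values indicate facilitation by the congruent sound cue."""
    return mean(incongruent_rts_ms) - mean(congruent_rts_ms)


if __name__ == "__main__":
    congruent = [412, 398, 431, 405]    # e.g., a crumpling sound before a crumpling gesture
    incongruent = [447, 466, 439, 452]  # e.g., a tapping sound before a crumpling gesture
    print(f"priming effect: {priming_effect_ms(congruent, incongruent):.1f} ms")  # 39.5 ms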

NEURAL BASIS OF SOUND IDENTIFICATION

We investigated the cognitive neuroscience of the auditory system's ability to identify the causes of sounds. The experimental question we addressed was which neural networks are preferentially activated when subjects shift the focus of their attention toward different aspects of a sound's source. This research was funded by a Rothberg Award.

HEARING AIDS

We tested various hearing aid algorithms designed to reduce noise and enhance speech intelligibility. This research was funded by the Rhode Island Research Alliance's Science and Technology Advisory Council. We tested combinations of pre-processing strategies to determine which ones provided the most benefit to users. Both normal-hearing and hearing-impaired listeners tried to understand speech under quiet and noisy conditions in the laboratory. The goal was to inform the development of future hearing aids.
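One routine step in building the noisy listening conditions for such a test is mixing a speech recording with noise at a controlled signal-to-noise ratio. The sketch below shows a generic way to do this; the mix_at_snr helper and the stand-in signals are illustrative assumptions rather than the lab's actual materials or code.

# Illustrative sketch only: mix a speech signal with noise at a target SNR
# to create a "noisy" listening condition. The stand-in signals below are
# synthetic placeholders, not the lab's test materials.

import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the noise so the speech-to-noise power ratio equals snr_db, then mix."""
    noise = noise[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = speech_power / (10.0 ** (snr_db / 10.0))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return speech + scaled_noise


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))  # stand-in "speech"
    noise = rng.standard_normal(16000)                           # stand-in background noise
    noisy = mix_at_snr(speech, noise, snr_db=5.0)
    print(noisy.shape)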