Research Areas

ONGOING RESEARCH

Sound Events

How is high-level auditory information about our environment organized? There is a strong theoretical basis for connecting auditory perception with events rather than objects. It is a "tree falling in the forest" that is heard, not just the tree. Sound is generated by the physical interactions of objects, surfaces, and substances – in other words, events. The sound waveform contains a great deal of potential information about the properties of its sources. However, no single acoustic feature specifies a particular object or action. Information about sound sources is complex and time-varying, and it is not known to what degree or in what form it is exploited by human listeners. My research examines the human ability to understand, through sound, what events are happening in the environment. Perceptual experiments address whether there is an auditory organization of events that can be used to predict psychological phenomena, and whether audition plays a significant role in the perception of multi-modal events. I use sound synthesis to address some of these questions. This basic research (some of which was funded by the NSF) relates psychological performance to acoustic properties and high-level auditory information. The results have the potential to enhance processing for hearing aids and to improve auditory displays, both for virtual reality and for visually impaired computer users. I believe that the immersive and interactive human/machine interfaces of the future will need advances in auditory interfaces as well as in the interaction between audition and vision. Click here if interested in participating.


Using Perception in VR to Teach about Animal Conservation and Climate

AnimalPOV: A Climate Challenge is a first-person VR experience that simulates an animal's perceptual world as it navigates challenges created by habitat destruction. To create the game, I engaged two teams of students in CMU's Entertainment Technology Center and collaborated with Diane Turnshek (Physics), ETC faculty, and the Center for Transformational Play. The game is used for outreach at local events and educational organizations.


Hearing Impairment, Cochlear Implants and Environmental Sounds

Collaborative research is establishing how environmental sound perception in people with hearing loss, with and without cochlear implants, compares to that of listeners with normal hearing. This work was funded by the NIH and is led by Dr. Harris at the Medical College of Wisconsin.


Effects of Environmental Noise

Collaborative research is measuring the effects of traffic noise in schools and hospitals. This work was funded by the NSF and is led by Dr. Azad at the University of Florida.


Auditory-Visual Interactions 

Current studies, conducted with Prof. Rosenberg in the CMU School of Design, investigate the cognitive parameters that affect the integration of auditory and visual events. For example, sometimes visual and auditory stimuli are simultaneous even though they don't arise from the same event: how do we figure this out? Conversely, sometimes the sights and sounds do belong together even though they are not strictly simultaneous: how do we know to glue them together across space and time, and what are the limits? How powerful a cue is motion, whether of the stimuli, the observer, or both?


Misophonia: Unbearable Sounds

Our work investigated ways in which multimodal input connected with the sound sources can alter the response to unwanted sounds. We found that pupil diameter can be affected by visual input and by misophonia. This research was supported by a grant from the Misophonia Research Fund.


Classification of Environmental Sounds 

I collaborate on research that uses human judgements and sound synthesis to further our understanding of human and machine sound categories. This includes the study of human categorization using real and synthesized sounds, and it has applications for automatic sound event classification and machine learning.


Machine Learning and Generative AI

I co-organized a special session, "Synergy between human and machine approaches to sound/scene recognition and processing," at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), June 2023. https://arxiv.org/abs/2302.09719

I also co-organized a Foley sound synthesis challenge for the Detection and Classification of Acoustic Scenes and Events (DCASE) Challenge in 2023 and 2024. In 2024, the challenge was expanded to text-to-audio prompts for sound scenes (DCASE Challenge 2024).


A collaboration with Prof. Donahue in the CMU Department of Computer Science to create perceptually motivated sound morphs is underway in AY 2024-25. This work is funded by the SONY Research Award Program.


PREVIOUS RESEARCH

IMPROVING SPATIAL NAVIGATION USING SOUND

We collaborated with professors in Electrical Engineering at Carnegie Mellon with the aim of improving human spatial navigation using sound. Echoes provide important acoustic information about the environment that is extremely effective for the navigation of certain animals (e.g., bats and dolphins). Because echoes are complex, humans do not normally use echolocation; however, echo information is in fact utilized by some blind people. We harnessed technology to make echo information accessible to blind users and to help them learn to use echoes. Our approach was to offer a free smartphone game that gives people experience navigating a virtual maze using echoes. In the Auditory Lab, we focused on characterizing human sensitivity to echo information and on how this knowledge can be used to design and improve training programs and devices.

Collaborators: Prof. Pulkit Grover and Prof. Bruno Sinopoli, Electrical and Computer Engineering, CMU.

Funding: Google, CMU undergraduate research training award.

We released the training game as a smartphone app for both Android and iPhone.


AUDIO-MOTOR PRIMING

We explored a new form of auditory-motor priming. Motor priming exists if an action is performed more rapidly after the presentation of facilitating cues than after the presentation of interfering cues. We hypothesized that environmental sounds could be used as cues to create motor priming. To create facilitation, we devised a congruent priming sound that was similar to the sound that would be made by the gesture about to be performed. To create interference, we devised an incongruent sound that would not normally be made by that gesture. Using this paradigm, we found evidence of auditory-motor priming between environmental sounds and simple gestures across a range of conditions.

NEURAL BASIS OF SOUND IDENTIFICATION

We investigated the cognitive neuroscience of the auditory system's ability to identify the causes of sounds. The experimental question we addressed was which neural networks are preferentially activated when subjects shift the focus of their attention toward different aspects of a sound's source. This research was funded by a Rothberg Award.


HEARING AIDS

We tested various hearing aid algorithms designed to reduce noise and enhance speech intelligibility, comparing combinations of pre-processing strategies to determine which ones provide the most benefit to users. Both normal-hearing and hearing-impaired listeners tried to understand speech under quiet and noisy conditions in the laboratory. The goal was to inform the development of future hearing aids. This research was funded by the Rhode Island Research Alliance's Science and Technology Advisory Council.