The workshop is designed as a hands-on, non-traditional research conference where the focus is on open forums and live discussions rather than research talks. The meeting is held over four and a half days, during which interdisciplinary teams of senior researchers, younger trainees, and students work together to pilot new ideas, exchange approaches, and compare methodologies around themes relevant to auditory cognition.
The 2022 workshop is the first in-person edition of the CogHear workshops after a series of online discussions and reading groups held during the COVID pandemic. The workshop aims to promote interdisciplinary collaboration and communication among sensory and cognitive researchers focused on understanding brain function. The meeting provides a forum for knowledge gathering, idea sharing, and research dissemination, as well as for building bridges between basic science and engineering disciplines.
One of the discussion topics was the possibility of developing an open community challenge for decoding EEG/MEG data in response to an attended audio stimulus. The goal of this activity is to advance the growing body of research in which auditory attention is decoded from the brain, with potential applications in smart hearing aids. An increasingly popular method in this field is to relate a person's electroencephalogram (EEG) to a feature of the natural speech signal they were listening to. This is typically done using linear regression, either to predict the EEG signal from the stimulus (a forward model) or to decode the stimulus from the EEG (a backward model). Given the very low signal-to-noise ratio of the EEG, this is a challenging problem, and several non-linear methods have been proposed to improve upon linear regression. In the Auditory-EEG challenge, teams will compete to build the best model relating speech to EEG. The group explored the needs and challenges of this direction, as well as possible tasks that would be suitable for such a challenge.
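The backward-model approach described above can be sketched in a few lines: ridge regression maps time-lagged EEG channels to the attended speech envelope, and attention is decoded by correlating the reconstructed envelope with each candidate speaker's envelope. This is a minimal illustrative sketch assuming NumPy; the function names, lag counts, and regularization value are hypothetical and not part of the actual challenge code.

```python
import numpy as np

def lagged_matrix(eeg, max_lag):
    """Stack time-lagged copies of the EEG channels.

    Decoding uses future EEG samples (eeg[t + lag]) because the neural
    response lags the stimulus. Returns (n_samples, n_channels * max_lag).
    """
    n, c = eeg.shape
    X = np.zeros((n, c * max_lag))
    for lag in range(max_lag):
        X[:n - lag, lag * c:(lag + 1) * c] = eeg[lag:]
    return X

def fit_decoder(eeg, envelope, max_lag=16, alpha=1.0):
    """Ridge regression from lagged EEG to the attended speech envelope."""
    X = lagged_matrix(eeg, max_lag)
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)

def decode_attention(eeg, env_a, env_b, w, max_lag=16):
    """Label the speaker whose envelope best matches the reconstruction."""
    rec = lagged_matrix(eeg, max_lag) @ w
    r_a = np.corrcoef(rec, env_a)[0, 1]
    r_b = np.corrcoef(rec, env_b)[0, 1]
    return "A" if r_a > r_b else "B"
```

In practice the decoder is trained on held-out data and evaluated on short test windows; the non-linear methods mentioned above replace the ridge regression step with neural networks while keeping the same correlate-and-compare decision rule.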
Dr. Francart’s group at KU Leuven has been working on a challenge for EEG decoding and has proposed it to the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), the flagship conference on signal processing applied to speech and audio. The challenge is open to the community for submissions in 2023.
The group discussed the use of auditory-attention decoding (AAD) algorithms in clinical research related to hearing loss and coalesced around a set of key questions that need to be addressed:
· What is the role of background noise?
· How many speakers are present?
· How close are the speakers and background noise to one another?
· How often does the listener need to switch attention between speakers?
There was discussion of adopting the Client Oriented Scale of Improvement (COSI) to identify the listening situations in which patients most want improvement from their hearing aids. The group agreed that surveying users and audiologists across the community is critical at this juncture of the research to evaluate the possible implications of AAD for hearing aid technologies. This remains an open topic for discussion, and the community needs to explore the validity and limits of such evaluations.
The group discussed new directions for exploring auditory imagery and decoding brain activity during imagined speech. This topic is critical for advancing brain-computer interfaces and understanding their relationship with auditory perception. These questions remain a topic of interest to the broader community.
The meeting was held at The Hotel at the University of Maryland, College Park, June 6-10, 2022, and was coordinated by Conferences and Visitor Services at the University of Maryland.
The meeting was attended by 51 participants from 9 countries (USA, Belgium, Denmark, France, Ireland, Netherlands, Sweden, Germany, and the UK), spanning a wide range of expertise and seniority, including graduate students and postdoctoral trainees. Twenty-five percent of attendees were female.
Scientific steering committee:
Mounya Elhilali (Johns Hopkins)
Shihab Shamma (University of Maryland and École Normale Supérieure)
Malcolm Slaney (Google)
Jonathan Simon (University of Maryland)
Tom Francart (Leuven)
With generous support from the National Institute on Deafness and Other Communication Disorders (R13DC018475)