Claire Pelofi (NYU)
Malcolm Slaney (Stanford)
Shihab Shamma (Maryland and ENS)
Mounya Elhilali (Johns Hopkins)
Laura Gwilliams (Stanford)
Greg Hickok (UC Irvine)
Nima Mesgarani (Columbia)
Ed Lalor (Rochester)
Sam Norman-Haignere (Rochester)
Alain de Cheveigné (ENS)
Jean-Remi King (Meta and ENS) - Remote
We aim to use AI models to better understand how the human brain processes sound, from simple diagnostic and environmental sounds to sounds organized into complex systems such as speech and music. In recent years we have had considerable success understanding and decoding brain functions such as auditory attention. This year we would like to extend this work and enrich the decoding toolboxes developed so far with newer, more advanced versions that build on the latest developments in deep networks and machine learning in general.
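To give a flavor of what these decoding toolboxes involve, below is a minimal sketch of the standard linear stimulus-reconstruction approach to auditory attention decoding: a ridge-regularized backward model maps time-lagged EEG to the speech envelope, and the reconstruction is correlated with each talker's envelope to decide which one was attended. All names, dimensions, and parameters here are illustrative, not our actual software.

# Minimal stimulus-reconstruction sketch for auditory attention decoding.
# Everything below (shapes, lags, ridge value) is illustrative only.
import numpy as np

def lag_matrix(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel: (samples, channels * lags)."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return X

def train_decoder(eeg, envelope, n_lags=32, ridge=1e3):
    """Fit backward-model weights w so that lagged-EEG @ w approximates the envelope."""
    X = lag_matrix(eeg, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

def decode_attention(eeg, env_a, env_b, w, n_lags=32):
    """Label the trial by which talker's envelope the reconstruction matches best."""
    recon = lag_matrix(eeg, n_lags) @ w
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return "A" if r_a > r_b else "B"

# Toy usage with random data standing in for real recordings.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((64 * 60, 64))   # 60 s of 64-channel EEG at 64 Hz
env_a = rng.standard_normal(64 * 60)       # attended-talker envelope
env_b = rng.standard_normal(64 * 60)       # unattended-talker envelope
w = train_decoder(eeg, env_a)
print(decode_attention(eeg, env_a, env_b, w))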
We plan to organize our work around two threads. The first focuses on the properties of speech and music and how each is encoded in brain data: because they serve different purposes and display different organizational principles, they may rely on different kinds of processing networks. The second thread, for both signal types, will combine our auditory work with other modalities, such as vision and motor signals, and will pursue real-time demos. In 2023, merging auditory processing with other modalities allowed us to tackle real-time decoding and fostered fruitful collaborations with other Telluride groups. This year, our goal is to better understand the brain's processing pipeline for both speech and music, using modern tools such as decoding software and large language models (LLMs).
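As one concrete illustration of how an LLM can enter this pipeline, the sketch below computes word-level surprisal from a pretrained GPT-2 model with the Hugging Face transformers library; surprisal values like these are commonly used as regressors in encoding models of speech-evoked brain responses. This is a hedged example of one possible approach, not a committed method, and the example sentence is arbitrary.

# Per-token surprisal from GPT-2, as a candidate regressor for encoding models.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_surprisal(text):
    """Return (token, surprisal-in-bits) pairs under the language model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Log-probability of each token given its left context.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_lp = log_probs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]
    tokens = tokenizer.convert_ids_to_tokens(ids[0])
    bits = (-token_lp / torch.log(torch.tensor(2.0))).tolist()
    return list(zip(tokens[1:], bits))

for tok, s in token_surprisal("The musician tuned her violin before the concert"):
    print(f"{tok:>12s}  {s:6.2f} bits")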
Project 1: Encoding of speech and music in brain data
Project 2: Multimodal integration and real-time decoding
BrainVision will again loan us EEG hardware this year, and we anticipate that their representative will also come to help with experiment design and hardware. We will also use portable dry-electrode EEG systems (these may interest other groups as well, since they are a kind of poor man's EEG). We will bring EEG and MEG data, hopefully some ECoG datasets, and decoding software.
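As a starting point for participants new to the hardware, here is a minimal sketch, assuming the BrainVision system writes its standard .vhdr/.vmrk/.eeg file triplet, of loading a recording with MNE-Python. The file name and filter settings are hypothetical.

# Load a BrainVision recording and prepare it for envelope-tracking analyses.
import mne

raw = mne.io.read_raw_brainvision("session01.vhdr", preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=32.0)   # band-pass typical for envelope tracking
raw.resample(64)                      # linear decoding models rarely need more
events, event_id = mne.events_from_annotations(raw)  # .vmrk markers become events
print(raw.info)
print(events[:5])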