Topic Area
AUD19: Understanding the auditory brain with neural networks
Topic leaders
- Shihab Shamma, University of Maryland, College Park (sas@umd.edu)
- Mounya Elhilali, Johns Hopkins University (mounya@jhu.edu)
- Malcolm Slaney, Google (malcolm@ieee.org)
Invited guests
- Behtash Babadi, University of Maryland, College Park, USA
- Alain de Cheveigné, École Normale Supérieure, Paris, France
- Fred Dick, University of London, UK
- Tom Francart, KU Leuven, Belgium
- Shigeto Furukawa, NTT, Japan
- Lars Hausfeld, Maastricht University, Netherlands
- John Hershey, Google, USA
- Jens Hjortkjær, Technical University of Denmark, Denmark
- Lori Holt, Carnegie Mellon University, USA
- Antje Ihlefeld, New Jersey Institute of Technology, USA
- Aren Jansen, Google, USA
- Ed Lalor, University of Rochester, USA
- Lisa Margulis, Princeton University, USA
- Nima Mesgarani, Columbia University, USA
Goals
We aim to use machine-learning models to better understand how the brain acquires and processes sounds, from simple diagnostic and environmental sounds to complex sounds embedded in symbolic cognitive systems such as speech and music. In recent years we have had considerable success decoding brain functions such as auditory attention. This year we would like to extend this work and enrich the decoding toolboxes developed so far with new, more advanced versions based on the latest innovations in deep networks and machine learning in general. In addition, we plan to introduce new projects that build upon these decoding tools.
Projects
1. Implicit learning
This project focuses on implicit (or statistical) learning of music and language. We shall use advanced computational models of music and speech sequences (based on Markov chains or DNNs) to characterize the rate and extent of learning that humans experience upon passive exposure to new music or language. We shall use EEG recordings to “predict” novel material and determine, from the prediction accuracy and its deviations, how much learning has taken place and the exact nature of what was acquired. Our goal in Telluride will be to develop the decoding algorithms and tools needed to accomplish this task, which we hope to exploit further after the workshop.
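The sequence models above can be illustrated with a minimal sketch: a first-order Markov model trained on an exposure "melody", whose per-event surprisal (-log2 probability) is the kind of statistic one could regress against EEG to track implicit learning. The toy note alphabet and smoothing choice are invented for illustration, not taken from any workshop codebase.

```python
# Sketch (not the workshop's actual code): first-order Markov model of a
# note sequence; surprisal of each transition indexes how "expected" new
# material is after passive exposure.
import math
from collections import defaultdict

def train_markov(seq, alpha=1.0):
    """Estimate transition probabilities with add-alpha smoothing."""
    alphabet = sorted(set(seq))
    counts = defaultdict(lambda: defaultdict(float))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1.0
    probs = {}
    for a in alphabet:
        total = sum(counts[a].values()) + alpha * len(alphabet)
        probs[a] = {b: (counts[a][b] + alpha) / total for b in alphabet}
    return probs

def surprisal(probs, seq):
    """Per-event surprisal (-log2 p) of each transition in seq."""
    return [-math.log2(probs[a][b]) for a, b in zip(seq, seq[1:])]

exposure = list("CDECDECDE" * 20)          # highly regular toy "melody"
model = train_markov(exposure)
familiar = surprisal(model, list("CDECDE"))  # transitions seen in exposure
novel = surprisal(model, list("CEDCED"))     # transitions never seen
# Familiar transitions should carry lower surprisal than novel ones.
print(sum(familiar) / len(familiar) < sum(novel) / len(novel))  # True
```

In an experiment, per-event surprisal values like these would serve as regressors for the EEG response, so that falling surprisal-tracking over exposure indexes learning.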
2. Auditory-motor associations
This project concerns the acquisition of auditory-motor associations. It builds upon our previous, unsuccessful attempt some years ago at decoding “imagined” speech and music. We have learned a great deal since, so in this round we will record EEG from subjects listening to a sentence, speaking it silently, and speaking it aloud. The musical equivalent will be to play a well-practiced piece on a silent keyboard, versus listening to the music and playing it aloud. We shall use the recordings to determine whether there is a close correspondence among the three conditions, and specifically whether silent speech and music induce patterns of activation similar to those in the pure listening conditions.
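The simplest version of the cross-condition comparison is to correlate trial-averaged responses between conditions. The sketch below does this on simulated signals; the condition names, gains, and noise levels are invented stand-ins for real listen / silent-production / overt-production EEG.

```python
# Sketch, not real data: do covertly produced trials evoke a response
# template similar to the listening template? We compare condition
# averages by Pearson correlation.
import math
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def average_trials(trials):
    """Average equal-length trials sample by sample."""
    return [sum(col) / len(col) for col in zip(*trials)]

random.seed(0)
template = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]

def simulate(gain, noise, n_trials=30):
    """Toy trials: scaled template plus Gaussian noise."""
    return [[gain * s + random.gauss(0, noise) for s in template]
            for _ in range(n_trials)]

conditions = {"listen": simulate(1.0, 0.5),
              "silent": simulate(0.4, 0.5),   # weaker covert response
              "aloud":  simulate(1.0, 0.5)}
avg = {name: average_trials(tr) for name, tr in conditions.items()}
for a, b in [("listen", "silent"), ("listen", "aloud")]:
    print(a, "vs", b, round(pearson(avg[a], avg[b]), 2))
```

With real EEG one would of course use a proper decoder rather than raw averages, but a high silent-versus-listen correlation of this kind is exactly the signature the project looks for.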
3. Categorical perception
This project concerns categorical perception of sound. Here, we will attempt to decode neural signals while subjects categorize the same sounds in entirely different contexts. The goal is to find neural correlates of categorical perception that we can tap into. For example, subjects may listen to the same words while classifying them as spoken by a male or a female voice in one context, or by whether they share the same meaning in another. Do words that share a category evoke a similar signature response that allows us to guess the category from the EEG?
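The decoding question reduces to a classification problem: can the task-relevant category be recovered from single-trial responses? The sketch below uses a nearest-centroid classifier on synthetic "EEG" feature vectors; the category labels, feature dimension, and noise level are all invented, and nearest-centroid merely stands in for whatever decoder is ultimately used.

```python
# Sketch with simulated data: guess the category (e.g. male/female talker)
# of each trial by its distance to the per-category mean response.
import random

random.seed(1)
DIM = 16  # toy feature dimension (e.g. channels x time windows)

def make_trial(center, noise=1.0):
    return [c + random.gauss(0, noise) for c in center]

center_a = [random.gauss(0, 1) for _ in range(DIM)]  # category "a" signature
center_b = [random.gauss(0, 1) for _ in range(DIM)]  # category "b" signature
train = ([(make_trial(center_a), "a") for _ in range(40)] +
         [(make_trial(center_b), "b") for _ in range(40)])
test = ([(make_trial(center_a), "a") for _ in range(20)] +
        [(make_trial(center_b), "b") for _ in range(20)])

def centroid(trials):
    return [sum(col) / len(col) for col in zip(*trials)]

means = {lab: centroid([x for x, l in train if l == lab]) for lab in "ab"}

def classify(x):
    def dist(m):
        return sum((a - b) ** 2 for a, b in zip(x, m))
    return min(means, key=lambda lab: dist(means[lab]))

accuracy = sum(classify(x) == lab for x, lab in test) / len(test)
print("accuracy:", accuracy)
```

Above-chance accuracy on held-out trials is the evidence sought: a shared "signature response" for sounds in the same category.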
Useful tools
- Telluride decoding toolbox: a set of Matlab and Python tools for decoding brain signals into estimates of the sensory signals that evoked them. The tools handle auditory or visual stimuli and neural data acquired with EEG, MEG, ECoG, or any other neural-response modality.
- References:
- Skerritt-Davis B, Elhilali M (2018) Detecting change in stochastic sequences, PLoS Comput Bio 14(5):e1006162. doi: 10.1371/journal.pcbi.1006162.
- Pearce MT (2018) Statistical learning and probabilistic prediction in music cognition: Mechanisms of stylistic enculturation. Ann N Y Acad Sci. doi: 10.1111/nyas.13654.
- Bretan M, Oore S, Eck D, Heck L (2017) Learning and Evaluating Musical Features with Deep Autoencoders. ArXiv.
- Pearce MT, Herrojo Ruiz M, Kapasi S, Wiggins GA, Bhattacharya J (2010) Unsupervised statistical learning underpins computational, behavioural, and neural manifestations of musical expectation. NeuroImage 50, pp. 302–313.
- Martin S, Mikutta C, Leonard MK, Hungate D, Koelsch S, Shamma S, Chang E, Millán J, Knight R, Pasley B (2017) Neural Encoding of Auditory Features during Music Perception and Imagery, Cerebral Cortex, pp. 1–12.
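The core operation behind decoding toolboxes of this kind can be sketched as a linear "backward model": regress time-lagged multichannel neural data onto the stimulus (here, an envelope) with ridge regularization. The simulated data, lag count, delay, and ridge parameter below are all invented for illustration and do not reflect the actual toolbox API.

```python
# Sketch of linear stimulus reconstruction from simulated "EEG":
# ridge regression from lagged channels back to the stimulus envelope.
import numpy as np

rng = np.random.default_rng(0)
T, C, LAGS = 2000, 8, 5        # samples, channels, time lags

envelope = rng.standard_normal(T)            # toy stimulus envelope
mixing = rng.standard_normal(C)              # per-channel gains
delay = 3                                    # neural response lags stimulus
eeg = np.outer(np.roll(envelope, delay), mixing) \
      + 0.5 * rng.standard_normal((T, C))    # noisy multichannel response

def lagged(X, lags):
    """Stack future-shifted copies of each channel: (T, C * lags)."""
    return np.hstack([np.roll(X, -l, axis=0) for l in range(lags)])

X = lagged(eeg, LAGS)
lam = 1.0                                    # ridge parameter (arbitrary)
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
recon = X @ W
r = np.corrcoef(recon, envelope)[0, 1]
print("reconstruction correlation:", round(r, 2))
```

The reconstruction correlation `r` is the standard figure of merit; in attention-decoding applications, the attended stream is the one whose envelope reconstructs best.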