AUD20

Auditory learning in brains and machines

Topic Leaders

Invitees

  • (To Be Determined)

Goals

We aim to use machine-learning models to better understand auditory learning in the brain, using sounds that range from simple diagnostic and environmental sounds to complex sounds such as speech and music. The topic area explores how we learn, both passively and actively, and how we use what we learn to inform perception. This effort focuses on the interface between perception, production, and cognition, and aims to inform the analysis of complex sounds in neuromorphic machines for real-time processing. We will expand our earlier work on developing powerful tools for decoding brain function, such as auditory attention.

This year, we will extend this work and enrich the toolboxes with new and more advanced versions based on the latest innovations in deep networks and machine learning.

Projects

  1. Active and Passive Learning of Music and Language.

The project explores advanced computational models of music and speech sequences (based on Markov chains or DNNs) to characterize the rate and extent of learning that humans experience upon passive exposure to new music or language, or when attentional focus is deployed to specific patterns or targets in the sensory input. We shall use EEG recordings to “predict” novel material and determine, from the prediction accuracy or its deviations, how much acquisition has taken place and its exact nature. Our goal in Telluride will be to develop the necessary decoding algorithms and tools to assess the degree of learning and the resolution of what is encoded in memory (learned patterns). In addition, we want to decode attentional signals and predict the perceptual qualities that trigger attention switches.
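
As a concrete starting point, the sketch below is a hypothetical illustration (not an existing workshop tool): it fits a first-order Markov model to an exposure stream of symbolic tone labels and computes per-symbol surprisal on a probe stream. Surprisal traces of this kind are the sort of regressor that could be compared against EEG prediction accuracy to quantify learning; all data and labels are placeholders.

```python
# Hypothetical illustration: a first-order Markov model of symbolic tone/word labels.
# Per-symbol surprisal on a probe stream is the kind of quantity that could be
# regressed against EEG responses to assess how much structure a listener has learned.
import numpy as np

def fit_markov(sequence, n_symbols, alpha=1.0):
    """Estimate first-order transition probabilities with add-alpha smoothing."""
    counts = np.full((n_symbols, n_symbols), alpha)
    for prev, nxt in zip(sequence[:-1], sequence[1:]):
        counts[prev, nxt] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

def surprisal(sequence, transitions):
    """Surprisal (-log2 p) of every transition in a test sequence."""
    return np.array([-np.log2(transitions[prev, nxt])
                     for prev, nxt in zip(sequence[:-1], sequence[1:])])

# Placeholder streams: an exposure phase followed by a short probe phase.
rng = np.random.default_rng(0)
exposure = rng.integers(0, 4, size=500)   # four arbitrary tone labels
probe = rng.integers(0, 4, size=50)
model = fit_markov(exposure, n_symbols=4)
print(surprisal(probe, model).mean())     # lower mean surprisal = better-predicted material
```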

  2. Learning of Auditory-Motor Associations.

This effort examines the possibility of decoding auditory and motor responses from brain recordings, hence developing tools to disambiguate different neural signals within the poorly segregated activations typically obtained with EEG. It builds on attempts in past workshops to decode “imagined” speech and music. There is a strong association between motor commands and acoustic signals in speech (engaging the vocal tract articulators) and in music (playing an instrument). Our goal is to explore commonalities in brain activations when subjects listen to a sentence, speak it silently, or speak it aloud. The musical equivalent will be to play a well-practiced piece on a silent keyboard, versus listening to it and playing it aloud. This project draws on many concepts in sensorimotor integration and embodied behavior, two topics of great interest to other groups in the workshop.
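
One plausible analysis, sketched below with random placeholder data and hypothetical array names, is a cross-condition transfer test: a classifier trained to distinguish stimuli from EEG epochs recorded during overt listening is evaluated on epochs recorded during silent production. Above-chance transfer would point to shared auditory-motor structure; this is a sketch of the idea, not a committed pipeline.

```python
# Hedged sketch with placeholder data: train a stimulus classifier on EEG epochs
# from overt listening and test it on epochs from silent ("imagined") production.
# Above-chance transfer would suggest shared auditory-motor representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_condition_score(X_train, y_train, X_test, y_test):
    """Fit on one condition (trials x features), score on the other condition."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)

# Hypothetical epochs: 100 trials of 64 channels x 50 time samples, flattened.
rng = np.random.default_rng(0)
X_listen = rng.standard_normal((100, 64 * 50))
X_silent = rng.standard_normal((100, 64 * 50))
y = rng.integers(0, 2, size=100)          # two candidate sentences or pieces
print(cross_condition_score(X_listen, y, X_silent, y))   # ~0.5 on random data
```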


Useful tools (preliminary list)

  • Telluride decoding toolbox: a set of Matlab and Python tools for decoding brain signals into sensory signals. The tools relate auditory or visual stimuli to neural data acquired with EEG, MEG, ECoG, or any other neural recording; a minimal illustrative sketch of this style of decoding appears after the references below.
  • References:
    • Skerritt-Davis B, Elhilali M (2018) Detecting change in stochastic sequences. PLoS Comput Biol 14(5):e1006162. doi: 10.1371/journal.pcbi.1006162.
    • Pearce MT (2018) Statistical learning and probabilistic prediction in music cognition: Mechanisms of stylistic enculturation. Ann N Y Acad Sci. doi: 10.1111/nyas.13654.
    • Bretan M, Oore S, Eck D, Heck L (2017) Learning and Evaluating Musical Features with Deep Autoencoders. ArXiv.
    • Pearce MT, Ruiz M, Kapasi S, Wiggins G, Bhattacharya J (2010) Unsupervised statistical learning underpins computational, behavioural, and neural manifestations of musical expectation. NeuroImage 50, pp. 302–313.
    • Martin S, Mikutta C, Leonard MK, Hungate D, Koelsch S, Shamma S, Chang E, Millán J, Knight R, Pasley B (2017) Neural encoding of auditory features during music perception and imagery. Cerebral Cortex, pp. 1–12.
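
For illustration, the sketch below is not the toolbox's actual API; it shows the backward, stimulus-reconstruction style of decoding that such toolboxes support: time-lagged ridge regression mapping multichannel EEG to a speech envelope, whose reconstruction can then be correlated with candidate envelopes to decode auditory attention. All data, shapes, and names are placeholder assumptions.

```python
# Not the toolbox API: a minimal backward (stimulus-reconstruction) model of the
# kind such toolboxes implement, i.e. time-lagged ridge regression mapping
# multichannel EEG to a speech envelope. All data below are random placeholders.
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of EEG (time x channels) into a design matrix."""
    n_times, n_chans = eeg.shape
    X = np.zeros((n_times, n_chans * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_chans:(lag + 1) * n_chans] = eeg[:n_times - lag]
    return X

def fit_ridge_decoder(eeg, envelope, n_lags=32, lam=1e2):
    """Decoder weights reconstructing the attended envelope from lagged EEG."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

# Placeholder session: 60 s of 64-channel EEG and a matching envelope at 64 Hz.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((60 * 64, 64))
envelope = rng.standard_normal(60 * 64)
weights = fit_ridge_decoder(eeg, envelope)
reconstruction = lagged(eeg, 32) @ weights   # correlate with candidate envelopes to decode attention
```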