AVM22: Cross-modality brain signals: auditory, visual and motor
Claire Pelofi & Malcolm Slaney
Cornelia Fermüller & Ryad Benosman
The focus of this workgroup is on understanding the interplay between neural representations of auditory, visual, and motor cues and statistical knowledge of speech, music, and action. To this end, the group will pursue two complementary efforts. The first, a neuroscience project, will investigate predictive coding in the multisensory context of watching videos. Before the Workshop, EEG, MEG, fMRI, and pupillometry data will be collected from subjects watching the same movie in different languages, as well as videos of violin players. Using these data, we will investigate questions of auditory and visual saliency, audio-visual integration, its coupling with motor areas, and attentional decoding.

The second, a computational perception project, will study aspects of motor learning using vision and audio in the context of learning to play the violin. We will develop software to monitor a student's gross and fine motor movements: the posture, which remains a challenge for learners for many years; the fast finger movements of the left hand; and the bow movement of the right hand and arm. Using a dataset of students repeatedly playing the same pieces, collected with a motion-capture system, vision, event sensors, and audio, we will develop causal models and use techniques for visualizing deep networks to gain insight into how specific movements and changes in movement affect sound features.
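As an illustration of the attentional-decoding question mentioned above, one common approach in the auditory EEG literature is stimulus reconstruction: a linear "backward model" (here, ridge regression) maps multichannel neural data to the speech amplitude envelope, and the attended stream is taken to be the one whose envelope correlates best with the reconstruction. The sketch below uses entirely synthetic data and made-up parameters (sampling rate, channel count, regularization), so it shows the shape of the analysis, not the project's actual pipeline.

```python
import numpy as np

# Sketch of attention decoding via stimulus reconstruction:
# ridge-regress multichannel "EEG" onto a speech envelope, then
# compare the reconstruction to two competing envelopes.
# All signals are synthetic; parameters are illustrative only.

rng = np.random.default_rng(0)
fs = 64                      # assumed envelope-band sampling rate (Hz)
n_samples = fs * 60          # one minute of data
n_channels = 16

def envelope(n):
    """Slowly varying synthetic speech envelope."""
    x = rng.standard_normal(n)
    kernel = np.hanning(fs // 2)
    return np.convolve(x, kernel, mode="same")

attended = envelope(n_samples)   # the stream the listener attends to
ignored = envelope(n_samples)    # the competing stream

# Synthetic EEG: each channel is a noisy mixture dominated by the
# attended envelope (a stand-in for cortical envelope tracking).
mixing = rng.standard_normal(n_channels)
eeg = (np.outer(attended, mixing)
       + 0.2 * np.outer(ignored, rng.standard_normal(n_channels))
       + 1.0 * rng.standard_normal((n_samples, n_channels)))

# Backward model: ridge regression w = (X'X + lam*I)^-1 X'y,
# trained on the first half of the data, tested on the second.
half = n_samples // 2
X_train, X_test = eeg[:half], eeg[half:]
y_train = attended[:half]
lam = 1e2
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_channels),
                    X_train.T @ y_train)
recon = X_test @ w

def corr(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

r_attended = corr(recon, attended[half:])
r_ignored = corr(recon, ignored[half:])
# The attended stream should yield the higher reconstruction correlation.
print(r_attended > r_ignored)
```

In practice the model would include multiple time lags (a temporal response function rather than an instantaneous mapping) and be cross-validated across subjects and stimuli; the single-lag version above keeps the example short.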