Programme
This event will take place in Room N106 at the Institute of Information Science, Academia Sinica (中研院資訊所 N106).
10:00 - 10:25
Registration and Coffee
10:25 - 10:30
Opening Address
Tonality, predictive coding, and executive control network
Previous research on musical rhythm showed that listeners' ratings of subjective pleasure and desire for body movement are inverted U-shaped functions of the degree of syncopation in music. This result is consistent with a model of predictive coding of rhythmic incongruity, in which the rhythm is the auditory input and the meter is a predictive model for temporal events. My recent brain-imaging study shows an inverted U-shaped relationship between the degree of tonal stability and activity in the listener's executive control network. I suggest that executive functions such as working memory, hierarchical sequencing, conflict resolution, and cognitive flexibility are critical to the processing of tonality in chromatic music.
11:40 - 12:20 | Vincent Cheung (張家銘) - Institute of Information Science, Academia Sinica & Max Planck Institute for Human Cognitive and Brain Sciences
The neurocognitive basis of musical expectancy and pleasure
Although expectancy is thought to play a key role in our appreciation of music, supporting evidence in the literature is surprisingly limited. In this presentation, I begin by exploring the cognitive mechanisms underlying how listeners form musical expectations. By comparing chord expectancy ratings against two computational models embodying contrasting mechanisms, I show that listeners primarily use statistically learnt representations of musical structure to generate predictions about upcoming musical events. Next, I demonstrate that listeners' appreciation of music depends not only on the violation of expectations, as investigated in most existing studies, but also on their level of uncertainty in predicting what is to come. Using functional magnetic resonance imaging (fMRI), I further show that this joint effect of chord uncertainty and surprise is related to regions in the mesolimbic reward network as well as the auditory cortex. Together, these findings provide direct neurocognitive evidence supporting the role of expectations in musical pleasure.
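To make the statistical-learning mechanism concrete, here is a minimal sketch of chord surprisal under a bigram model. All chord data and the smoothing scheme are illustrative assumptions, not the computational models compared in the talk:

```python
from collections import Counter, defaultdict
import math

# Toy corpus of chord sequences (Roman numerals); purely illustrative.
corpus = [
    ["I", "IV", "V", "I"],
    ["I", "vi", "IV", "V"],
    ["I", "IV", "V", "vi"],
    ["I", "V", "I", "IV"],
]

# Count bigram transitions to estimate P(next chord | previous chord).
transitions = defaultdict(Counter)
for seq in corpus:
    for prev, nxt in zip(seq, seq[1:]):
        transitions[prev][nxt] += 1

def surprisal(prev, nxt, alpha=1.0):
    """-log2 P(nxt | prev) with add-alpha smoothing over the chord vocabulary."""
    vocab = {c for seq in corpus for c in seq}
    counts = transitions[prev]
    total = sum(counts.values()) + alpha * len(vocab)
    return -math.log2((counts[nxt] + alpha) / total)

# A common continuation (V -> I) is less surprising than a rare one (V -> ii).
print(surprisal("V", "I"), surprisal("V", "ii"))
```

A listener's statistically learnt expectations would correspond to low surprisal for frequent continuations; a model of this kind can then be scored against behavioral expectancy ratings.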
12:20 - 13:30
Lunch
What can music tell us about real-world interpersonal coordination?
Coordinating with others is essential for humans in many daily activities, ranging from jointly working on the same task to performing music in an ensemble. However, most cognitive science studies have been conducted on isolated individuals in constrained settings, so it is unclear whether their findings generalize to real-world interpersonal settings. The current study aimed to overcome this limitation by measuring body movement and EEG in professional music ensembles in the LIVELab concert hall as a real-world example of interpersonal coordination. We showed that the Granger causality of interpersonal body-sway coupling reflects coordination quality, leadership, and emotional expression. Preliminary analyses of the EEG using partial directed coherence suggest that neural oscillatory couplings might reflect interpersonal sensorimotor predictions and adaptations. Together, these findings provide a novel and accessible approach to investigating interpersonal interaction in real-world settings, which can be harnessed to study, for example, parent-infant, caregiver-elder, and therapist-patient social interactions.
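As a rough illustration of the directional-coupling analysis, the following sketch computes a Granger-causality F-statistic on simulated body-sway traces. The data, lag order, and regression setup here are hypothetical stand-ins, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical body-sway traces: musician B follows musician A with a lag.
n = 500
a = rng.standard_normal(n)
b = np.zeros(n)
for t in range(1, n):
    b[t] = 0.6 * a[t - 1] + 0.2 * b[t - 1] + 0.3 * rng.standard_normal()

def granger_f(y, x, p=2):
    """F-statistic: do p lags of x improve prediction of y beyond y's own lags?"""
    Y = y[p:]
    own = np.column_stack([y[p - 1 - k : len(y) - 1 - k] for k in range(p)])
    other = np.column_stack([x[p - 1 - k : len(x) - 1 - k] for k in range(p)])
    const = np.ones((len(Y), 1))
    Xr = np.hstack([const, own])           # restricted model: y's own history
    Xf = np.hstack([const, own, other])    # full model: add x's history
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(Xr), rss(Xf)
    dof = len(Y) - Xf.shape[1]
    return ((rss_r - rss_f) / p) / (rss_f / dof)

# Coupling should be stronger in the A -> B direction than B -> A.
print(granger_f(b, a), granger_f(a, b))
```

The asymmetry between the two F-statistics is what lets this analysis index leadership: the leader's sway predicts the follower's sway more than the reverse.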
Musicianship, musical pitch proficiency and hearing in noise performance
Recent research suggests that musicianship confers an advantage in processing speech in noisy environments. According to the OPERA hypothesis, such benefits may be due to shared neural networks between music and speech. However, it remains unclear which aspects of musicianship contribute to this enhancement. My recent work focuses on an innate musical pitch ability of musicians, asking whether it can partially account for performance on hearing-in-noise tasks. Preliminary findings suggest a modularity-dependent form of hearing-in-noise perception that is mediated by proficiency in musical pitch ability. Findings are discussed in terms of potential underlying neural mechanisms, with applications to alleviating the speech perception problems associated with aging and certain clinical populations.
15:20 - 15:35
Coffee
Communication through body motion and audio in music performance
During music performance, musicians communicate their musical ideas through body motion and performed sound. This talk explores body motion and audio in music performance using 3-D motion capture and Music Information Retrieval (MIR) technologies, and discusses how musicians achieve effective communication through diverse strategies. In the first study, orchestral conductors' body movements were recorded with a 3-D motion capture system, and the kinematic features of their movements were analyzed. In the second study, pianists' performance audio was analyzed and matched to important landmarks in the musical structure. In the third study, recordings of the guqin (古琴) were analyzed with MIR techniques, and audio features were linked to the players' left-hand movements during performance. These investigations show that musicians communicate with one another using diverse strategies in different musical contexts.
16:15 - 17:35 | Student Presentations
Ching-Yu Chiu (邱晴瑜) - National Cheng Kung University & Research Center for Information Technology Innovation, Academia Sinica
Source separation-based data augmentation techniques for improved joint beat and downbeat tracking in classical music
Beat and downbeat tracking are fundamental topics in music information retrieval research. Although deep learning-based models have achieved great success for music with stable and clear beat positions, tracking remains challenging for classical music, which lacks a steady drum sound. In this study, we propose a novel source separation-based data augmentation technique tailored for beat/downbeat tracking in classical music. It involves a model that separates drum and non-drum sounds, and mechanisms for drum stem selection. We report comprehensive experiments validating the usefulness of the proposed methods.
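A minimal sketch of the augmentation idea, assuming the stems have already been separated: remix the drum and non-drum stems at progressively lower drum gains so the training set includes drumless-sounding mixes while the beat annotations stay valid. The stems and gain schedule below are synthetic placeholders, not the proposed model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder separated stems (in practice, the output of a source-separation
# model); here 1 s of synthetic audio at 22.05 kHz.
sr = 22050
drums = rng.standard_normal(sr).astype(np.float32)
accomp = rng.standard_normal(sr).astype(np.float32)

def augment(drums, accomp, n_variants=4):
    """Remix stems with progressively attenuated drums. The beat/downbeat
    annotations of the original mix carry over unchanged to every variant."""
    variants = []
    for gain in np.linspace(1.0, 0.0, n_variants):  # 1.0 = original, 0.0 = drumless
        mix = gain * drums + accomp
        peak = np.max(np.abs(mix))
        variants.append(mix / peak if peak > 0 else mix)  # peak-normalize
    return variants

mixes = augment(drums, accomp)
print(len(mixes))
```

Because the timing of events is untouched, each remixed variant is a free extra training example for a tracker that must cope with weak or absent percussion.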
Bo-Yu Chen (陳柏昱) - Research Center for Information Technology Innovation, Academia Sinica
Neural Loop Combiner: Neural Network Models for Assessing the Compatibility of Loops
Music producers who use loops may have access to thousands of them in loop libraries, but finding compatible ones is a time-consuming process; we hope to reduce this burden with automation. This study proposes a data generation pipeline with several negative sampling methods to train two deep learning models (a CNN and a Siamese network). We conducted a subjective listening test to assess the usefulness of the proposed algorithms. The results show that our models outperformed the rule-based baseline.
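One way to sketch the pair-generation step for such training: treat loops extracted from the same track as compatible positives, and pair loops across tracks as negatives. The loop library below and the single cross-track sampling rule are hypothetical; the study compares several negative sampling methods:

```python
import random

random.seed(0)

# Hypothetical loop library: track id -> loop ids extracted from that track.
library = {
    "track_a": ["a1", "a2", "a3"],
    "track_b": ["b1", "b2"],
    "track_c": ["c1", "c2", "c3"],
}

def make_pairs(library, n_negatives_per_positive=1):
    """Positive pairs: loops from the same track (assumed compatible).
    Negative pairs: a loop paired with a random loop from another track."""
    all_loops = [(t, l) for t, loops in library.items() for l in loops]
    pairs = []
    for track, loops in library.items():
        for i in range(len(loops)):
            for j in range(i + 1, len(loops)):
                pairs.append((loops[i], loops[j], 1))  # positive pair
                for _ in range(n_negatives_per_positive):
                    other_t, other_l = random.choice(
                        [tl for tl in all_loops if tl[0] != track])
                    pairs.append((loops[i], other_l, 0))  # negative pair
    return pairs

pairs = make_pairs(library)
```

The labeled pairs can then be fed to a pairwise model (e.g. a Siamese network) that scores compatibility between two loops.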
Yu-Wei Wen (溫育瑋) - National Chung Cheng University
Automatically Composing Bossa Nova through Evolutionary Computation
As a discipline of artificial intelligence, evolutionary computation has been developed to efficiently solve various optimization problems. In this talk, we will discuss our recent research on automatic music composition. Our composition system uses a genetic algorithm as the melody generator, coupled with an automatic accompaniment process, to generate Bossa Nova music. The evolutionary composition system is built on music theory rather than learned from music corpora. In this study, we found that combining an evolutionary algorithm with human knowledge of music theory is helpful for automatically generating quality music.
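A toy sketch of the genetic-algorithm component: melodies evolve under a fitness function derived from simple theory rules. The rules below (reward in-scale notes and stepwise motion, penalize leaps) are illustrative stand-ins for the system's actual music-theoretic criteria:

```python
import random

random.seed(1)

SCALE = [0, 2, 4, 5, 7, 9, 11]          # C-major pitch classes
LENGTH, POP, GENS = 16, 40, 60

def random_melody():
    return [random.randint(48, 72) for _ in range(LENGTH)]  # MIDI pitches

def fitness(melody):
    """Illustrative theory-based score: +1 per in-scale note, +1 per
    stepwise interval, fractional penalty for large leaps."""
    score = sum(1 for p in melody if p % 12 in SCALE)
    for a, b in zip(melody, melody[1:]):
        leap = abs(a - b)
        score += 1 if leap <= 2 else -leap / 12
    return score

def evolve():
    pop = [random_melody() for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP // 2]                  # keep the fitter half
        children = []
        while len(children) < POP - len(parents):
            ma, pa = random.sample(parents, 2)
            cut = random.randrange(1, LENGTH)      # one-point crossover
            child = ma[:cut] + pa[cut:]
            if random.random() < 0.3:              # point mutation
                child[random.randrange(LENGTH)] = random.randint(48, 72)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because fitness is computed from explicit rules rather than a trained model, the same loop works without any music corpus, which is the design choice the abstract highlights.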
(+ Others TBA)
17:35 - 17:45
Closing and Final Discussion