6th Taiwanese Music and Audio Computing Workshop
Music, Embodiment, and Artificial Intelligence
- Keynote speaker | Masataka Goto (AIST)
- Date | April 17, 2019 (Wed.)
- Time | 14:00 - 18:30
- Venue | N106, Institute of Information Science, Academia Sinica
The 6th Taiwanese Music and Audio Computing Workshop will take place at the Institute of Information Science on 17 April 2019. The theme this year is Music, Embodiment, and Artificial Intelligence.
We are honored to invite Masataka Goto, a world-leading researcher in music information retrieval (MIR), to this workshop. We also invite domestic speakers from Academia Sinica (AS), NTHU, and NCKU to share recent MIR studies in Taiwan.
Program
14:00-15:20 | Keynote speech
Masataka Goto | Intelligent Music Interfaces Based on Music Signal Analysis
15:20-15:45 | Coffee break
15:45-16:15 | Li Su | Automatic music transcription: state of the art and recent development
16:15-16:40 | Yu-Fen Huang | The embodiment of musical mind: Decomposing musical conducting movement
16:40-17:00 | Amy Hung | Multi-task learning model for multi-pitch streaming and music style transfer
17:00-17:20 | Hsueh-Wei Liao | Lip Buzzing Modeling with a Physics-Informed Neural Network
17:20-17:40 | 邱晴瑜 | Expressive violin synthesis and concert of a virtual violinist
17:40-18:00 | 蕭佑丞 | Musical Phrasing Based on GTTM Using BLSTM Networks
18:00-18:30 | Discussion
Keynote speech
14:00-15:20
Masataka Goto | 後藤真孝
National Institute of Advanced Industrial Science and Technology (AIST)
Title
Intelligent Music Interfaces Based on Music Signal Analysis
Abstract
In this talk I will present intelligent music interfaces demonstrating how end users can benefit from automatic analysis of music signals (automatic music-understanding technologies) based on signal processing and/or machine learning. I will also introduce our recent challenge of deploying research-level music interfaces as public web services and platforms that enrich music experiences. They can analyze and visualize music content on the web, enable music-synchronized control of computer-graphics animation and robots, and provide an audience of hundreds with a bring-your-own-device experience of music-synchronized animations on smartphones. In the future, further advances in music signal analysis and music interfaces based on it will make interaction between people and music more active and enriching.
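As a purely illustrative aside (not code from the speaker's own systems), the sketch below shows the kind of automatic music-understanding step such interfaces build on: estimating beat times from an audio signal, which could then drive music-synchronized animation. It assumes the librosa Python library; the file name song.wav is a placeholder.

```python
# Illustrative sketch only: beat tracking as one building block of a
# music-synchronized interface. Assumes librosa is installed; "song.wav"
# is a placeholder file name.
import librosa

def beat_times(path):
    """Estimate beat positions (in seconds) for an audio file."""
    y, sr = librosa.load(path)                        # decode the audio
    _, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    return librosa.frames_to_time(beat_frames, sr=sr)

# An interface could schedule animation events at these times.
for t in beat_times("song.wav"):
    print(f"trigger animation event at {t:.2f} s")
```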
Bio
Masataka Goto received the Doctor of Engineering degree from Waseda University in 1998. He is currently a Prime Senior Researcher at the National Institute of Advanced Industrial Science and Technology (AIST). In 1992 he was one of the first to start working on automatic music understanding and has since been at the forefront of research in music technologies and music interfaces based on those technologies. Over the past 27 years he has published more than 250 papers in refereed journals and international conferences and has received 47 awards, including several best paper awards, best presentation awards, the Tenth Japan Academy Medal, and the Tenth JSPS PRIZE. He has served as a committee member of over 110 scientific societies and conferences, including serving as General Chair of the 10th and 15th International Society for Music Information Retrieval Conferences (ISMIR 2009 and 2014). In 2016 he began, as Research Director, a five-year research project on music technologies (the OngaACCEL Project) funded by the ACCEL program of the Japan Science and Technology Agency (JST).
https://staff.aist.go.jp/m.goto/
15:45-16:15
Li Su
Automatic music transcription: state of the art and recent development
16:15-16:40
Yu-Fen Huang
The embodiment of musical mind: Decomposing musical conducting movement
Abstract
Music conductors contribute their musical interpretations and coordinate the musicians' playing in an orchestra. During orchestral conducting, conductors use their body movement to guide expressive variations in the music. Conducting pedagogy has codified a stock of conventionalized "conducting vocabulary": movements regarded as carrying specific musical instructions. In real instances of conducting, however, conductors' movement appears to be flexible, and the variation in their movement goes far beyond the conventional descriptions in pedagogy.
We therefore seek to identify expressive cueing in different conductors' movements. Motion capture technology was used to record conductors' body movement, and interviews were conducted to elicit their musical interpretations. Multiple analysis techniques were adopted to extract the key body parts and key kinematic features, and to identify different types of expressive cueing in conducting. The results indicate that the kinematics of conductors' body movement is revealing of their musical intentions.
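As a hypothetical illustration of the kinematic side of such an analysis (not the authors' actual pipeline), the sketch below derives speed, acceleration, and jerk magnitudes from a motion-captured marker trajectory by finite differences; the marker choice, frame rate, and feature set are assumptions for the example.

```python
# Hypothetical sketch (not the authors' pipeline): basic kinematic
# features from a motion-capture marker trajectory via finite differences.
import numpy as np

def kinematic_features(pos, fps=120.0):
    """pos: (T, 3) array of marker positions in metres, sampled at fps Hz.
    Returns per-frame speed, acceleration, and jerk magnitudes."""
    dt = 1.0 / fps
    vel = np.gradient(pos, dt, axis=0)     # m/s
    acc = np.gradient(vel, dt, axis=0)     # m/s^2
    jerk = np.gradient(acc, dt, axis=0)    # m/s^3
    return {name: np.linalg.norm(v, axis=1)
            for name, v in (("speed", vel), ("accel", acc), ("jerk", jerk))}

# Example with 2 seconds of synthetic wrist data at 120 fps; e.g., smooth,
# low-jerk trajectories might be candidate cues for legato passages.
feats = kinematic_features(np.random.rand(240, 3))
```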
16:40-17:00
Amy Hung
Multi-task learning model for multi-pitch streaming and music style transfer
17:00-17:20
Hsueh-Wei Liao
Lip Buzzing Modeling with a Physics-Informed Neural Network
Abstract
Synthesizing musical instrument sounds by physical modeling is an approach that requires no training data. In contrast to physical modeling, supervised-learning-based methods (i.e., neural networks) generate audio samples by learning features from recordings rather than from the physics of sound production.
Recently, some studies have shown the potential of neural networks for physical simulation. In this project, we concentrate on the simulation of lip buzzing, the primary exciter of brass instruments. One physical model of lip buzzing was introduced by Adachi; it is a two-dimensional model in which the lips perform both swinging and stretching motions. Our goal is a generative neural network for the lip-buzzing sound whose architecture is physically informed by Adachi's model. We hope that such a specially designed neural network can improve generalization, that is, generate higher-quality audio over a wider range of vibration frequencies.
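As a speculative sketch of the physics-informed idea (not the project's actual architecture), the code below uses a deliberately simplified one-mass lip model, m·x'' + r·x' + k·x = F(t), as a stand-in for Adachi's two-dimensional model: a network predicts lip displacement over time, and a physics loss penalizes the residual of the oscillator equation via automatic differentiation. The constants m, r, k and the driving force are placeholder assumptions.

```python
# Speculative sketch: a physics-informed loss for a simplified one-mass
# lip model m*x'' + r*x' + k*x = F(t) (a stand-in for Adachi's
# two-dimensional model). All constants are placeholder assumptions.
import torch

def physics_loss(net, t, force, m=1.0, r=0.1, k=100.0):
    t = t.clone().requires_grad_(True)
    x = net(t)                                         # predicted lip displacement x(t)
    dx = torch.autograd.grad(x.sum(), t, create_graph=True)[0]
    ddx = torch.autograd.grad(dx.sum(), t, create_graph=True)[0]
    residual = m * ddx + r * dx + k * x - force(t)     # ~0 when x obeys the model
    return residual.pow(2).mean()

# Minimal usage: a small MLP trained so its output obeys the oscillator.
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
t = torch.linspace(0.0, 0.05, 512).unsqueeze(1)        # 50 ms of time samples
loss = physics_loss(net, t, lambda t: torch.sin(2 * torch.pi * 440.0 * t))
loss.backward()                                        # gradients for an optimizer step
```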
17:20-17:40
邱晴瑜
Expressive violin synthesis and concert of a virtual violinist
17:40-18:00
蕭佑丞
Musical Phrasing Based on GTTM Using BLSTM Networks
LOC
Li Su 蘇黎 | lisu [at] iis.sinica.edu.tw