Lecture Concert 

19:30 - 21:00

Program List

JKU, Austria

The Accompanion: An AI-based Piano Accompaniment System


YAMAHA R&D, Japan

Demo of 'Daredemo Piano': An Accompaniment Piano Tuned for Novice Players


Academia Sinica, Taiwan 

Concerto for Two Violins, BWV 1043 by Humans and Virtual Violinists 

Suite of Taiwanese and Korean Songs with Violin Fingering Generation


KAIST AI Performance Team, South Korea

Human-AI Piano Relay Performance (Pf. Jonghwa Park)

Flute Performance Accompanied by AI Pianist with Automatic Cue Detection (Fl. Jaeran Choi)

Classical Voice Accompanied by AI Pianist with Adaptive Accompaniment System (Sop. Yoonji Oh)

Real-time Piano Transcription with Visualization (Pf. Jonghwa Park)


Special Guests

Pianist Jonghwa Park

Soprano Yoonji Oh

Program Note

YAMAHA R&D, Japan

Demo of 'Daredemo Piano': An Accompaniment Piano Tuned for Novice Players


We will present a live demonstration of "Daredemo Piano," our piano accompaniment system tuned for novice players. When a user plays a melody on the piano, an accompaniment of piano left hand and strings plays back in sync with the playing.


More information about Daredemo Piano can be found here: https://www.yamaha.com/en/csr/feature/feature_16/

Performer
Akira Maezawa (YAMAHA R&D, Japan)

JKU, Austria

The Accompanion: An AI-based Piano Accompaniment System

'The Accompanion' is an AI-based piano accompaniment system that plays live with a human performer on a single computer-controlled grand piano.

The program will consist of three parts with a duration of roughly 30 minutes.

Performer
Carlos Eduardo Cancino-Chacón (JKU, Austria)

Academia Sinica, Taiwan

Concerto for Two Violins, BWV 1043 by Humans and Virtual Violinists

Violin (pre-recorded): Yu-Fen Huang
Piano: Yu-Chia Kuo
Virtual musician system and facial expression generation: Ting-Wei Lin
Body movement generation: Hsuan-Kai Kao

We present a system that can animate a virtual musician's performance directly from the music content, without motion capture devices, and that can interact with human musicians. Our system is based on our paper “A Human-Computer Duet System for Music Performance,” which was a Best Paper Award candidate at the ACM Multimedia Conference 2020. We demonstrate one specific application scenario: using a human's pre-recorded violin audio as input, the system generates the virtual violinist's 1) facial expressions, 2) fingerings, 3) body movements, and 4) visual storytelling shots. In this performance, we show how human-created and computer-generated content is merged in a vigorous fugue by Johann Sebastian Bach, which features not only the interaction between the two violinists but also that between human and computer.

Suite of Taiwanese and Korean Songs with Violin Fingering Generation

Violin: Yu-Fen Huang
Arranger and piano: Yu-Chia Kuo
Violin fingering generation: Wei-Yang Lin

We present a violin fingering generation system that incorporates both audio and symbolic data, allowing users to upload music scores and corresponding recordings to obtain personalized fingerings derived from both. The selection of violin fingerings is influenced by factors such as musical context, skill level, and personal preference. Unlike previous symbolic-only fingering generation, the proposed system better captures the personal nuances of a musical performance that reside only in the audio data. In this performance, we bring you a suite of songs including Through the Night (밤편지), Loving the Year Around (四季紅), and others, using fingerings generated by our system.
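
As a rough illustration of the underlying idea, fingering selection can be framed as a lowest-cost path search over candidate (string, finger) states for each note, with a per-note weight standing in for cues extracted from the audio. The sketch below is a simplified toy under our own assumptions (hand-picked costs, a placeholder audio weight), not the system used in the performance.

# A toy dynamic-programming fingering assigner (illustrative only, not the
# authors' model). Each note gets a (string, finger) state; the DP minimises
# a transition cost between consecutive states. An "audio weight" per note
# stands in for performance nuances extracted from the recording.

from itertools import product

OPEN_STRINGS = {"G": 55, "D": 62, "A": 69, "E": 76}  # MIDI pitches of open strings
FINGERS = [0, 1, 2, 3, 4]                            # 0 = open string

def playable(pitch, string, finger):
    """A state is playable if the finger lands at a plausible position."""
    offset = pitch - OPEN_STRINGS[string]
    if finger == 0:
        return offset == 0
    return 1 <= offset <= 14 and abs(offset - 2 * finger) <= 4

def transition_cost(prev, cur, audio_weight):
    """Penalise string crossings and hand shifts; scale shifts by the
    audio-derived weight (e.g. legato passages prefer small shifts)."""
    (ps, pf), (cs, cf) = prev, cur
    string_change = 0.0 if ps == cs else 1.0
    return string_change + audio_weight * abs(pf - cf)

def generate_fingering(pitches, audio_weights):
    states = [[(s, f) for s, f in product(OPEN_STRINGS, FINGERS)
               if playable(p, s, f)] for p in pitches]
    # DP table: best cost and backpointer for each state of each note
    best = [{st: (0.0, None) for st in states[0]}]
    for i in range(1, len(pitches)):
        layer = {}
        for cur in states[i]:
            layer[cur] = min((best[i - 1][prev][0]
                              + transition_cost(prev, cur, audio_weights[i]), prev)
                             for prev in states[i - 1])
        best.append(layer)
    # Backtrack the cheapest path
    path = [min(best[-1], key=lambda st: best[-1][st][0])]
    for i in range(len(pitches) - 1, 0, -1):
        path.append(best[i][path[-1]][1])
    return list(reversed(path))

if __name__ == "__main__":
    melody = [67, 69, 71, 74, 72]          # a short phrase (MIDI numbers)
    weights = [0.5, 0.5, 1.0, 1.0, 0.5]    # placeholder audio-derived weights
    print(generate_fingering(melody, weights))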

Performers
Yu-Fen Huang, Yu-Chia Kuo, Ting-Wei Lin, Hsuan-Kai Kao, Wei-Yang Lin (Academia Sinica, Taiwan)

Yu-Fen Huang is a post-doctoral research fellow at the Music and Culture Technology Laboratory, Institute of Information Science, Academia Sinica, Taiwan. Her research applies Music Information Retrieval (MIR) and AI techniques to explore the expressive elements in musical audio and body movement. Her research topics include: 1) the cross-modal mapping between musical sound and body movement, 2) audio analysis for piano and string performances using AI models, and 3) the expressive semantics in musical body movements. She endeavors to collaborate across and integrate methodologies from diverse disciplines, including systematic musicology, music technology, music psychology, biomechanics, and 3-D motion capture technology.
  Yu-Chia Kuo is a research assistant at the Music and Culture Technology Laboratory, Institute of Information Science, Academia Sinica, Taiwan. Her research areas include Music Information Retrieval and music composition. She also works on electronic music and multidisciplinary projects as a composer, exploring new interpretations with multimedia content. Her research focuses on 1) the analysis and modeling of multi-track music, and 2) expressive synthesis for string performance. Recent work includes a sound installation for 'The Gravity Realm' at the National Museum of Nature and Science, crafting a sonic scenography to transform the audience's perception of space through sound.
  Ting-Wei Lin is a Ph.D. candidate at the Taiwan International Graduate Program on Social Networks and Human-Centered Computing and the Music and Culture Technology Laboratory, Institute of Information Science, Academia Sinica, Taiwan. His research focuses on 1) visual-audio deep learning and machine learning to investigate the correlation between music and facial expressions, and 2) the integration of the virtual musician system to improve music-driven performance generation technologies.
  Hsuan-Kai Kao is currently a research assistant at the Music and Culture Technology Lab (MCT Lab), Institute of Information Science, Academia Sinica, Taiwan. His research utilizes deep generative models for multimedia system applications, especially in the music information retrieval field. His recent research has mainly focused on music-to-body-movement generation.
  Wei-Yang Lin is a research assistant at the Music and Culture Technology Lab, Academia Sinica, Taiwan, with an M.S. degree in data science from National Taiwan University. His research focuses on customized violin fingering generation. Additionally, he is interested in diverse music genres, including classical, pop, rock, and hip-hop.

KAIST AI Performance Team, South Korea

Human-AI Piano Relay Performance (Pf. Jonghwa Park)

The 'Human-AI Piano Relay Performance' system is an interactive music performance system that enables novel musical performances by creating a duet between a human pianist and an AI counterpart. It is based on a score following algorithm and an expressive piano performance generation model (VirtuosoNet). A demo of the system will be presented by Jonghwa Park, a Korean virtuoso pianist. More information about the system can be found here.
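
As a simplified illustration of the relay idea (a sketch under our own assumptions, not the actual MACLab/VirtuosoNet pipeline), the snippet below shows a naive score follower that advances through the human pianist's part as matching pitches arrive and hands over to a pre-generated expressive AI part once the human segment is complete.

# A minimal relay sketch (illustrative only): a naive score follower tracks
# the human's part note by note; when the segment is complete, playback of a
# pre-generated expressive AI performance is triggered.

HUMAN_SEGMENT = [60, 62, 64, 65, 67]          # score pitches assigned to the human
AI_SEGMENT = [(69, 0.0, 0.8), (71, 0.5, 0.6)] # (pitch, onset offset s, velocity 0-1),
                                              # e.g. output of an expressive model

class RelayFollower:
    def __init__(self, human_segment, ai_segment):
        self.human_segment = human_segment
        self.ai_segment = ai_segment
        self.pos = 0          # index of the next expected human note

    def on_note(self, pitch):
        """Call for every incoming MIDI note-on from the human pianist."""
        if self.pos < len(self.human_segment) and pitch == self.human_segment[self.pos]:
            self.pos += 1     # matched: advance in the score
        if self.pos == len(self.human_segment):
            self.start_ai_part()

    def start_ai_part(self):
        # In the real system this would drive the computer-controlled piano.
        for pitch, onset, velocity in self.ai_segment:
            print(f"AI note {pitch} at +{onset:.2f}s, velocity {velocity}")
        self.pos += 1         # prevent re-triggering

if __name__ == "__main__":
    follower = RelayFollower(HUMAN_SEGMENT, AI_SEGMENT)
    for p in [60, 62, 61, 64, 65, 67]:   # 61 is a wrong note and is ignored
        follower.on_note(p)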


Flute Performance Accompanied by AI Pianist with Automatic Cue Detection (Fl. Jaeran Choi)

The ‘Automatic Musical Cue Detection’ system is a multimodal interactive performance system based on motion analysis. In an ensemble, the beginning of a piece or a fermata section offers little audio information to follow, so performers rely mainly on "cues" given through gesture and breathing. In an ensemble with an AI pianist, however, the absence of a human pianist makes such essential visual interaction impossible. To solve this problem, the system detects the flutist's motion cues via webcam at the beginning of the piece and at fermata sections, calculates the optimal timing, and starts the accompaniment on the automated piano. Jaeran Choi, a master's student at the Music and Audio Computing Lab, will perform Ennio Morricone's ‘Cinema Paradiso’ with the system.
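
As an illustration of how such a motion cue could be picked up (a minimal sketch under our own assumptions, not the team's implementation), the snippet below derives a motion-energy signal from webcam frame differences and reports a cue when the energy rises after a quiet period, the moment at which the accompaniment would be triggered.

# A minimal motion-cue sketch (illustrative only): frame differencing on a
# webcam stream yields a motion-energy signal; a cue is reported when the
# energy rises above a threshold after a period of stillness.

import time
import cv2

MOTION_THRESHOLD = 8.0   # mean absolute frame difference (tune per setup)
QUIET_SECONDS = 0.5      # required stillness before a rise counts as a cue

def wait_for_cue(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    prev_gray, quiet_since = None, None
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
            if prev_gray is not None:
                energy = cv2.absdiff(gray, prev_gray).mean()
                now = time.time()
                if energy < MOTION_THRESHOLD:
                    quiet_since = quiet_since or now       # stillness begins / continues
                elif quiet_since and now - quiet_since >= QUIET_SECONDS:
                    return now                             # cue: start the accompaniment here
                else:
                    quiet_since = None                     # motion without enough stillness
            prev_gray = gray
    finally:
        cap.release()

if __name__ == "__main__":
    print("cue at", wait_for_cue())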


Classical Voice Accompanied by AI Pianist with Adaptive Accompaniment System (Sop. Yoonji Oh)

For an ensemble of a classical singer and an AI pianist with automated operation, this performance incorporates multiple systems including an adaptive piano accompaniment system and real-time lyrics tracking. Yoonji Oh, a Korean soprano, will present 'Heidenröslein' by Schubert with the system.


Real-time Piano Transcription with Visualization (Pf. Jonghwa Park)

Audio-to-score transcription, powered by an autoregressive neural network model, will be performed and visualized in real time.
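
As a simplified illustration of such a streaming loop (a sketch under our own assumptions, not the actual model), the snippet below processes audio in fixed-size chunks, runs one prediction step per chunk conditioned on the previous step's output, and prints one piano-roll row per frame; the neural network is replaced here by naive FFT peak picking.

# A minimal streaming-transcription sketch (illustrative only). `model_step`
# is a placeholder for an autoregressive model step: it returns active MIDI
# notes for the current frame, conditioned on the previous frame's notes.

import numpy as np

SR, CHUNK = 16000, 2048   # sample rate and chunk size (samples)

def model_step(chunk, prev_notes):
    """Toy stand-in for the neural network: FFT peak picking, with the
    previous frame's notes used only to keep a quiet note sounding."""
    spectrum = np.abs(np.fft.rfft(chunk * np.hanning(len(chunk))))
    freq = np.fft.rfftfreq(len(chunk), 1.0 / SR)[np.argmax(spectrum)]
    if freq < 20:                       # silence / DC
        return set()
    note = int(round(69 + 12 * np.log2(freq / 440.0)))
    return {note} if spectrum.max() > 1.0 or note in prev_notes else set()

def visualize(notes, low=48, high=84):
    """Print one row of a text piano roll for the current frame."""
    print("".join("#" if n in notes else "." for n in range(low, high)))

def transcribe_stream(samples):
    prev_notes = set()
    for start in range(0, len(samples) - CHUNK, CHUNK):
        prev_notes = model_step(samples[start:start + CHUNK], prev_notes)
        visualize(prev_notes)

if __name__ == "__main__":
    t = np.arange(SR * 2) / SR                 # two seconds of synthetic audio
    audio = np.sin(2 * np.pi * 440 * t)        # A4 sine, standing in for the piano signal
    transcribe_stream(audio)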

Performers
Taegyun Kwon, Jiyun Park, Jaeran Choi, Junhyung Bae, Hyeyoon Cho, Yonghyun Kim, Dasaem Jeong, Juhan Nam (KAIST AI Performance Team)

Taegyun Kwon is a Ph.D. candidate in the Music and Audio Computing Lab (MACLab) at GSCT, KAIST. His research covers piano performance analysis, including real-time piano transcription, alignment, and expressive generation. For this concert, he developed the real-time piano transcription system, performed by virtuoso pianist Jonghwa Park, and the adaptive piano accompaniment system for singing performance, in collaboration with soprano Yoonji Oh.
  Jiyun Park is a Ph.D. student in the MACLab at GSCT, KAIST. Her research focuses on interactive music performance, including real-time music alignment and singing voice. Her work includes the Human-AI piano relay performance system, performed by virtuoso pianist Jonghwa Park, and a real-time lyrics tracking system, developed in collaboration with soprano Yoonji Oh.
  Jaeran Choi is a master's student at GSCT, KAIST. Her research interests include human-AI musical interaction; in particular, she focuses on multimodal musical cue detection and reactive accompaniment systems. She developed the flute-AI pianist ensemble system using automatic cue detection and will present it with her own flute performance.
  Junhyung Bae is a Korean artist and a Ph.D. candidate at GSCT, KAIST, and a member of the Music and Audio Computing Lab. He is researching sound-based virtual performer visualization for artistic expression using deep learning.
  Hyeyoon Cho is a second-year master's student at GSCT, KAIST. She received a Bachelor's degree in piano performance from the University of Texas at Austin and a Master's degree in piano performance from Indiana University. Her research interests include quantization in piano performance and music information retrieval.
  Dasaem Jeong is an Assistant Professor in the Department of Art & Technology at Sogang University, South Korea. He obtained his Ph.D. in culture technology from KAIST under the supervision of Juhan Nam. His research focuses on various music information retrieval tasks, including expressive performance modeling and symbolic music generation. He developed VirtuosoNet, a system for generating expressive piano performances.
  Yonghyun Kim is currently pursuing a master's degree at GSCT, KAIST. His research interests are music, artificial intelligence, and HCI. He is currently focusing on research that combines multimedia (especially audio and vision) and AI to enrich human musical experience and creation.
  Juhan Nam is an Associate Professor in the Graduate School of Culture Technology at the Korea Advanced Institute of Science and Technology (KAIST) in South Korea. He is the director of the Music and Audio Computing Lab. He is interested in various topics at the intersection of music, audio signal processing, machine learning, and human-computer interaction.

Special Guests

Pianist
Jonghwa Park

Jonghwa Park is a pianist, professor, and artist with insight into life. The Busan-born pianist has made a profound impact with his remarkable combination of the highest level of musicianship and visionary programmes. Through his programming, he has continually explored the connection between classical music and modern society. He was appointed professor at Seoul National University's College of Music in 2007 and, to respond to the changes and evolution of classical music, has recently engaged with questions of human communication through convergence-knowledge and artificial intelligence projects in academia.

Soprano
Yoonji Oh

Soprano Yoonji Oh is a Korean lyric-leggiero soprano with a clear and bright timbre, active across various music genres, and an artist who touches the heart with the art of sound.