22 Feb, 2019

W2-1 International Center (Multi-Purpose Hall), KAIST


KAISTxSNU Music and Audio Workshop is a research exchange event between the KAIST Music and Audio Computing Lab (MACLab) and the SNU Music and Audio Research Group (MARG). Its goal is to share ideas and promote discussion on topics in music information retrieval, audio signal processing, music/audio AI, and other computer-based research in music and audio.

This workshop is open to the public. To attend, please make a reservation by clicking the registration (워크샵 등록, "workshop registration") button below (open registration is limited to 30 attendees).

NOTE: The oral sessions are delivered in English, but in the poster session you are free to speak either Korean or English.

OPENING REMARKS

  • 12:50 - 13:00: Prof. Juhan Nam (남주한) (MACLab, KAIST)


Oral Presentation: Invited Talks (13:00-14:00, in English)

  • 13:00 - 13:30: "Creating ears for AI: Machine Listening" | Yoonchang Han (한윤창) (Co-founder & CEO, Cochlear.ai)
  • 13:30 - 14:00: "Audio Analysis and Music Information Retrieval at MTG" | Dmitry Bogdanov (MTG, UPF), Andres Ferraro (MTG, UPF)

Oral Presentation: Student Talks (14:00-15:00, in English)

  • 14:00 - 14:15: "Generating Piano Performance from Music Score" | Dasaem Jeong (정다샘) (MACLab, KAIST)
  • 14:15 - 14:30: "Phase-Aware Speech Enhancement with Deep Complex U-Net" | Hyeongseok Choi (최형석) (MARG, SNU)
  • 14:30 - 14:45: "Generative Models for Singing Voice Synthesis" | Soonbeom Choi (최순범) (MACLab, KAIST)
  • 14:45 - 15:00: "Neural K-POP Star: Singing Voice Synthesis System Using Autoregressive Neural Networks" | Juheon Lee (이주헌) (MARG, SNU)

Poster Presentations (15:10-17:40)

[ MACLab, KAIST ]

  • "Joint Detection and Classification of Singing Voice Melody Using Convolutional Recurrent Neural Networks" | Sangeun Keum (금상은)
  • "Multiple Instrument Recognition Using Multi-band ShuffleNet" | Sangeon Yong (용상언)
  • "A Symbolic Melody Dataset for Music Plagiarism" | Saebyul Park (박새별)
  • "Zero-Shot Learning for Music Annotation and Retrieval" | Jung Choi (최정)
  • "Deep Content-User Embedding Model for Music Recommendation" | Jongpil Lee (이종필)
  • "Crowdsourced AI DJ Dataset" | Minsuk Choi (최민석)
  • "Semantic Analysis of Singing Voice Tags" | Keunhyeong Kim (김근형)
  • "Generating Piano Performance from Music Score" | Dasaem Jeong (정다샘), Taegyun Kwon (권태균)

[ MARG, SNU ]

  • "Enhancing Music Features by Knowledge Transfer from User-item Log Data" | Donmoon Lee (이돈문)
  • "Digital Watermarking for Audio Datasets Aimed at Time-frequency Representation Based Deep Learning Models" | Wansoo Kim (김완수)
  • "Sound Event Classification in Real World" | Jaejun Lee (이재준)
  • "Conditional Generation of Expressive Piano Performance with Variational Autoencoder" | Seung-yeon Rhyu (유승연)
  • "Listen to Dance: Music Driven Choreography Generation Using Autoregressive Encoder Decoder Network" | Seohyun Kim (김서현)
  • "Query-based Source Separation" | Jiehwan Lee (이지환)
  • "Automatic Evaluation of Piano Performance Using Dynamic Time Warping" | Sarah Kim (김사라)
  • "Sequential Skip Prediction with Few-shot in Streamed Music Contents" | Sungkyun Chang (장성균)
  • "Single trial Auditory-related Evoked Potential from EEG using Deep learning" | Myeonghoon Ryu (류명훈)
  • "Multi-source DOA Estimation Using Deep Learning" | Gwangseok An (안광석)

[ MTG, UPF ]

  • "Essentia: Open-source library and tools for audio and music analysis, description, and synthesis" | Dmitry Bogdanov

Dinner + Party

Location:

Contact

  • Juhan Nam (남주한): juhannam@kaist.ac.kr

Participants

Organization Team

  • KAIST: Juhan Nam, Taegyun Kwon, Soonbeom Choi, Sangeon Yong (webpage), Taewan Kim (design)
  • SNU: Kyogu Lee, Donmoon Lee, Jaejun Lee