Program Summary

  • 9:00 - 10:00am: Welcome and Keynote
  • 10:00 - 11:00am: Poster presentations
  • 11:00am - 12:20pm: Oral presentations
  • 2:00 - 3:00pm: Oral presentations
  • 3:00 - 4:30pm: Poster presentations
  • 4:30 - 5:50pm: Oral presentations
  • 6:00pm: Happy hour

9:00 - 10:00am: Keynote

From Listening to Watching, A Recommender Systems Perspective

Yves Raimond, Netflix


In this talk, I'll discuss a few key differences between recommending music and recommending movies or TV shows, and how these differences can lead to vastly different designs, approaches, and algorithms for finding the best possible recommendation for a user. At the same time, I'll also discuss some common challenges and some of our recent research on these topics, such as better understanding the impact of a recommendation, enabling better offline metrics, and optimizing for longer-term outcomes. Most importantly, I'll try to leave plenty of time for questions and discussion.

10:00 - 11:00am: Poster presentations

11:00am - 12:20pm: Oral presentations (20 min each)

Making Efficient use of Musical Annotations

Brian McFee (Invited Speaker), New York University [Video]

Two-level Explanations in Music Emotion Recognition

Verena Haunschmid¹, Shreyan Chowdhury¹ and Gerhard Widmer¹,², Johannes Kepler University Linz¹ and Linz Institute of Technology (LIT)² [Video]

Characterizing Musical Correlates of Large-Scale Discovery Behavior

Blair Kaneshiro (Invited Speaker), Stanford University [Video]

NPR: Neural Personalised Ranking for Song Selection

Mark Levy, Matthias Mauch, Jan Van Balen, Bruno Di Giorgi and Dan Cartoon, Apple [Video]

2:00 - 3:00pm: Oral presentations (20 min each)

Personalization at Amazon Music

Kat Ellis (Invited Speaker), Amazon [Video]

A Model-Driven Exploration of Accent Within the Amateur Singing Voice

Camille Noufi, Vidya Rangasayee, Sarah Ciresi, Jonathan Berger and Blair Kaneshiro, Stanford University [Video]

What’s Broken in Music Informatics Research? Three Uncomfortable Statements

Justin Salamon (Invited Speaker), Adobe Research [Video]

3:00 - 4:30pm: Poster presentations

4:30 - 5:50pm: Oral presentations (20 min each)

User-curated shaping of expressive performances

Zhengshan Shi (Invited Speaker), Stanford University [Video]

Interactive Neural Audio Synthesis

Lamtharn Hantrakul¹, Adam Roberts¹, Chenjie Gu¹ and Jesse Engel², Google Brain¹ and Google DeepMind² [Video]

Visualizing and Understanding Self-attention based Music Tagging

Minz Won¹, Sanghyuk Chun² and Xavier Serra¹, Universitat Pompeu Fabra¹ and Naver Corp.² [Video]

A CycleGAN for style transfer between drum & bass subgenres

Len Vande Veire, Tijl De Bie and Joni Dambre, Ghent University [Video]

Poster presentations

A Hybrid Approach to Audio-to-Score Alignment

Ruchit Agrawal and Simon Dixon, Queen Mary University of London

The MTG-Jamendo Dataset for Automatic Music Tagging

Dmitry Bogdanov, Minz Won, Philip Tovstogan, Alastair Porter and Xavier Serra, Universitat Pompeu Fabra

Zero-shot Learning and Knowledge Transfer in Music Classification and Tagging

Jeong Choi¹, Jongpil Lee¹, Jiyoung Park² and Juhan Nam¹, Korea Advanced Institute of Science and Technology (KAIST)¹ and NAVER Corp.²

MsE-CNN: Multi-Scale Embedded CNN for Music Tagging

Nima Hamidi Ghalehjegh, Mohsen Vahidzadeh and Stephen Baek, The University of Iowa

A Comparison of Music Input Domains for Self-Supervised Feature Learning

Siddharth Gururani¹, Alexander Lerch¹ and Mason Bretan², Georgia Tech¹ and Samsung Research America²

Modeling Self-Repetition in Music Generation using Generative Adversarial Networks

Harsh Jhamtani¹ and Taylor Berg-Kirkpatrick¹,², Carnegie Mellon University¹ and University of California San Diego²

Representing Music Structure by Variational Attention

Junyan Jiang¹,², Gus Xia² and Roger Dannenberg¹, Carnegie Mellon University¹ and New York University Shanghai²

Modelling Interval Relations in Neural Music Language Models

Radha Manisha Kopparti and Tillman Weyde, City University of London

Exploiting repetitions in music with dynamic evaluation

Ben Krause, Emmanuel Kahembwe, Iain Murray and Steve Renals, University of Edinburgh

Representation Learning of Music Using Artist, Album, and Track information

Jongpil Lee¹, Jiyoung Park² and Juhan Nam¹, Korea Advanced Institute of Science and Technology (KAIST)¹ and NAVER Corp.²

Latent Space Regularization for Explicit Control of Musical Attributes

Kumar Ashis Pati and Alexander Lerch, Georgia Institute of Technology

Scaling Up Music Tagging with Transfer Learning and Active Learning

Fedor Zhdanov¹, Emanuele Coviello² and Ben London², Amazon Research¹ and Amazon Music²

6:00pm: Happy hour