Machine Learning for Music Discovery Workshop

International Conference on Machine Learning (ICML) 2017
Sydney, Australia
11 August 2017

Motivation, impact, and expected outcomes:
In recent years we have witnessed the rapid rise of the RecSys community and the ever-increasing relevance of the field of recommender systems, where great progress has been made on approaches that rely on user feedback (i.e., collaborative filtering) and produce excellent recommendations. For instance, with rich crowd data we can easily take a set of items (e.g., music tracks) enjoyed by an individual user and deduce other items that user may enjoy, based on other users with similar tastes.
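The user-feedback approach described above can be sketched in a few lines. The following is a minimal, hypothetical illustration of user-based collaborative filtering with cosine similarity; the toy matrix and the `recommend` function are illustrative assumptions, not part of any workshop system:

```python
import numpy as np

# Toy user-item matrix: rows = users, columns = tracks.
# A 1 means the user enjoyed the track, 0 means no feedback.
ratings = np.array([
    [1, 1, 0, 0],   # user 0
    [1, 1, 1, 0],   # user 1
    [0, 0, 1, 1],   # user 2
], dtype=float)

def recommend(ratings, user, k=1):
    """Recommend unheard tracks for `user` by weighting tracks
    with the cosine similarity of other users' taste profiles."""
    norms = np.linalg.norm(ratings, axis=1)
    sims = ratings @ ratings[user] / (norms * norms[user])
    sims[user] = 0.0                      # ignore self-similarity
    scores = sims @ ratings               # score tracks by similar users' tastes
    scores[ratings[user] > 0] = -np.inf   # exclude already-heard tracks
    return np.argsort(scores)[::-1][:k]

print(recommend(ratings, user=0))       # user 1 is most similar, so track 2 surfaces
```

Here user 0 shares two tracks with user 1, so user 1's remaining track (track 2) is recommended; with no overlap in listening history (the cold-start case below), every similarity is zero and the method has nothing to work with.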

But this rich data environment is often an illusion. For instance, how do we recommend a piece of music that has not yet been rated by anyone? How do we define similarity when crowd data is missing? With collaborative filtering methods it is impossible to obtain good-quality recommendations without rich user data, and unfortunately it is often equally impossible to obtain rich user data without good-quality recommendations. This is where sophisticated content-based machine learning systems become necessary to bootstrap quality recommendations. Even from an industry perspective, content-based music recommendation remains very much an open and promising academic research problem.

Much progress has been made on these problems within the MIR community, addressing tasks ranging from concrete problem spaces, such as musical instrument identification within an ensemble and rhythmic style identification, to tasks with more abstract definitions, such as music genre recognition or musical mood identification. But these problems are still far from solved, with many researchers citing a ceiling on overall model performance across a continually growing number of tasks. For instance, performance on many tasks evaluated in the Music Information Retrieval Evaluation eXchange (MIREX) has plateaued over the past several years.

Significant opportunities for advancement remain in the area of music discovery and recommendation. Addressing current challenges in the area involves processing multimodal data, such as semantic metadata and digital audio signals, and designing highly sophisticated machine learning systems. The topics discussed will span a variety of music recommender systems challenges, including cross-cultural recommendation, content-based audio processing and representation learning, automatic music tagging, and evaluation.

Previous Workshop Instances:
ICML 2016, New York, NY, USA:
ICML 2015, Lille, France:

Important Dates:
Abstracts deadline: June 9, 2017
Notification of acceptance: June 12, 2017
Camera-ready deadline: August 1, 2017

Submission of Papers:
We request that 2-page extended abstracts be submitted by June 9, 2017. An additional third page, for references only, may be included. Submissions must use the ICML template, which can be found here: icml2017.tgz. Word templates will not be provided. The workshop reviewing process will be single-blind, and we kindly ask that submitted abstracts use the [accepted] setting within the icml2017 LaTeX package. Accepted papers will be published online.

Papers should be submitted to the following address: 

Topic Areas:
Music recommendation and discovery

Invited Speakers:
Slim Essid, TELECOM ParisTech
Satoru Fukayama, National Institute of Advanced Industrial Science and Technology (AIST)
Katherine M. Kinnaird, Brown University
Aparna Kumar, Spotify 
Cinjon Resnick, Google Brain
Yi-Hsuan Yang, Academia Sinica

Workshop Organizers:
Erik Schmidt, Pandora
Oriol Nieto, Pandora
Fabien Gouyon, Pandora
Gert Lanckriet, Amazon/University of California San Diego

Feel free to reach out with any questions at the following address:

ICML workshop registration will be handled through the main conference:

The workshop will be a full day on 11 August 2017. The final schedule will contain approximately 12 talks in total: six 20-minute invited talks and six 20-minute talks on accepted papers. The number of talks and their durations are subject to change as we compile the final program.