Machine Learning for Music Discovery Workshop

International Conference on Machine Learning (ICML) 2015
Lille, France
11 July 2015

Motivation, impact, and expected outcomes:
In recent years we have witnessed the rapid rise of the RecSys community and the ever-increasing relevance of the field of recommender systems. Great progress has been made on approaches that rely on user feedback (i.e., collaborative filtering) and produce excellent recommendations. For instance, with rich crowd data we can easily take a set of items (e.g., music tracks) enjoyed by an individual user and deduce other items that user may enjoy, based on other users with similar tastes. 
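The collaborative filtering idea above can be sketched in a few lines of Python. The ratings matrix and the simple user-based scoring below are hypothetical illustrations, not part of any system discussed at the workshop: each user is compared to the others by cosine similarity, and unheard tracks are scored by a similarity-weighted sum of the other users' ratings.

```python
import numpy as np

# Hypothetical user-item rating matrix: rows = users, columns = tracks.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
], dtype=float)

def recommend(ratings, user, k=2):
    """Score unheard tracks for `user` via cosine similarity to other users."""
    norms = np.linalg.norm(ratings, axis=1, keepdims=True)
    sims = (ratings / norms) @ (ratings[user] / norms[user])
    sims[user] = 0.0                      # exclude the user themselves
    scores = sims @ ratings               # similarity-weighted sum of others' ratings
    scores[ratings[user] > 0] = -np.inf   # only recommend unheard tracks
    return np.argsort(scores)[::-1][:k]

print(recommend(ratings, user=0, k=1))    # user 0 resembles user 1 -> track 2
```

Note that this sketch presupposes exactly the rich crowd data whose absence motivates the content-based methods discussed below.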

But the assumption of such a rich data environment often fails in practice. For instance, how do we recommend a piece of music that has not yet been rated by anyone? How do we define similarity when crowd data is missing? Collaborative filtering methods cannot produce good recommendations without rich user data, and unfortunately it is also often impossible to obtain rich user data without good recommendations. This is where sophisticated machine learning systems based on content are necessary to bootstrap quality recommendations. Even from an industry perspective, content-based music recommendation remains very much an open and promising academic research problem. 

Much progress has been made on these problems within the MIR community, by addressing tasks ranging from concrete problem spaces, such as musical instrument identification within an ensemble and rhythmic style identification, to tasks with more abstract definitions, such as music genre recognition or musical mood identification. But these problems are still far from solved, with many researchers citing a ceiling on overall model performance in a growing number of tasks. For instance, performance on several tasks evaluated in the Music Information Retrieval Evaluation eXchange (MIREX) has plateaued over the past several years. 

Significant opportunities for advancement remain in the area of music discovery and recommendation. Addressing current challenges in the area involves processing multimodal data (e.g., semantic data and digital audio signals) and designing highly sophisticated machine learning systems. The topics discussed will span a variety of music recommender systems challenges, including cross-cultural recommendation, content-based audio processing and representation learning, automatic music tagging, and evaluation.

Important Dates:
Abstracts Deadline:  May 7, 2015 (extended from May 1, 2015)
Notification of Acceptance:  May 10, 2015

Submission of Papers:
We request that 2-page extended abstracts be submitted by May 7, 2015.  Submissions must use the ICML template, which can be found here: icml2015stylefiles.  Word templates will not be provided.  Accepted papers will be published online.

Papers should be submitted to the following address:

Topic Areas:
  • Music recommendation and discovery
  • Content-based and multimodal music recommender systems
  • Transfer learning and semi-supervised learning for music discovery
  • Audio and semantic content-based machine learning (e.g., genre, mood, style, rhythm)
  • Browsing and visualization of large music and listener datasets
  • Similarity metric learning
  • Learning to rank
  • Evaluation methodology 

Invited Speakers:
Brian McFee, NYU Center for Data Science
Sander Dieleman, Ghent University
Arthur Flexer, Austrian Research Institute for Artificial Intelligence
Bob L. Sturm, School of Electronic Engineering and Computer Science, C4DM, Queen Mary University of London

Workshop Organizers:
Erik Schmidt, Pandora
Fabien Gouyon, Pandora
Gert Lanckriet, University of California San Diego

Feel free to reach out with any questions at the following address:

ICML workshop registration will be handled through the main conference:

The workshop will be a full day on 11 July 2015.  The schedule contains 12 talks in total: six 30-minute invited talks and six 20-minute talks on accepted papers.  The workshop will be followed by a lively happy hour and discussion session.  See the program for full details.