Machine Learning for Music Discovery Workshop
International Conference on Machine Learning (ICML) 2016
New York City, NY, USA
23 June 2016
(note date change)
Motivation, impact, and expected outcomes:
In recent years we have witnessed the rapid rise of the RecSys community and the ever-increasing relevance of the field of recommender systems, in which great progress has been made on approaches that rely on user feedback (i.e., collaborative filtering) and produce excellent recommendations. For instance, with rich crowd data we can easily take a set of items (e.g., music tracks) enjoyed by an individual user and deduce other items that user may enjoy, based on the preferences of users with similar tastes.
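The crowd-based deduction described above can be sketched in a few lines. This is a minimal, illustrative user-based collaborative filtering example with toy data (the users, tracks, and ratings are invented for illustration and are not from the workshop): unseen tracks are scored by the similarity-weighted ratings of other users.

```python
import math

# Toy user-item ratings: user -> {track: rating}. Purely illustrative data.
ratings = {
    "alice": {"track_a": 5, "track_b": 3, "track_c": 4},
    "bob":   {"track_a": 4, "track_b": 3, "track_d": 5},
    "carol": {"track_b": 2, "track_c": 5, "track_d": 4},
}

def cosine_similarity(u, v):
    """Cosine similarity over the tracks two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[t] * v[t] for t in shared)
    norm_u = math.sqrt(sum(u[t] ** 2 for t in shared))
    norm_v = math.sqrt(sum(v[t] ** 2 for t in shared))
    return dot / (norm_u * norm_v)

def recommend(user, ratings, top_n=3):
    """Score tracks the user has not rated by similarity-weighted
    ratings from other users, and return the top-scoring ones."""
    scores, weights = {}, {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], their_ratings)
        if sim <= 0:
            continue
        for track, r in their_ratings.items():
            if track in ratings[user]:
                continue
            scores[track] = scores.get(track, 0.0) + sim * r
            weights[track] = weights.get(track, 0.0) + sim
    ranked = sorted(((s / weights[t], t) for t, s in scores.items()),
                    reverse=True)
    return [t for _, t in ranked[:top_n]]

print(recommend("alice", ratings))  # alice has not rated track_d
```

Note that the method can only ever recommend tracks some user has already rated, which is exactly the cold-start limitation discussed next.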
But the assumption of a rich data environment often does not hold. For instance, how do we recommend a piece of music that has not yet been rated by anyone? How do we define similarity when crowd data is missing? Collaborative filtering methods cannot produce good quality recommendations without rich user data, and unfortunately it is often impossible to collect rich user data without good quality recommendations. This is where sophisticated machine learning systems based on content are necessary to bootstrap quality recommendations. Even from an industry perspective, content-based music recommendation remains very much an open and promising academic research problem.
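To make the bootstrapping idea concrete, here is a minimal sketch of content-based similarity: if each track has a feature vector derived from its audio (the feature values and track names below are hypothetical placeholders, standing in for real descriptors or learned embeddings), a brand-new release can be matched to its nearest neighbors without any user data at all.

```python
import math

# Hypothetical content features per track (e.g. timbre/tempo descriptors
# or learned audio embeddings); the values are illustrative only.
features = {
    "new_release": [0.9, 0.1, 0.4],
    "track_a":     [0.8, 0.2, 0.5],
    "track_b":     [0.1, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(track, features, k=1):
    """Rank the other tracks by content similarity -- no ratings needed."""
    others = [(cosine(features[track], v), t)
              for t, v in features.items() if t != track]
    return [t for _, t in sorted(others, reverse=True)[:k]]

print(nearest("new_release", features))
```

Once such content-based neighbors attract listens and ratings, collaborative methods can take over, which is the bootstrapping loop the paragraph above describes.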
Much progress has been made on these problems within the MIR community, on tasks ranging from concrete problem spaces, such as identifying a musical instrument within an ensemble or a rhythmic style, to more abstractly defined tasks such as music genre recognition or musical mood identification. But these problems are still very far from solved, with many researchers citing a ceiling on overall model performance in a continually growing number of tasks. For instance, performance on tasks evaluated in the Music Information Retrieval Evaluation eXchange (MIREX) has plateaued over the past several years.
Significant opportunities for advancement remain in the area of music discovery and recommendation. Addressing current challenges in the area involves processing multimodal data, such as semantic metadata and digital audio signals, and designing highly sophisticated machine learning systems. The topics discussed will span a variety of music recommender systems challenges, including cross-cultural recommendation, content-based audio processing and representation learning, automatic music tagging, and evaluation.
Notification of acceptance:
Submission of Papers:
We request that 2-page extended abstracts be submitted by May 1st, 2016. An additional third page, for references only, may be included. Submitted papers must use the ICML template, which can be found here: icml2016stylefiles. Word templates will not be provided. The workshop reviewing process will be single-blind, and we kindly ask that submitted abstracts use the [accepted] setting of the icml2016 LaTeX package. Accepted papers will be published online.
Papers should be submitted to the following address: firstname.lastname@example.org
Douglas Eck, Google Brain (Keynote)
Eric Humphrey, Spotify
Dawen Liang, Columbia University
Joshua Moore, Cornell University
Colin Raffel, Columbia University
Justin Salamon, New York University
Erik Schmidt, Pandora
Fabien Gouyon, Pandora
Oriol Nieto, Pandora
Gert Lanckriet, Amazon/University of California San Diego
Feel free to reach out with any questions at the following address:
ICML workshop registration will be handled through the main conference: http://icml.cc/2016/
The workshop will be a full day on 23 June 2016. The final schedule will contain approximately 13 talks in total: one 50-minute keynote, six 20-minute invited talks, and six 20-minute accepted talks. The number of talks and their durations are subject to change as we compile the final program. The workshop will be followed by a lively Pandora-sponsored happy hour and discussion session.