Invited Speakers

We have a fantastic lineup of keynote and invited speakers from industry and academia. More details will come closer to the date of the workshop. Stay tuned!

Keynote


Yunyao Li (IBM): Taming the Wild West of Natural Language Processing

Natural language processing (NLP) is being adopted in the real world at an increasing rate. To many, NLP is the new source of growth and wealth. However, the NLP landscape today resembles the Wild West: a large and growing number of players, fast innovation, and limited oversight. In this talk, I will discuss the major challenges in taming the Wild West of NLP and present our work in recent years on addressing these challenges. I will showcase some of this work in concrete domains (e.g., compliance) and share thoughts on a general approach to adapting NLP to solve real-world problems.

Bio: Yunyao Li is a Distinguished Research Staff Member and Senior Research Manager at IBM Research - Almaden, where she manages the Scalable Knowledge Intelligence department. She is particularly known for her work in scalable NLP, enterprise search, and database usability. She has built systems, developed solutions, and delivered core technologies to over 20 IBM products under brands such as Watson, InfoSphere, and Cognos. She has published over 80 articles and a book. She is an IBM Master Inventor, with nearly 50 patents filed or granted. Her technical contributions have been recognized by prestigious awards within and outside of IBM on a regular basis. She is an ACM Distinguished Member. She was a member of the inaugural New Voices program of the U.S. National Academies (1 of 18 selected nationwide) and represented young U.S. scientists at the World Laureates Forum Young Scientists Forum in 2019 (1 of 4 selected nationwide). Yunyao obtained her Ph.D. in Computer Science & Engineering, along with dual master's degrees (an M.S.E. in Computer Science & Engineering and an M.S. in Information), from the University of Michigan. She attended college at Tsinghua University, Beijing, China, graduating with dual degrees: a B.E. in Automation and a B.S. in Economics.


Invited Speakers

Longqi Yang (Microsoft): Towards Goal-directed Content Recommendation

People's content choices (e.g., podcasts, music) are driven by their short-term intentions and long-term goals, which are often underserved by today's recommendation systems. This is mainly because higher-order goals are often unobserved, and recommenders are typically trained to promote popular items and to reinforce users' historical behavior. As a result, the utility and user experience of content consumption can suffer. This talk will cover behavioral experiments that quantify the effects of goal-agnostic recommenders, along with algorithmic techniques to improve them.

Bio: Longqi Yang is a Senior Applied Researcher at Microsoft's Office of Applied Research. His research centers around interactive machine learning systems (e.g., recommender systems) and their applications to the future of work, productivity, and wellbeing. He received his Ph.D. in Computer Science from Cornell University. More details can be found on his website: https://ylongqi.com

Markus Schedl (JKU): Using NLP for Emotion-Aware Music Exploration, Lyrics and Playlist Analysis

In this talk, I will showcase the use of NLP techniques for several music-related tasks carried out at the Institute of Computational Perception of the Johannes Kepler University Linz. More precisely, I will briefly introduce our latest research on lyrics analysis, text-based playlist clustering, and emotion-aware music exploration and recommendation.

I will report findings from our studies on genre and temporal differences in song lyrics, and on uncovering the extent to which the sequential ordering of tracks in user-generated playlists matters for different playlist types identified by their titles. Furthermore, I will briefly introduce EmoMTB, our emotion-aware music exploration and recommendation interface, which applies emotion recognition techniques to user-generated texts.

Bio: Markus Schedl is a full professor at the Johannes Kepler University Linz (JKU), affiliated with the Institute of Computational Perception, where he leads the Multimedia Mining and Search group. In addition, he is head of the Human-centered AI group at the Linz Institute of Technology (LIT) AI Lab. He graduated in Computer Science from the Vienna University of Technology and earned his Ph.D. in Computer Science from the Johannes Kepler University Linz. Markus further studied International Business Administration at the Vienna University of Economics and Business Administration as well as at the Handelshögskolan of the University of Gothenburg, earning a Master's degree. His main research interests include recommender systems, user modeling, information retrieval, machine learning, natural language processing, multimedia, data analysis, and web mining. He has (co-)authored more than 200 refereed conference papers and journal articles, published in venues including ACM Multimedia, RecSys, SIGIR, and ISMIR, as well as the Journal of Machine Learning Research, ACM Transactions on Information Systems, IEEE Transactions on Affective Computing, User Modeling and User-Adapted Interaction (UMUAI), and PLOS ONE. Furthermore, he served as a program co-chair of the International Society for Music Information Retrieval (ISMIR) conference in 2020.


Juhan Nam (KAIST): Music Auto-Tagging: from Audio Classification to Word Embedding

Music auto-tagging is one of the main audio classification tasks in the field of music information retrieval. Leveraging advances in deep learning, particularly convolutional neural networks for image classification, researchers have proposed novel neural network architectures for music to improve annotation and retrieval performance. However, this classification approach has a limitation: the model can handle only a fixed set of labels that describe music, and it does not consider the semantic correlations between the labels. Recent approaches have addressed these issues by associating audio embeddings with word embeddings, so that labels are located in a shared vector space. This allows the model to predict labels unseen during training, or to retrieve music from any word query. This talk reviews the advances in music auto-tagging as research interest moves toward combining it with natural language processing techniques.
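To make the joint embedding idea concrete, here is a minimal illustrative sketch in Python (not code from the talk; the encoder, embeddings, and tag list are hypothetical stand-ins): an audio clip and a set of tag words live in one shared vector space, and tags are ranked by cosine similarity, so even tags unseen during training can be scored.

    # Minimal sketch of zero-shot music auto-tagging via a shared
    # audio-word embedding space. Random vectors stand in for a trained
    # audio encoder and pretrained word embeddings (both hypothetical).
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    audio_emb = rng.normal(size=300)  # stand-in for audio_encoder(clip)
    tag_embs = {t: rng.normal(size=300)  # stand-in for word_embedding(t)
                for t in ["jazz", "mellow", "energetic", "acoustic"]}

    # Annotation: rank all tags (seen or unseen in training) for one clip.
    scores = {t: cosine(audio_emb, e) for t, e in tag_embs.items()}
    for tag, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{tag}: {s:.3f}")
    # Retrieval flips the direction: embed a query word and rank clips
    # by similarity to it.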

Bio: Juhan Nam is an Associate Professor at the Korea Advanced Institute of Science and Technology (KAIST). He leads the Music and Audio Computing Lab, a music research group working on various topics with the aim of improving the ways people enjoy, play, and make music through technology. More details can be found on his website: https://mac.kaist.ac.kr/~juhan/

Anna Huang (Google Magenta): Tuning Music Transformer

Music Transformer is an expressive language model for music, offering exciting potential for creative exploration. In the AI Song Contest, we saw artists obtain a range of compelling results by feeding it different musical fragments to elaborate on. However, finding something novel and appropriate can take many iterations; with more control, it could be possible to steer the exploration process. In this talk, I'll discuss preliminary work that takes both ML and HCI approaches to "tuning" Music Transformer toward users' creative goals, as well as a common framework for evaluating progress in generative models and interfaces.

Bio: Anna Huang is a Research Scientist at Google Brain, working on the Magenta project. She is also an Adjunct Professor at Mila / Université de Montréal. Her research focuses on designing generative models and interfaces to support music making and, more generally, the creative process. Her work is at the intersection of machine learning, human-computer interaction, and music. She is the creator of the ML model Coconet, which powered Google's first AI Doodle, the Bach Doodle; in two days, it harmonized 55 million melodies from users around the world. She is a judge and organizer for the AI Song Contest, and a guest editor for TISMIR.