Excited to announce MuSe 2021!
The MuSe challenge and associated workshop continue to push the boundaries of integrated audio-visual and text-based sentiment analysis and emotion sensing. In its first edition, we posed the prediction of continuous-valued dimensional affect, including the novel dimension of trustworthiness, as well as the detection of 10-class domain-specific topics as targets of discrete emotion classes, on an extremely large and natural set of user-generated data.
MuSe’s goal is to bring together machine learning researchers from signal-oriented audio-visual processing and symbolic natural language processing, as well as researchers working specifically on affect, emotion, and sentiment.
The call for participation and papers attracted registrations from 21 teams across Asia, Europe, and North America. The programme committee accepted 5 papers after double-blind peer review. Beyond the papers, insightful keynotes and invited talks will guide us to a better understanding of the state of the field, future directions, and the challenges of bringing the technology to fruition:
Vehicle Interiors as Sensate Environments, Dr. Michael Würtenberger
(who is currently Vice President at BMW Research, Innovations, New Technology, "Project AI", Germany)
Personalized Machine Learning for Human-centered Machine Intelligence, Dr. Oggi Rudovic
(who is currently Marie Curie Fellow at the MIT Media Lab and at Apple Inc.)
Furthermore, we are pleased to feature not one but three invited speakers in a series of inspiring talks on the subject: Multimodal Social Media Mining (Dr. Yiannis Kompatsiaris, CERTH-ITI); End2You: Multimodal Profiling by End-to-End Learning and Applications (Panagiotis Tzirakis, Imperial College London); and Extending Multimodal Emotion Recognition with Biological Signals: Presenting a Novel Dataset and Recent Findings (Alice Baird, University of Augsburg).