In its third iteration, MuSe continues to foster new developments in the broad area of multimodal sentiment and emotion recognition, this year calling for participation in three sub-challenges: humour recognition in press conference recordings, multi-class emotion regression for user-generated emotional reaction videos, and dimensional emotion recognition in a stress-inducing setting.
Overall, 41 academic teams from 31 different institutions and 16 countries registered for participation in the challenge. Of these, 16 teams submitted predictions for at least one of the sub-challenges, and in each sub-challenge the official baseline was surpassed. Following a double-blind reviewing process, 14 papers were accepted, 10 of which describe systems used in the challenge.
We would like to thank our keynote speakers for their insightful talks, which deepen our understanding of state-of-the-art multimodal affect analysis and point to future perspectives for the field:
Uncovering the Nuanced Structure of Expressive Behavior Across Modalities, Dr. Alan Cowen (Hume AI)
The Dos and Don’ts of Affect Analysis, Dr. Shahin Amiriparian (Chair of Embedded Intelligence in Health Care and Wellbeing, University of Augsburg, Germany)