ACM Multimedia 2018: Call for workshop papers

MMAC: Multi-Modal Affective Computing of Large-Scale Multimedia Data

With the rapid development of digital photography and social networks, people have grown accustomed to sharing their lives and expressing their opinions online. As a result, user-generated social media data, including text, images, audio, and video, are growing rapidly, urgently demanding advanced techniques for the management, retrieval, and understanding of these data. Most existing work on multimedia analysis has focused on cognitive content understanding, such as scene understanding, object detection, and recognition. Recently, with the growing demand for emotion representation in artificial intelligence, multimedia affective analysis has attracted increasing research effort from both the academic and industrial communities.
        Affective computing of user-generated large-scale multimedia data is challenging for several reasons. Because emotion is a subjective concept, affective analysis requires a multidisciplinary understanding of human perception and behaviour. Furthermore, emotions are often jointly expressed and perceived through multiple modalities, so multi-modal data fusion and complementarity need to be explored. Recent solutions based on deep learning require large-scale, finely labeled data. The development of affective analysis is also constrained by the affective gap between low-level affective features and high-level emotions, and by the subjectivity of emotion perception among viewers under the influence of social, educational, and cultural factors. Recent advances in machine learning and artificial intelligence, however, have made large-scale affective computing of multimedia possible.
        This ACM MM 2018 workshop, "MMAC: Multi-Modal Affective Computing of Large-Scale Multimedia Data", calls for papers reporting the most recent progress on multi-modal affective computing of large-scale multimedia data and its wide range of applications. It targets a mixed audience of researchers and product developers from several communities, including multimedia, machine learning, psychology, and artificial intelligence. Topics of interest include, but are not limited to:
  • Affective content understanding of uni-modal text, images, and speech
  • Emotion based multi-modal summarization of social events
  • Affective tagging, indexing, retrieval and recommendation of social media
  • Human-centered emotion perception prediction in social networks
  • Group emotion clustering and personality inference
  • Psychological perspectives on affective content analysis
  • Weakly-supervised/unsupervised learning for affective computing
  • Deep learning and reinforcement learning for affective computing
  • Fusion methods for multi-modal emotion recognition
  • Benchmark datasets and performance evaluation
  • Affective computing-based applications in entertainment, robotics, advertisement, education, healthcare, biometrics, etc.
Important Dates
  • Submission deadline: July 8, 2018
  • Notification of acceptance: August 5, 2018
  • Camera-ready deadline: August 12, 2018
Submission Instructions
Authors should prepare their manuscripts according to the Guide for Authors of ACM MM 2018, available at Paper Submission, and submit their papers at the submission page (to open later).

Outstanding accepted papers will be encouraged to submit an extended version to an ACM TOMM special issue. The call for papers is available for download.

Abstract and Keywords
The abstract and keywords are the primary source for assigning papers to reviewers, so make sure they form a concise and complete summary of your paper, with enough information for someone who does not read the full paper to know what it is about.

Double-Blind Review
MMAC will use a double-blind review process for paper selection. Authors must not include author names or affiliations in their manuscripts.

Organizers
  • Dr. Sicheng Zhao, University of California Berkeley, USA. E-mail:
  • Prof. Hongxun Yao, Harbin Institute of Technology, China. E-mail:
  • Dr. Min Xu, University of Technology Sydney, Australia. E-mail:
  • Prof. Qingming Huang, University of Chinese Academy of Sciences, China. E-mail:
  • Prof. Björn W. Schuller, Imperial College London, UK. E-mail: