ACM Multimedia 2018: Call for workshop papers

MMAC: Multi-Modal Affective Computing of Large-Scale Multimedia Data

With the rapid development of digital photography and social networks, people have become accustomed to sharing their lives and expressing their opinions online. As a result, user-generated social media data, including text, images, audio, and video, are growing rapidly, which urgently demands advanced techniques for the management, retrieval, and understanding of these data. Most existing work on multimedia analysis has focused on cognitive content understanding, such as scene understanding and object detection and recognition. Recently, with the significant demand for emotion representation in artificial intelligence, multimedia affective analysis has attracted increasing research effort from both academic and industrial communities.
        Affective computing of user-generated large-scale multimedia data is challenging for several reasons. Emotion is a subjective concept, so affective analysis involves a multidisciplinary understanding of human perceptions and behaviours. Furthermore, emotions are often jointly expressed and perceived through multiple modalities, so multi-modal data fusion and complementation need to be explored. Recent solutions based on deep learning require large-scale data with fine-grained labels. The development of affective analysis is further constrained by the affective gap between low-level features and high-level emotions, and by the subjectivity of emotion perception among viewers under the influence of social, educational, and cultural factors. Recently, great advances in machine learning and artificial intelligence have made large-scale affective computing of multimedia possible.
        This ACM MM 2018 workshop, "MMAC: Multi-Modal Affective Computing of Large-Scale Multimedia Data", calls for papers reporting the most recent progress on multi-modal affective computing of large-scale multimedia data and its wide applications. It targets a mixed audience of researchers and product developers from several communities, including multimedia, machine learning, psychology, and artificial intelligence. Topics of interest include, but are not limited to:
  • Affective content understanding of uni-modal text, images, and speech
  • Emotion based multi-modal summarization of social events
  • Affective tagging, indexing, retrieval and recommendation of social media
  • Human-centered emotion perception prediction in social networks
  • Group emotion clustering and personality inference
  • Psychological perspectives on affective content analysis
  • Weakly-supervised/unsupervised learning for affective computing
  • Deep learning and reinforcement learning for affective computing
  • Fusion methods for multi-modal emotion recognition
  • Benchmark dataset and performance evaluation
  • Affective computing-based applications in entertainment, robotics, advertisement, education, healthcare, and biometrics, etc.
Program

Speaker: Prof. Jia Jia, Tsinghua University
Title: Mental Health Computing via Harvesting Social Media Data

Abstract. Psychological stress and depression threaten people's health, and it is non-trivial to detect stress or depression in a timely manner for proactive care. With the popularity of social media, people are used to sharing their daily activities and interacting with friends on social media platforms, making it feasible to leverage online social media data for stress and depression detection. In this talk, we will systematically introduce our work on stress and depression detection using large-scale benchmark datasets from real-world social media platforms, covering 1) stress-related and depression-related textual, visual, and social attributes from various aspects, 2) novel hybrid models for binary stress detection, stress event and subject detection, and cross-domain depression detection, and finally 3) several intriguing phenomena indicating the special online behaviours of stressed and depressed people. We will also demonstrate our mental health care applications at the end of the talk.

Bio. Dr. Jia Jia is an associate professor in the Department of Computer Science and Technology, Tsinghua University. Her main research interests are affective computing and human-computer speech interaction. She has been awarded the ACM Multimedia Grand Challenge Prize (2012) and Scientific Progress Prizes from the National Ministry of Education twice (2009, 2016). She has authored about 70 papers in leading conferences and journals, including T-KDE, T-MM, T-MC, T-ASLP, T-AC, ACM Multimedia, AAAI, IJCAI, and WWW. She also has wide research collaborations with Tencent, SOGOU, Huawei, Siemens, MSRA, Bosch, etc.

Important Dates
  • Submission deadline: July 21, 2018 (deadline extended)
  • Notification of acceptance: August 5, 2018
  • Camera-ready deadline: August 12, 2018
Submission Instructions
Authors should prepare their manuscripts according to the Guide for Authors of ACM MM 2018, available at Paper Submission. Please submit your papers at the submission page and select the track "The Joint Workshop of 4th Workshop on Affective Social Multimedia Computing and first Multi-Modal Affective Computing of Large-Scale Multimedia Data Workshop (ASMMC–MMAC 2018)".

Authors of outstanding accepted papers will be invited to submit an extended version to an ACM TOMM special issue and an NPL special issue. You can download the calls for papers for TOMM and NPL.

Abstract and Keywords
The abstract and the keywords are the primary source for assigning papers to reviewers, so make sure they form a concise and complete summary of your paper, with enough information to tell someone who does not read the full paper what it is about.

Double-Blind Review
MMAC will use a double-blind review process for paper selection. Authors should not provide author names or affiliations in their manuscript.

Organizers
  • Dr. Sicheng Zhao, University of California Berkeley, USA. E-mail:
  • Prof. Hongxun Yao, Harbin Institute of Technology, China. E-mail:
  • Dr. Min Xu, University of Technology Sydney, Australia. E-mail:
  • Prof. Qingming Huang, University of Chinese Academy of Sciences, China. E-mail:
  • Prof. Björn W. Schuller, Imperial College London, UK. E-mail: