The AAAI-18 Workshop on Affective Content Analysis

(AffCon2018)

News

  • The schedule is up! (1.15.2018)
  • Invited speakers and abstracts are up! (1.15.2018)
  • Registration Open! Early registration closes Dec 8, 2017
  • List of accepted papers is up! (11.19.2017)
  • UPDATED Submission Deadlines:
    • October 20, 2017: Abstract submission (optional)
    • October 30, 2017: Full paper submission
    • November 10, 2017: Extended submission deadline


Introduction

Affect analysis refers to the set of techniques that identify and measure the ‘experience of an emotion’. This workshop focuses on analyzing affect in content, including text, audio, images, and videos. The word ‘affective’ here refers to emotion, sentiment, personality, mood, and attitudes, including subjective evaluations, opinions, and speculations. All methods and models that measure affective responses to content are in scope for the workshop.

Work on affect analysis in language and text spans many research communities, including computational linguistics, consumer psychology, human-computer interaction (HCI), marketing science, and cognitive science. Computational linguists study how language evokes as well as expresses emotion. Consumer psychologists examine human affect by drawing upon grounded psychological theories of human behavior. The HCI community studies human responses as part of user experience evaluation. This workshop aims to bring together researchers from these disciplines to stimulate discussion of the open research problems in affect analysis, with an emphasis on language and text.

Computational models of consumer psychology theories present a huge opportunity to guide the construction of intelligent systems that understand human reactions, and tools from linguistics and machine learning provide attractive methods for realizing that opportunity. Models of affect have recently been adapted for social media platforms, enabling new approaches to understanding users’ opinions, intentions, and expressions. However, the exponentially growing size and the dynamic, multimedia nature of this data make it difficult to detect and measure affect. Furthermore, the subjective nature of human affect calls for measurement approaches that recognize multiple interpretations of human responses. A few key challenges are:

  • Standardizing the measurement of affect in order to meaningfully compare different affective models against each other
  • Addressing the challenges in cross-media, cross-domain and cross-platform affect analysis
  • Identifying consumer psychology theories and behaviors related to affect, which are amenable to computational modeling
  • Building language-based affect models as input for other data science applications

The AI community is well poised to propose new solutions, approaches, and frameworks to tackle these and other challenges. This workshop invites papers that address such topics, propose novel solutions to well-established problems, offer models and measurements of affect, and identify the affect-related dimensions best suited to studying consumer behavior. Potential examples include deep learning for affect analysis and the application of traditional affective computing algorithms (built on multi-modal data and sensors) to text. Another area of focus for this workshop is the need for standardized baselines, datasets, and evaluation metrics; hence, papers describing novel language resources, evaluation metrics, and standards for affect analysis and understanding are also invited.

Workshop Program

FORMAT

  • Invited Talks in marketing, linguistics and cognition by Dr. Dipankar Chakravarti, Dr. Rajesh Bagchi, Dr. Jonah Berger, Dr. James Pennebaker, Dr. Cristian Danescu-Niculescu-Mizil and Dr. Jennifer Healey.
  • Paper, poster and dataset presentations
  • Interactive poster session and panel discussion

Contact Us

If you have any questions about the workshop scope or need further information, please do not hesitate to send an e-mail to:

nchhaya [AT] adobe.com

jaidka [AT] sas.upenn.edu

Thank you!