Invited Speakers

Dr. James Pennebaker

Rethinking sentiment analysis: Detecting emotion through function words

Sentiment analysis is most frequently done by counting emotion words or other emotionally laden terms. Yet language did not evolve to express emotion; emotion is something people convey through the ways they talk, not just the content of their speech. A series of studies is discussed that focuses on function words such as pronouns, prepositions, and other common parts of speech. By tracking function word use, we can detect people's emotional states by determining how they are thinking about their social worlds.

Dr. James Pennebaker is the Regents Centennial Professor and Executive Director of Project 2021 in the Department of Psychology at the University of Texas at Austin. He and his students are exploring natural language use, group dynamics, and personality in educational and other real-world settings. His earlier work on expressive writing found that simple writing and/or talking exercises can improve physical health and work performance. His cross-disciplinary research spans linguistics, clinical and cognitive psychology, communications, medicine, and computer science.

Dr. Dipankar Chakravarti

The Experience and (versus) the Expression of Affect: Challenges for Text Analysis

Dr. Rajesh Bagchi

The Impact of Crowding on Calorie Consumption

We present five studies showing that crowding increases calorie consumption. These effects occur because crowding increases distraction, which hampers cognitive thinking and evokes more affective processing. When consumers process information affectively, they consume more calories. We provide process support and discuss theoretical and managerial implications.

(Work authored by Stefan Hock and Rajesh Bagchi)

Dr. Rajesh Bagchi is a Professor of Marketing and Sorensen Jr. Fellow at the Pamplin College of Business, Virginia Tech.

Dr. Jennifer Healey

Creating Emotionally Intelligent Vehicles

In the near future, as vehicles become more and more autonomous, the amount of direct interaction between passengers and vehicle control will diminish. Beyond controlling the vehicle, drivers are often responsive to their passengers' emotional states. Are their passengers comfortable? Afraid? Excited? Happy? Content? Bored? In response, a courteous driver might adjust their driving style, make conversation, or simply leave their passengers alone. In the case of an acute emotional event, such as severe distress, the vehicle would know that it was time to take immediate action and call for help, even if the passenger was unable to do so. Creating emotionally aware vehicles will be the next step toward replacing drivers with truly human-level intelligence.

Dr. Healey has been working in the fields of Affective Computing and Intelligent Vehicles for over two decades. She received her Bachelor's, Master's, and Doctoral degrees from MIT, and her work on affective computing at the MIT Media Lab has generated hundreds of articles in the popular press and dozens of academic publications. Her work is featured in Rosalind Picard's seminal book "Affective Computing", and her paper "Detecting Stress During Real World Driving Tasks Using Physiological Sensors" won the 10-Year Impact Award from the IEEE Transactions on Intelligent Transportation Systems. Her work on the future of autonomous vehicles has also been featured in multiple publications and in a TED talk with over 800,000 views. She has been an invited speaker at the TED Institute, the Council on Foreign Relations, the International Conference on Affective Computing and Intelligent Interaction, and IEEE Wireless Health. She has served for many years on the committee of the International Symposium on Wearable Computers and has twice been its Conference Chair. She is an Associate Editor of ACM Interactive, Mobile, Wearable and Ubiquitous Technologies, and serves on the program committees and as a reviewer for diverse conferences including NIPS, AAAI, AutomotiveUI, CHI, ISWC, and BSN. She holds over 30 patents and has written dozens of papers in IEEE and ACM journals and conferences.

Dr. Bjoern Schuller

All Content about Computational Affective Content Audio-Analysis?

In affective multimedia content analysis, one often has diverse audio cues such as speech, music, or sound available as sources of information. Recently, significant progress on this challenging task has led to increased real-world usage and commercial exploitation of the corresponding methods. But how reliably does this work these days? And how specific do the methods need to be depending on the type of audio, i.e., speech, music, or general sound? This talk first gives insight into a range of affective content analysis tasks, such as in TV broadcasts, YouTube video blogs, and Voice over IP video chat, focusing on speech analysis. The examples stem from European research projects as well as industrial use cases. A peek under the hood of the latest engines, which learn deep end-to-end models from big(ger) audio data, follows, showing how such approaches can likewise be used for affective music and sound analysis, or even to go beyond audio as the target modality of interest. A unified methodology is then presented that can be applied independently of the audio type. Transfer learning across audio manifestations and databases is added as an interesting avenue for overcoming the field's ever-present bottleneck of sparse data. Future perspectives include dealing with mixed real-world soundscapes and the potential approaches and architectures best suited to such settings.

Dr. Björn Schuller is a Full Professor and Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany, an Associate Professor (Reader) and Head of GLAM -- the Group on Language, Audio & Music -- at Imperial College London, UK, and CEO of audEERING, an audio intelligence company.

Dr. Cristian Danescu-Niculescu-Mizil

Conversational markers of social dynamics

Can conversational dynamics (the nature of the back-and-forth between people) predict the outcomes of social interactions? In this talk I will introduce a computational framework for modeling conversational dynamics and for extracting the social signals they encode, and apply it in a variety of settings. First, I will show how these signals can be predictive of the future evolution of a dyadic relationship. In particular, I will characterize friendships that are unlikely to last and examine temporal patterns that foretell betrayal in the context of the Diplomacy strategy game. Second, I will discuss conversational patterns that emerge in problem-solving group discussions, and show how these patterns can indicate how (in)effective the collaboration is. I will conclude by focusing on the effects of under- and over-confidence on the dynamics and outcomes of decision-making discussions.

(This talk includes joint work with Jordan Boyd-Graber, Liye Fu, Dan Jurafsky, Srijan Kumar, Lillian Lee, Jure Leskovec, Vlad Niculae, Chris Potts, Arthur Spirling and Justine Zhang.)

Dr. Cristian Danescu-Niculescu-Mizil is an Assistant Professor in the Department of Information Science at Cornell University.