The AAAI-20 Workshop On Affective Content Analysis

AFFCON2020: Interactive Affective Response

February 7, 2020

Room Details: Bryant, 2nd Floor

Hilton New York Midtown

New York, USA

Invited Speakers

Using Big Data to Understand and Improve Online Support Groups

Robert Kraut, Human Computer Interaction Institute, CMU

Many people with serious diseases use online groups to obtain informational and emotional support and to understand what to expect as they live with their disease. The benefits from online support groups are likely to accrue primarily to people who participate in them and communicate with others. But in a wide variety of online groups, a substantial number of participants drop out before they could plausibly receive any benefits themselves or provide benefits to others. Because people join these groups to get support, the amount and type of support they receive is likely to determine whether they find the groups valuable and continue to participate in them.

This talk will describe machine learning techniques to automatically identify the nature of communication exchanged in online support groups. Are members asking for informational or emotional support, self-disclosing their fears or events from their cancer journeys, providing support, welcoming newcomers or something else? The automated analysis is used to understand how the language in these sites influences how long people stay, the support they receive and their satisfaction with it. It also forms the basis of interventions to better match support providers with support recipients.

Multimodal AI: Understanding Human Communication Dynamics

Louis-Philippe Morency, Language Technologies Institute, CMU

Human face-to-face communication is a little like a dance, in that participants continuously adjust their behaviors based on verbal and nonverbal cues from the social context. Today's computers and interactive devices are still lacking many of these human-like abilities to hold fluid and natural interactions. Leveraging recent advances in machine learning, audio-visual signal processing and computational linguistics, new research focuses on creating computational technologies able to analyze, recognize and predict subtle human communicative behaviors in social context. Central to this research effort is the introduction of new probabilistic models able to learn the temporal and fine-grained latent dependencies across behaviors, modalities and interlocutors. In this talk, I will present some of our recent achievements modeling multiple aspects of human communication dynamics, motivated by applications in healthcare (depression, PTSD, suicide, autism), education (learning analytics), business (negotiation, interpersonal skills) and social multimedia (opinion mining, social influence).


Building Intelligent and Visceral Machines

Daniel McDuff, Microsoft

Humans have evolved highly adaptive behaviors that help us survive and thrive, and affect plays an important role in this adaptation. AI can help us advance the fundamental understanding of human behavior and emotions, build smarter technology, and ultimately help people. In this talk, I will present novel methods for physiological and behavioral measurement via ubiquitous hardware. Then I will present state-of-the-art approaches for synthesis of these signals. I will show examples of new human-computer interfaces and autonomous systems that leverage behavioral and physiological models, including affect-aware natural language conversation systems, cross-domain learning systems and vehicles with intrinsic emotional drives. This technology presents many opportunities for building natural user interfaces and more intelligent machines; however, it also raises questions about the ethics of designing systems that measure and leverage highly personal data. Throughout the talk I will comment on many of these questions and propose design principles to help address them.

Reinforcement Learning from Affective Cues in Dialog

Natasha Jaques, Google Brain

Since humans are the ultimate authority on what constitutes a good conversation, we would like to be able to train conversational AI models with human feedback. Yet asking people to manually label good performance is cumbersome and time-consuming. A more scalable approach is to learn from the implicit, affective cues already present in the conversation. We leverage transfer learning to fine-tune a pre-trained dialog model with human feedback using reinforcement learning, and show that learning from cues like a user's sentiment is more effective than relying on manual labels. However, learning online from human interaction presents the risk of learning inappropriate behavior. To be safe, the policy must be learned offline, and tested before being re-deployed to interact with humans. This means we must apply reinforcement learning to a fixed batch of off-policy data without the ability to explore online in the environment. We develop several techniques that enable learning effectively in this challenging setting. We apply these techniques to learn novel conversational rewards, including reducing the toxicity of language generated by the model. Experiments deploying these models to interact with humans reveal that learning from implicit, affective signals is an effective way to improve conversation quality.
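As a rough illustration of the idea of learning from implicit affective cues over a fixed batch of logged data, the toy sketch below scores user replies with a small sentiment lexicon and nudges a simple preference "policy" toward responses that earned positive implicit reward. The lexicon, the two-response policy, and the update rule are illustrative assumptions for this sketch, not the actual models or rewards used in the work described above.

```python
import numpy as np

# Tiny illustrative sentiment lexicon (an assumption for this sketch).
POSITIVE = {"great", "thanks", "love", "haha"}
NEGATIVE = {"boring", "ugh", "stop", "bad"}

def sentiment_reward(user_reply: str) -> float:
    """Implicit reward: +1 per positive cue word, -1 per negative one."""
    words = user_reply.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# A fixed batch of logged (response_id, user_reply) pairs: learning happens
# offline, without any new online interaction with humans.
batch = [(0, "haha great"), (1, "ugh stop"), (0, "thanks"), (1, "boring")]

# Toy policy: one preference score per candidate response, moved toward
# responses whose logged replies carried positive implicit reward.
scores = np.zeros(2)
lr = 0.5
for response_id, reply in batch:
    scores[response_id] += lr * sentiment_reward(reply)

print(scores)  # response 0 accumulates positive reward, response 1 negative
```

The key point the sketch mirrors is that the reward signal is mined from replies already in the logs, so no extra human labeling pass is needed.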

Reifying the Possibility Space of IoT Automation Practices: A Machine Learning Approach

Donna Hoffman and Tom Novak, George Washington University

The consumer IoT connects everyday objects to the Internet. The fundamental value proposition of the IoT is that consumers should be able to connect every smart object to every other smart object and digital service to better automate their lives. In a world where anything can be connected to anything else, it is important to understand what physical and digital services and products consumers actually connect together and to what end (Tibbets 2018).

One prominent example of a company that enables consumers to connect any object to any other object is the web service IFTTT (“If This Then That”). IFTTT, with 14 million users, allows consumers to build if-then rules, called applets, to connect hundreds of different devices and services together, regardless of whether they were originally designed to be connected. IFTTT applets are assemblages that automate the capacity of one component to affect (“if this” trigger) with the capacity of another component to be affected (“then that” action).
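The applet structure described above, a trigger ("if this") paired with an action ("then that"), can be sketched as a small data structure. This is a minimal illustrative sketch: the class, field names, and example services are assumptions for exposition, not IFTTT's actual data model or API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Applet:
    """An IFTTT-style if-then rule connecting two smart components."""
    trigger_service: str  # component with the capacity to affect ("if this")
    trigger_event: str
    action_service: str   # component with the capacity to be affected ("then that")
    action_command: str

def fire(applet: Applet, event: str) -> Optional[str]:
    """Return the action to run if the incoming event matches the trigger."""
    if event == applet.trigger_event:
        return f"{applet.action_service}.{applet.action_command}"
    return None

# Hypothetical applet: turn on the lights when the motion sensor fires.
applet = Applet("motion_sensor", "motion_detected", "smart_light", "turn_on")
print(fire(applet, "motion_detected"))  # → smart_light.turn_on
```

Nothing in the `Applet` pairing requires that the two services were designed to interoperate, which is the point of the assemblage framing: any capacity to affect can be wired to any capacity to be affected.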

Between 2011 and 2016, IFTTT users created, or realized, 20,675 unique IFTTT automation applet assemblages. Each realized assemblage emerged as one of a population from an underlying topological possibility space (DeLanda 2006). The 20,675 realized applets are only 2.26% of the 916,250 possible applets that could have been created. These realized applets allow us to empirically study the shape of both the realized and full possibility space, as well as identify the underlying points of attraction that guide the recurrent processes by which automation practice assemblages are formed and structured (DeLanda 2006, 2016).

Since assemblage theory has mathematical and topological underpinnings (DeLanda 2002, 2011), we can operationalize key concepts in a way that supports data-based empirical and computational analysis. We first use word embeddings (Mikolov, Chen, Corrado and Dean 2013; Mikolov et al. 2013a, 2013b) to understand the “latent language” of automation assemblages, translating English-language words into high-dimensional vector representations using word2vec. We then use HDBSCAN, a recent density-based clustering approach (Campello, Moulavi and Sander 2013; McInnes, Healy and Astels 2017), to identify territorialized automation practices. To visualize the manifold of the possibility space in which these automation practices are embedded, we use UMAP (McInnes, Healy and Melville 2018), a new technique for nonlinear dimensionality reduction and manifold learning.

Using these machine learning methods, our computational approach allows us to operationalize and visualize an assemblage theory interpretation of the emergence of automation practices in the IoT. We build a concrete representation of the possibility space of automation assemblages that reveals the boundaries of territorialized automation practices, and use this representation as a basis for qualitative analysis, theory development, and estimates of future growth.