IEEE VR 2024 2nd Annual Workshop on Multi-modal Affective and Social Behavior Analysis and Synthesis in Extended Reality (MASSXR)

March 16-17, 2024 (Orlando, FL, USA)

NEWS

Introduction

We are excited to announce the 2nd Annual Workshop on Multi-modal Affective and Social Behavior Analysis and Synthesis in Extended Reality. This year, we delve deeper into the intersection of immersive technologies and social-affective computing, with a focus on the advances and challenges in creating truly user-aware interaction systems within XR environments.

As the field continues to advance, we focus on key research questions and emerging topics such as:

Our objective remains to bring together a diverse group of researchers and practitioners from fields such as AI, 3D computer vision/graphics, computer animation, and social-affective computing. We aim to discuss the current state, future directions, challenges, and opportunities in developing immersive embodied intelligence. This year, we particularly emphasize the integration of recent AI advancements with XR technologies to forge more immersive, responsive, and human-centric experiences. We encourage innovative perspectives, experimental results, and theoretical advancements in these areas. We believe that the workshop will continue to provide collaboration opportunities for researchers and set the stage for future innovations in social XR.

We look forward to your contributions and to another year of insightful discussions.

Location and date

Workshop format

The workshop will feature three keynote speakers, a selection of research paper presentations, and an interactive panel discussion involving the keynote speakers, organizers, and the audience.

KEYNOTE Speakers 

Name: Carlos Busso (The University of Texas at Dallas)

Title:  Multimodal generation of data-driven human-like behaviors for socially interactive agents

Abstract: Nonverbal behaviors externalized through head, face, and body movements of socially interactive agents (SIAs) play an important role in human-computer interaction (HCI). Believable movements for SIAs have to be meaningful and natural. Previous studies mainly relied on rule-based or speech-driven approaches. This presentation will discuss our efforts to bridge the gap between these two approaches, overcoming their limitations. These multimodal models have opened opportunities to generate characteristic behaviors associated with a given discourse class by learning the rules from data, capturing principled temporal relationships and dependencies between speech and gestures. The talk will also discuss strategies to quantify entrainment with the user to increase engagement and close the loop in the interaction. Advances in this area will lead to SIAs that can express meaningful, human-like gestures precisely synchronized with speech, enabling novel avenues for artificial agents in human-machine interaction.

Bio: Carlos Busso is a Professor in the Department of Electrical and Computer Engineering at The University of Texas at Dallas, where he is also the director of the Multimodal Signal Processing (MSP) Laboratory. His research interests are in human-centered multimodal machine intelligence and its applications, with a focus on the broad areas of speech processing, affective computing, and machine learning methods for multimodal processing. He has worked on speech emotion recognition, multimodal behavior modeling for socially interactive agents, and robust multimodal speech processing. He is a recipient of an NSF CAREER Award. In 2014, he received the ICMI Ten-Year Technical Impact Award. In 2015, his student received the third-prize IEEE ITSS Best Dissertation Award (N. Li). He also received the Hewlett Packard Best Paper Award at IEEE ICME 2011 (with J. Jain) and the Best Paper Award at AAAC ACII 2017 (with Yannakakis and Cowie). He received the Best of IEEE Transactions on Affective Computing Paper Collection in 2021 (with R. Lotfian) and the Best Paper Award from IEEE Transactions on Affective Computing in 2022 (with Yannakakis and Cowie). In 2023, he received the Distinguished Alumni Award in the Mid-Career/Academia category from the Signal and Image Processing Institute (SIPI) at the University of Southern California, as well as the ACM ICMI Community Service Award. He is currently serving as an associate editor of the IEEE Transactions on Affective Computing. He is a member of ISCA and AAAC, a senior member of the ACM, and an IEEE Fellow.

Name: Eakta Jain (University of Florida)

Title: Privacy in XR: Perspectives from the Eye Tracking Front Lines

Abstract: Eye tracking provides critical cues for compelling XR experiences. Research has shown how gaze data is central to functional advances such as foveated rendering, as well as user-centric advances such as communicating affect and personality in virtual avatars. At the same time, both research and practice have brought up vulnerabilities associated with large-scale eye tracking data collection. In this talk, I will discuss some of these tradeoffs as well as open gaps for future research.

Bio: Dr. Eakta Jain is an Associate Professor of Computer and Information Science and Engineering at the University of Florida. She received her PhD and MS degrees in Robotics from Carnegie Mellon University and her B.Tech. degree in Electrical Engineering from IIT Kanpur. She has industry experience at Texas Instruments R&D labs, Disney Research Pittsburgh, and the Walt Disney Animation Studios. Dr. Jain is interested in the safety, privacy, and security of data gathered for user modeling, particularly eye tracking data. Her areas of work include graphics and virtual reality, generation of avatars, and human factors in the future of work and transportation. Her research has been nominated for multiple best paper awards and has been funded through faculty research awards from Meta and Google, federal funding from the National Science Foundation, the National Institute of Mental Health, and the US Department of Transportation, and state funding from the Florida Department of Transportation. Dr. Jain is an ACM Senior Member. She served as Technical Program Chair for the ACM Symposium on Eye Tracking Research and Applications (2020) and the ACM/Eurographics Symposium on Applied Perception (2021). She serves on the ACM SAP Steering Committee (2022-2024) and as a Director on the ACM SIGGRAPH Executive Committee (2022-2025).

Name: Michael Neff (University of California, Davis)

Title: The Challenge of Synthesizing Nonverbal Behavior

Abstract: There is growing evidence that embodiment brings substantial value to VR experiences. For first-person users, this generates a tracking problem: how can a person's motion be accurately tracked and projected into VR in real time? For non-player characters, or anyone who cannot reasonably be tracked, it generates a synthesis problem: how can appropriate motion be synthesized to match the desired dialog and context? In this talk, I will review some of our recent work on gesture synthesis and use it as a basis for talking more broadly about the many open challenges in synthesizing appropriate nonverbal behavior.

Bio: Michael Neff is a Professor in Computer Science and Cinema & Digital Media at the University of California, Davis, where he leads the Motion Lab, an interdisciplinary research effort in character animation and embodied interaction. He holds a Ph.D. from the University of Toronto and is also a Certified Laban Movement Analyst. His research focus has been on character animation, especially modeling expressive movement, nonverbal communication, gesture, and applying performing arts knowledge to animation. Additional interests include human-computer interaction related to embodiment, motion perception, character-based applications, motor control, and VR/XR. Select distinctions include an NSF CAREER Award, the Alain Fournier Award, and several paper awards. He is the former Chair of the Department of Cinema and Digital Media at UC Davis and current Chair of the Graduate Group in Computer Science.


PANELISTS

Aniket Bera

Purdue University, USA

Eakta Jain

University of Florida, USA

Carlos Busso

The University of Texas at Dallas, USA

Michael Neff

UC Davis, USA

Oya Celiktutan

King's College London, UK

Aline Normoyle

Bryn Mawr College, USA

Pablo Cesar

CWI and TU Delft, The Netherlands

Chirag Raman

TU Delft, The Netherlands

Funda Durupinar

UMass Boston, USA

Zerrin Yumak

Utrecht University, The Netherlands

Mar Gonzalez-Franco

Google Labs, USA

Scope

This workshop invites researchers to submit original, high-quality research, survey, or position papers related to multi-modal affective and social behavior analysis and synthesis in XR. Relevant topics include, but are not limited to:

Important Dates


PROGRAM


All times are in Florida, USA local time (UTC-4). See further details here.


Submission Instructions

Authors are invited to submit research, survey, work-in-progress, or position papers:

Papers will be included in the IEEE Xplore library. Authors are encouraged to submit videos to aid the program committee in reviewing their submissions. Please anonymize your submissions, as the workshop uses a double-blind review process. Authors of accepted papers are expected to register and present their papers at the workshop. 

Papers should use the IEEE VR formatting guidelines and be submitted through the IEEE VR 2024 Precision Conference System (PCS).

When starting your submission, please make sure to select the relevant track for the workshop "IEEE VR 2024 - Multi-modal Affective and Social Behavior Analysis and Synthesis in Extended Reality".


Organizing Committee

Funda Durupinar

University of Massachusetts Boston, USA

Zerrin Yumak

Utrecht University, The Netherlands

Oya Celiktutan

King's College London, UK

Pablo Cesar

CWI and TU Delft, The Netherlands

Aniket Bera

Purdue University, USA

Chirag Raman

TU Delft, The Netherlands

Aline Normoyle

Bryn Mawr College, USA

INTERNATIONAL Program Committee



Contact

If you have any questions or remarks regarding this workshop, please contact Funda Durupinar (funda.durupinarbabur[at]umb.edu) or Zerrin Yumak (Z.Yumak[at]uu.nl).