Human behavior involves not only language, but also facial expressions, body movements, voice tone, and other modalities. Understanding and simulating human behavior therefore requires integrating this multimodal information rather than relying on a single modality. Advances in the field of Multimodal Human Behavior Understanding and Generation (MUG) can deepen our understanding of the multimodal nature of human behavior. Furthermore, multimodal understanding and generation of human behavior can help computer systems better perceive, understand, and respond to human intentions and emotional states, making human-machine interaction more natural and smooth and thereby enhancing user experience. Hence, this special session aims to profile recent developments in multimodal biometric systems, especially trustworthy multimodal data integration, cognitive and neurological underpinnings, generative modeling of human behavior, and potential in broad real-world applications.


We invite practitioners, researchers, and engineers from biometrics, signal processing, computer vision, and machine learning to contribute their expertise to address the highlighted challenges.

Organizing Committee:

Important Dates:

Full Paper Submission: July 3, 2024 extended to July 10, 2024, 23:59:59 PDT  

Acceptance Notice: July 24, 2024, 23:59:59 PDT  

Camera-Ready Paper:  TBD

Topics of Interest:

The Special Session will focus on all aspects of multimodal human behavior spotting, recognition, and generation. More specifically, the committee encourages the submission of papers making fundamental or practical contributions to Multimodal Understanding and Generation in connection with various biometric topics, including but not limited to:

Submission Guidelines: