Human behavior involves not only language, but also facial expressions, body movements, voice tone, and other modalities. Understanding and simulating human behavior therefore requires integrating this multimodal information rather than relying on a single modality. Advancing the field of Multimodal Human Behavior Understanding and Generation (MUG) can deepen our understanding of the multimodal nature of human behavior. Furthermore, multimodal understanding and generation of human behavior can help computer systems better perceive, understand, and respond to human intentions and emotional states, making human-machine interaction more natural and fluid and thereby enhancing the user experience. Hence, this special session aims to profile recent developments in multimodal biometric systems, especially trustworthy multimodal data integration, cognitive and neurological underpinnings, generative modeling of human behavior, and potential in broad real-world applications.
We invite practitioners, researchers, and engineers from biometrics, signal processing, computer vision, and machine learning to contribute their expertise to addressing the highlighted challenges.