Human behavior involves not only language, but also facial expressions, body movements, voice tone, and other modalities. Understanding and simulating human behavior therefore requires the integration of multimodal information rather than reliance on a single modality. Advancing the field of Multimodal Human Behavior Understanding and Generation (MUG) can deepen our understanding of the multimodal nature of human behavior. Furthermore, multimodal understanding and generation of human behavior can help computer systems better perceive, understand, and respond to human intentions and emotional states, making human-machine interaction more natural and smooth and thereby enhancing user experience. Hence, this special session aims to profile recent developments in multimodal biometric systems, especially trustworthy multimodal data integration, cognitive and neurological underpinnings, generative modeling of human behavior, and potential in broad real-world applications.
We invite practitioners, researchers, and engineers from biometrics, signal processing, computer vision, and machine learning to contribute their expertise to address the highlighted challenges.
Organizing Committee:
Zitong Yu, Great Bay University, China (yuzitong@gbu.edu.cn)
Siyang Song, University of Leicester, UK (ss1535@leicester.ac.uk)
Weicheng Xie, Shenzhen University, China (wcxie@szu.edu.cn)
Xin Liu, Lappeenranta-Lahti University of Technology, Finland (xin.liu@lut.fi)
Linlin Shen, Shenzhen University, China (llshen@szu.edu.cn)
Important Dates:
Full Paper Submission: July 10, 2024 (extended from July 3, 2024), 23:59:59 PDT
Acceptance Notice: July 24, 2024, 23:59:59 PDT
Camera-Ready Paper: TBD
Topics of Interest:
The Special Session will focus on all aspects of multimodal human behavior spotting, recognition, and generation. More specifically, the committee encourages the submission of papers making fundamental or practical contributions to Multimodal Understanding and Generation in connection with various biometric topics, including but not limited to:
Foundation models for understanding and generation in broad biometrics (e.g., face, fingerprint, iris, palm print, gait, speech, and bio-signals)
Unified understanding and generation in broad biometrics (e.g., face, fingerprint, iris, palm print, gait, speech, and bio-signals)
Novel methodologies on multimodal biometric security such as multimodal face spoofing and forgery detection
New synthesis models for audio/video driven digital human generation (e.g., face reaction generation)
Adversarial attack and defense in multimodal biometrics
Theoretical analysis of robustness, generalization, and interpretability in multimodal biometrics
Learning with fewer labels in multimodal biometrics
Subtle emotional human behavior analysis (e.g., deception detection) from multimodal cues (e.g., micro-expressions and micro-gestures)
Submission Guidelines:
Papers presented at MUG-2024 will be published as part of IJCB 2024 and should therefore follow the same guidelines as the main conference.
Submit your papers at: https://cmt3.research.microsoft.com/MUG2024
The LaTeX/Word templates for paper submission can be found on the Paper Submission page.
Page limit: A paper can be up to 8 pages including figures and tables, plus an unlimited number of additional pages for references only.
Supplementary: Authors may submit one supplementary file in doc, pdf, or zip format. Its size must not exceed 100 MB (the maximum allowed by CMT).
Papers will undergo double-blind peer review by at least three reviewers. Please remove author names, affiliations, email addresses, and similar identifying information from the paper, including personal acknowledgments.