I'm Che-Jui (Jerry) Chang, a PhD student in the Department of Computer Science at Rutgers University. I am advised by Prof. Mubbasir Kapadia in the Intelligent Visual Interfaces Lab.
My first-authored paper "Learning from Synthetic Human Group Activities" has been accepted to CVPR 2024.
The follow-up paper "On the Equivalency, Substitutability, and Flexibility of Synthetic Data" has been accepted to SynData4CV @ CVPR 2024.
Last updated: April 17, 2024
Research
My research interests lie in computer vision, graphics, and multimodal interaction, with a focus on problems involving virtual humans, embodied intelligent agents, and human-agent interaction. My past research includes designing generative models for animating human faces, hand and body gestures, and collective group activities. I have also extensively researched the capabilities of LLMs for conversational avatars, digital storytelling, narrative graphs, and autonomous social agents. Another branch of my research aims to unleash the potential of synthetic data, powered by CG render engines and generative diffusion models, to address data scarcity and improve model generalizability in computer vision tasks, leading to more robust and reliable AI systems.
Contact me at chejui.chang@rutgers.edu, or check out my LinkedIn.
Education
PhD, Computer Science, Rutgers University, 2020-present
MS, Communication Engineering, National Taiwan University, 2016-2018
BS, Physics, National Taiwan University, 2012-2016
Publications
**New** C.-J. Chang, D. Li, S. Moon, M. Kapadia, "On the Equivalency, Substitutability, and Flexibility of Synthetic Data," arXiv preprint (SynData4CV @ CVPR24)
**New** C.-J. Chang, D. Li, D. Patel, P. Goel, H. Zhou, S. Moon, S. S. Sohn, S. Yoon, V. Pavlovic, M. Kapadia, "Learning from Synthetic Human Group Activities," In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024 [Project Page]
C.-J. Chang, S. S. Sohn, S. Zhang, R. Jayashankar, M. Usman, M. Kapadia, "The Importance of Multimodal Emotion Conditioning and Affect Consistency for Embodied Conversational Agents," ACM Conference on Intelligent User Interfaces, 2023
C.-J. Chang, S. Zhang, and M. Kapadia, "The IVI Lab entry to the GENEA Challenge 2022 – A Tacotron2 based method for co-speech gesture generation with locality-constraint attention mechanism," International Conference on Multimodal Interaction, 2022 (Reproducibility Award; Best Appropriateness)
C.-J. Chang, L. Zhao, S. Zhang, and M. Kapadia, "Disentangling Audio Content and Emotion with Adaptive Instance Normalization for Expressive Facial Animation Synthesis," Computer Animation and Virtual Worlds, 2022
C.-J. Chang, "Transfer Learning from Monolingual ASR to Transcription-free Cross-lingual Voice Conversion," arXiv preprint arXiv:2009.14668, 2020
C.-J. Chang and S.-K. Jeng, "Acoustic Anomaly Detection Using Multilayer Neural Networks and Semantic Pointers," Journal of Information Science and Engineering, 2020
C.-J. Chang, "Humanoid Auditory Processing for Acoustic Anomaly Detection," Master's Thesis, National Taiwan University, 2018
Experience
PhD Research Intern @ Roblox, Summer 2023 - Present
Graduate Assistant @ Rutgers University, Summer 2022 - Present
Teaching Assistant, CS439 Introduction to Data Science @ Rutgers University, Spring 2022
Teaching Assistant, CS523 Computer Graphics @ Rutgers University, Fall 2021
Teaching Assistant, CS206 Introduction to Discrete Structures II @ Rutgers University, Summer 2021
Teaching Assistant, CS205 Introduction to Discrete Structures I @ Rutgers University, Spring 2021
AI Scientist, Red Pill Lab, 2019-2020
Machine Learning Engineer Intern @ Aeolus Robotics, 2018
Research Assistant @ Academia Sinica, 2018
Research Assistant @ National Taiwan University, 2016
Referee Experience
Program Committee, International Conference on Multimodal Interaction (ICMI) 2024
Program Committee, International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2022
International Conference on Multimodal Interaction (ICMI) 2023
ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA) 2022
European Association for Computer Graphics (Eurographics) 2022
Association for the Advancement of Artificial Intelligence (AAAI) Conference 2022
IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR) 2021
Conference on Neural Information Processing Systems (NeurIPS) 2021
IEEE International Conference on Data Mining (ICDM) 2021
International Conference on Computer Animation and Social Agents (CASA) 2021
ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA) 2021
IEEE Transactions on Visualization and Computer Graphics (TVCG) 2021
Speaking
"Learning from Synthetic Data for Human-Centered Tasks", at National Taiwan University, 2024.
"The Importance of Multimodal Emotion Conditioning and Affect Consistency for Embodied Conversational Agents", at International Conference on Intelligent User Interfaces, Sydney, 2023.
"The IVI Lab entry to the GENEA Challenge 2022 – A Tacotron2 Based Method for Co-Speech Gesture Generation With Locality-Constraint Attention Mechanism", at International Conference on Multimodal Interaction, 2022.
"Animating Expressive Human Faces: From Parametric 3D Models To Data-Driven Approaches", at Rutgers University, 2022.
"Disentangling Audio Content and Emotion with Adaptive Instance Normalization for Expressive Facial Animation Synthesis", at International Conference on Computer Animation and Social Agents, 2022.
"3D Facial Reconstruction from Monocular Videos and its Applications", at Rutgers University, 2021.
Patent
Method for converting voice into virtual face image (CN112992120A)
Past Projects
LLMs for Digital Storytelling, Narrative Graphs, and Autonomous Social Agents
Real-time Body Tracking for Personalized Avatars
3D Group Activity Generation
Synthetic Data Generation for Human Motions and Group Activities
Evaluation of Emotion Perception for Multimodal ECAs
Audio-Driven Gesture Generation
Emotion-Conditioned Speech-Driven Facial Animation Synthesis
Speech-Driven Lip Synchronization