"CASIM: Composite Aware Sementic Injection for Text to Motion Generation," ArXiv 2025 [Project Page]
C. -J. Chang*, Q. T. Liu*, H. Zhou, V. Pavlovic, M. Kapadia
"From Words to Worlds: Transforming One-line Prompt into Immersive Multi-modal Digital Stories with Communicative LLM Agent," MIG 2024
S. S. Sohn, D. Li, S. Zhang, C. -J. Chang, M. Kapadia
"BattleAgent: Multi-modal Dynamic Emulation on Historical Battles to Complement Historical Analysis," EMNLP 2024
S. Lin, W. Hua, L. Li, C. -J. Chang, L. Fan, J. Ji, H. Hua, M. Jin, J. Luo, Y. Zhang
"Learning from Synthetic Human Group Activities," CVPR 2024 [Project Page]
C. -J. Chang, D. Li, D. Patel, P. Goel, H. Zhou, S. Moon, S. S. Sohn, S. Yoon, V. Pavlovic, M. Kapadia
"On the Equivalency, Substitutability, and Flexibility of Synthetic Data," SynData4CV @ CVPR24
C. -J. Chang, D. Li, S. Moon, M. Kapadia
"The Importance of Multimodal Emotion Conditioning and Affect Consistency for Embodied Conversational Agents," IUI 2023
C. -J. Chang, S. S. Sohn, S. Zhang, R. Jayashankar, M. Usman, M. Kapadia
"The IVI Lab entry to the GENEA Challenge 2022–A Tacotron2 based method for co-speech gesture generation with locality-constraint attention mechanism," ICMI 2022 (Reproducibility Award, Best Appropriateness!!)
C. -J. Chang, S. Zhang, and M. Kapadia
"Disentangling Audio Content and Emotion with Adaptive Instance Normalization for Expressive Facial Animation Synthesis," CASA/CAVW 2022
C. -J. Chang, L. Zhao, S. Zhang, and M. Kapadia
"Transfer Learning from Monolingual ASR to Transcription-free Cross-lingual Voice Conversion," ArXiv 2020
C. -J. Chang
"Acoustic Anomaly Detection Using Multilayer Neural Networks and Semantic Pointers," JISE 2020
C. -J. Chang and S. -K. Jeng