Across China-related 2023 scholarship, “digital humans” and closely adjacent virtual beings are treated as an end-to-end sociotechnical stack that runs from controllable visual embodiment to audience-facing interaction and, finally, to measurement and governance questions. On the embodiment side, multiple works concentrate on making faces and bodies move plausibly and stably under real-time constraints, emphasizing speech-driven facial motion, the synthesis of time-varying human imagery, and practical driving pipelines that can be integrated into contemporary interactive media production; taken together, these studies reflect a continued prioritization of expressive fidelity, temporal coherence, and portability across characters and deployment settings.

A second cluster shifts from animation mechanics to interactive effectiveness, asking how conversational or interactive digital humans function in cultural communication contexts and what conditions shape perceived engagement, comprehension, and persuasion when audiences encounter such agents in mediated settings.

A third line of work treats virtual performers and streamers as a distinct but adjacent ecosystem, examining fandom as both consumption and co-creation, the experiential dimensions of intimacy and parasocial bonding in East Asian contexts, and the way pandemic-era conditions accelerated platformed cultural production in China, thereby linking technical affordances to new cultural routines and labor arrangements.

Complementing these perspectives, bibliometric mapping efforts frame the field’s evolution as a rapid expansion with identifiable thematic migration from core rendering and animation problems toward application domains, evaluation practices, and multidisciplinary human-factors inquiry, while analytics-oriented research on digital personality signals a push to formalize individual differences for segmentation, personalization, and interpretation.
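To make the embodiment cluster concrete, the following is a minimal, illustrative sketch (not drawn from any cited paper) of the kind of speech-driven pipeline these works address: per-frame activations derived from audio are mapped onto a character-agnostic set of blendshape weights, which supports portability across rigs, and the weight trajectories are smoothed over time for temporal coherence. All names here (the blendshape targets, the smoothing function) are hypothetical stand-ins, not the cited authors' methods.

```python
from typing import List

# Hypothetical character-agnostic blendshape targets; any rig exposing
# weights under these names could be driven by the same trajectory.
BLENDSHAPES = ["jawOpen", "mouthPucker", "mouthSmile"]

def smooth_weights(frames: List[List[float]], alpha: float = 0.6) -> List[List[float]]:
    """Exponential smoothing across frames to damp jitter in driven motion.

    Each frame is a weight vector aligned with BLENDSHAPES; alpha trades
    responsiveness (high alpha) against temporal coherence (low alpha).
    """
    smoothed = [frames[0][:]]
    for frame in frames[1:]:
        prev = smoothed[-1]
        smoothed.append([alpha * w + (1 - alpha) * p for w, p in zip(frame, prev)])
    return smoothed

# Raw per-frame weights, as a speech-to-animation model might emit them:
raw = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.0], [0.0, 0.1, 0.7]]
driven = smooth_weights(raw)
```

The design point this sketch isolates is the one the surveyed studies share: by expressing motion as weights over a fixed blendshape basis rather than vertex positions, the same speech-driven signal can be migrated across characters and deployment settings.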
Chen, J.; Ma, X.; Wang, L.; Cheng, J. (2023). Blendshape-Based Migratable Speech-Driven 3D Facial Animation with Overlapping Chunking-Transformer. Chinese Conference on Pattern ….
Chen, S.; Zhang, D.; Shi, W.; Ding, X.; Chang, L. (2023). Exploring the Efficacy of Interactive Digital Humans in Cultural Communication. International Forum on Digital ….
Lee, S.; Lee, J. (2023). “Ju. T'aime” my idol, my streamer: A case study on fandom experience as audiences and creators of VTuber concert. IEEE Access.
Liberati, N.; Chen, J. J. (2023). Augmented Galatea for Physical Pygmalion: A Phenomenological Approach to Intimacy in VTubers in the East Asia Region. Augmented reality and artificial intelligence: The ….
Liu, R. (2023). What Affect VTuber Audience Behavior?: Empirical Evidence from China (Doctoral dissertation, Waseda University).
Qi, Y.; Sun, Y. (2023). Visualization and Bibliometric Analysis of Research Evolution on Digital Human. Proceedings of the ….
Regis, R.; Tavares, V. P. (2023). VTubers and pandemic in China: a new dimension of technological cultural production. Revista ….
Shen, S. Y.; Zhang, W. (2023). A method for synthesizing dynamic image of virtual human. 2023 3rd International Conference on ….
Usami, Y.; Kitaoka, K.; Shindo, K. (2023). Integrated Artificial Intelligence for Making Digital Human. Proceedings of the ….
Wang, P.; Lu, Z. (2023). Let’s play together through channels: Understanding the practices and experience of Danmaku participation game players in China. Proceedings of the ACM on Human-Computer Interaction, 7(CHI PLAY), 1025-1043.
Wang, T.; Ye, P.; Lv, H.; Gong, W.; Lu, H. (2023). Modeling digital personality: A fuzzy-logic-based Myers–Briggs Type Indicator for fine-grained analytics of digital human. IEEE Transactions on ….
Wei, L.; Wang, Y.; Li, D. (2023). Research on Speech-based Digital Human Driving System Based on Unreal Engine. 2023 IEEE International Symposium on ….