Kunlun Tech (昆仑万维) is positioned in the digital-human space primarily through its SkyReels product line, repeatedly described as a sequence of open-sourced and platformized models for generating and controlling human-like talking avatars and "digital human" video. SkyReels-A1 is presented as an early controllable digital-human generation model aimed at high-fidelity portrait animation with explicit expression and motion control using video-diffusion methods. SkyReels-A2 is described as adding a controllable video-generation framework. SkyReels-A3 is characterized as an audio-driven digital-human model built on a DiT (Diffusion Transformer) video-diffusion architecture: it can animate a still portrait or a reference image/video so the person speaks with stronger lip-sync, greater facial and subject stability, and more natural motion, and it explicitly targets practical workflows such as digital-human livestream commerce and other speech-driven avatar content. Later references frame SkyReels as a broader multimodal creation platform in which "talking avatar" (audio-driven virtual character) output is treated as a core module alongside image-to-video generation, video extension, and related creation features, with some claims extending to single-shot multi-person, multi-turn dialogue scenes and improved audio–video alignment for interactive or instructional content.