SoulShell is the public-facing brand of Digital Xusheng (Beijing) Technology Co., Ltd. (数字栩生), a Beijing-based company that positions itself as a core-technology provider for digital humans. It emphasizes an end-to-end stack: producing high-fidelity 3D digital-human assets and running them in real time for interactive applications. The company promotes light-field capture and reconstruction together with industrialized pipelines that lower asset production cost while maintaining realism, and pairs these with a real-time AI "digital human brain" intended to support natural interaction in deployment scenarios. Public-facing materials and third-party coverage associate the company with heritage- and performance-oriented reconstructions such as "Digital Mei Lanfang," and partnership case materials present it as integrating its interactive 3D digital-human capabilities with advanced 3D display hardware for exhibition and experience settings.
In Digital Mei Lanfang (数字梅兰芳), Beijing Institute of Technology (北京理工大学) is described as the primary technical contributor on the digital-human side of the collaboration, supplying the imaging, reconstruction, simulation, and real-time system engineering needed to build an interactive “twin digital human” of Mei Lanfang (梅兰芳孪生数字人). The project is framed as a reconstruction problem: instead of capturing a living subject, the goal is to rebuild a historically grounded performance-grade digital human from archival materials and expert reenactment, then make it operable in real time for staged and interactive digital-theatre use.
The asset pipeline attributed to Beijing Institute of Technology (北京理工大学) is described as beginning with the assembly of a large corpus of historical photographs, followed by a physically anchored reference stage: a 1:1 sculpture is produced with Central Academy of Fine Arts (中央美术学院) and then laser-scanned at high precision to recover base geometry and baseline facial expressions. From that scanned foundation, the workflow proceeds through digital sculpting and detailed refinement intended to recover fine anatomical structure and visual fidelity. Descriptions emphasize micro-structure work, such as bone and landmark alignment, pore-level skin detail, hair detail, and eye-highlight behavior, that supports photorealistic close-up viewing.
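The staged workflow described above (photo corpus, physical sculpture, laser scan, digital refinement) can be sketched as an ordered, provenance-tracked sequence. This is a minimal illustration under stated assumptions only: the `Asset` container, the stage functions, and all names are hypothetical, since the project's actual tooling is not public.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Asset:
    """Hypothetical container recording what each pipeline stage contributed."""
    provenance: List[str] = field(default_factory=list)

# Stage names follow the reported workflow; the function bodies are
# placeholders standing in for scanners and sculpting software.
def collect_photo_corpus(a: Asset) -> Asset:
    a.provenance.append("archival photo corpus assembled")
    return a

def scan_reference_sculpture(a: Asset) -> Asset:
    a.provenance.append("1:1 sculpture laser-scanned for base geometry")
    return a

def digital_sculpt(a: Asset) -> Asset:
    a.provenance.append("digital sculpting and anatomical refinement")
    return a

def add_microstructure(a: Asset) -> Asset:
    a.provenance.append("pore-level skin, hair, and eye-highlight detail")
    return a

PIPELINE: List[Callable[[Asset], Asset]] = [
    collect_photo_corpus,
    scan_reference_sculpture,
    digital_sculpt,
    add_microstructure,
]

def run_pipeline() -> Asset:
    """Thread a fresh asset through every stage in order."""
    asset = Asset()
    for stage in PIPELINE:
        asset = stage(asset)
    return asset
```

The point of the sketch is the ordering: the physical sculpture and its scan come before any digital sculpting, so all later refinement is anchored to measured geometry rather than to photographs alone.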
Rendering and wardrobe work is described in similarly production-oriented terms. The reported emphasis is photorealistic skin-texture rendering combined with high-realism costume construction, where garments are built using real-world tailoring logic and simulated fabric materials, with parameters referenced to period textile samples so the final look reads as historically plausible rather than merely stylized. This positions the digital human as suitable for theatre and cultural presentation, where costume and surface realism are as important as facial likeness.
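The notion of simulated fabrics whose parameters are referenced to period textile samples can be illustrated with a small material-preset sketch. The class, the field names, and every numeric value below are hypothetical placeholders chosen for illustration; they are not the project's real solver parameters.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FabricMaterial:
    """Illustrative cloth-simulation material preset. Field names mirror
    parameters common to garment solvers, not the project's actual tooling."""
    name: str
    density_g_m2: float       # areal density, grams per square metre
    stretch_stiffness: float  # resistance to in-plane stretching
    bend_stiffness: float     # resistance to folding and creasing
    friction: float           # surface friction against skin or other cloth

# Hypothetical presets "referenced to period textile samples", as the
# text describes; a light silk drapes and creases far more readily than
# a heavy embroidered brocade, which the parameters encode.
PERIOD_SILK = FabricMaterial(
    name="period_silk", density_g_m2=60.0,
    stretch_stiffness=0.9, bend_stiffness=0.05, friction=0.3,
)
PERIOD_BROCADE = FabricMaterial(
    name="period_brocade", density_g_m2=220.0,
    stretch_stiffness=1.0, bend_stiffness=0.4, friction=0.5,
)
```

Building garments with real-world tailoring logic and then assigning measured material presets like these is what lets the costume read as historically plausible in motion, not only in a still frame.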
For performance, the project description ties Beijing Institute of Technology (北京理工大学) to the end-to-end “driving” pipeline that makes the reconstructed model behave like a stage performer rather than a static museum figure. Action acquisition is described as being supported by Central Academy of Drama (中央戏剧学院) performers who reproduce Mei-style movement as reference, after which a role-generation/driving process is iteratively tuned against archival performance materials to align full-body motion, gesture cadence, and fine facial and behavioral cues. The stated goal is not only accurate motion but also faithful performance semantics, so the digital human can deliver recognizable stage language under real-time control.
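The reported tune-against-archives process can be caricatured as an iterative loop that nudges a driving parameter until the generated motion matches an archival reference. This is a deliberately toy sketch: a real system would compare full-body motion features against archival footage, whereas the scalar `score` function, target, and step size here are purely illustrative.

```python
from typing import Callable

def tune_driving_parameter(
    initial: float,
    archival_target: float,
    score: Callable[[float], float],
    steps: int = 100,
    step_size: float = 0.1,
) -> float:
    """Iteratively adjust one driving parameter (e.g. a gesture-cadence
    scale) so that the motion's measured score approaches the value
    extracted from archival performance material."""
    x = initial
    for _ in range(steps):
        residual = score(x) - archival_target  # how far off the archive we are
        x -= step_size * residual              # nudge the parameter toward it
    return x
```

In the identity case `score(x) = x`, the loop converges geometrically toward the archival target, which is the essence of the described iterate-and-compare refinement, scaled down to one dimension.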
Voice is described as a further reconstruction-and-upgrade layer: because archival recordings are low fidelity, the reported approach is to record high-quality speech data from skilled human imitators and bind it to the digital human so it can speak in a way usable for modern interactive presentation. Deployment goals are described as real-time, immersive, and interactive experiences, including VR-oriented presentations, in which audiences can engage the digital human directly. Tencent (腾讯) is mentioned as providing partial technical support, and Chinese Academy of Sciences Institute of Automation (中国科学院自动化研究所) is referenced as an additional research collaborator in the broader technical ecosystem.
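The bind-a-recorded-voice-to-the-character idea can be sketched as a small interface: a voice profile standing in for a model trained on imitator recordings, attached to an interactive character that routes its speech through that profile. The class names, fields, and returned metadata are all hypothetical; no part of this reflects the project's actual voice stack.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class VoiceProfile:
    """Hypothetical stand-in for a voice model trained on high-quality
    recordings of skilled imitators (the archival audio being too degraded
    to train on directly, as the text reports)."""
    speaker_id: str
    sample_rate_hz: int

class InteractiveDigitalHuman:
    """Binds one voice profile to the character so every utterance is
    rendered with that reconstructed voice."""

    def __init__(self, voice: VoiceProfile) -> None:
        self.voice = voice

    def speak(self, text: str) -> Dict[str, object]:
        # A real system would synthesize audio for real-time playback;
        # here we return metadata describing what would be rendered.
        return {
            "speaker": self.voice.speaker_id,
            "sample_rate": self.voice.sample_rate_hz,
            "text": text,
        }
```

The design point the sketch captures is the binding itself: the voice is a fixed, pre-reconstructed asset of the character, so interactive dialogue generated at runtime is always delivered in the same reconstructed voice.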