The Chinese registered company name for Soul App (Soul) is 上海任意门科技有限公司. This is a single Chinese legal entity whose name appears in English in two common forms because no single English version is officially enforced: "Shanghai Anyimen Technology" is a pinyin-style transliteration approximating the sound of 任意门 (Rènyìmén), while "Shanghai AnyDoor Technology Co., Ltd." is a meaning-based translation that renders 任意门 as "Any Door," producing an English name that reads naturally and conveys the concept. As a result, different sources may adopt either form in branding, press coverage, or database entries, even though both refer to the same company.
Soul App (Soul) is operated by Shanghai Anyimen Technology Co., Ltd. In the "digital human" landscape it sits primarily as an avatar-first social platform: interaction through digital identities is the default interface, and the company also productizes and publishes core enabling technology for real-time synthetic characters. Soul positions itself as an AI-native social product in which all users socialize via digital avatars, supported by a self-developed large model, Soul X, and related recommendation systems. Its AI team brand, Soul AI Lab, has open-sourced SoulX-FlashTalk, a real-time digital human generation model described as achieving roughly 0.87 seconds of start-up latency and 32 fps real-time throughput at a 14B parameter scale. These figures align the company with low-latency, long-duration, audio-driven avatar streaming for live interactive use cases rather than offline-rendered "talking head" content. The open release builds on earlier public technical disclosures describing how "visual digital human" techniques are applied inside the Soul product experience to blend digital-human generation with social interaction flows.
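To make the reported figures concrete, the sketch below works out what about 0.87 s of start-up latency and 32 fps sustained throughput imply for a live streaming loop: the per-frame generation budget and the frame count needed to cover a stretch of live audio. All function names here are hypothetical illustrations, not the SoulX-FlashTalk API.

```python
# Illustrative arithmetic only; values are the publicly reported figures.
STARTUP_LATENCY_S = 0.87   # reported time from request to first frame
TARGET_FPS = 32            # reported sustained real-time frame rate

def frame_budget_ms(fps: int) -> float:
    """Per-frame generation budget (ms) needed to sustain the target rate.
    At 32 fps the generator has ~31.25 ms to produce each frame."""
    return 1000.0 / fps

def frames_needed(duration_s: float, fps: int, startup_s: float) -> int:
    """Frames a generator must emit to cover `duration_s` of wall-clock
    time once the initial start-up delay has elapsed (hypothetical model
    of a live, audio-driven stream)."""
    live_seconds = max(0.0, duration_s - startup_s)
    return int(live_seconds * fps)

if __name__ == "__main__":
    print(f"per-frame budget: {frame_budget_ms(TARGET_FPS):.2f} ms")
    print(f"frames for 60 s of live output: "
          f"{frames_needed(60.0 + STARTUP_LATENCY_S, TARGET_FPS, STARTUP_LATENCY_S)}")
```

The point of the arithmetic is the contrast the text draws: an offline "talking head" renderer can spend seconds per frame, while a live interactive stream must hold every frame under roughly 31 ms indefinitely.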
Soul AI Lab is the AI research and engineering group associated with Soul App. It maintains a visible open-source presence, publishing model code and research artifacts in public repositories rather than only product announcements. Its recent work points in two practical directions: long-form, multi-speaker speech generation for podcast-style audio (SoulX-Podcast), and real-time, audio-driven digital human/avatar streaming for interactive use (SoulX-FlashTalk), with the latter positioned around low-latency start-up and real-time frame-rate output for sustained live generation. Across Soul's broader AI disclosures, the lab's output supports a platform strategy that treats internally developed models as infrastructure for "AIGC + social" features such as conversational agents, AI-assisted chatting, and virtual companionship experiences within the Soul ecosystem.