ByteDance (字节跳动) is actively developing a broad range of digital human technologies across its platforms, particularly Douyin, PICO, and Volcengine. Its initiatives span AI-generated avatars, video synthesis, lip-syncing, and 3D modeling, through innovations such as OmniHuman and LatentSync. ByteDance has open-sourced several tools, including LatentSync 1.5 and 1.6, for precise audio-driven lip synchronization and personalized avatar creation. These digital humans are used in livestreaming, content creation, marketing, and AI-powered dialogue agents. ByteDance collaborates with academic institutions such as Beijing Jiaotong University and HKU, and integrates its digital humans into products such as CapCut and Jimeng AI. The company also supports private deployment, enabling businesses and developers to create and operate their own digital human systems at scale, with some projects producing over 1,000 digital avatars.
2026
2026-01-23: TikTok announced that a TikTok US data security joint venture had been established; ByteDance remains the largest single shareholder.
2026-01-23: ByteDance raised its share option price to USD 200.41 per share.
2026-01-21: ByteDance announced it will establish Qiaomu Media Technology Company in Beijing.
2026-01-21: 21st Century Business Herald reported that Zhejiang Wentou Interconnect and ByteDance subsidiary Ocean Engine co-built the “派智” animated digital human, with 2025 cumulative spend exceeding 250 million yuan and fivefold year-on-year growth, marking the maturation of its marketing-scene commercialization.
2026-01-19: ByteDance’s latest large models were used to support Douyin’s digital human, intelligent customer service, and other AI features.
2026-01-07: ByteDance responded to “car-making rumors,” stating the claims were false and that it has no plans to build cars.
2025
2025-12-29: A company built a localized service network and listed ByteDance among its clients; during interactive demos, audiences watched the Loki Emotion engine drive a digital human model in real time.
2025-12-22: An AI digital human video-generation tool in the ByteDance ecosystem focused on “one sentence to generate a marketing video” for e-commerce sellers, local merchants, knowledge creators, and enterprise promotion, with zero shooting and zero editing.
2025-12-17: ByteDance’s open-source digital human project LatentSync was updated to version 1.6.
2025-12-15: ByteDance promoted a unified multimodal generative model supporting interwoven and simultaneous multimodal generation (e.g., digital humans) and invited people to join ByteDance Seed.
2025-11-03: ByteDance and Zhejiang University jointly open-sourced MimicTalk, a project aimed at quickly creating 3D digital human heads.
2025-10-13: ByteDance-owned CapCut was described as centered on editing, with the digital human feature positioned as one of its functions.
2025-10-10: ByteDance’s open-sourced LatentSync 1.5 was described as significantly improving digital human technology through optimized algorithms and training datasets.
2025-10-01: ByteDance’s OmniHuman 1.5 was described as an advanced AI video model that can create digital human avatar videos from a single image and audio via Seedance on CapCut Web.
2025-09-24: ByteDance’s digital human team was described as having launched OmniHuman-1.5.
2025-09-12: ByteDance’s Intelligent Creation Lab was described as having produced OmniHuman-1.5.
2025-09-10: ByteDance’s OmniHuman-1.5 was described as a major breakthrough in AI video generation, using single-image and audio inputs to improve realism and emotional expression.
2025-09-06: ByteDance’s digital human team launched OmniHuman-1.5 and proposed a new virtual human generation framework giving virtual humans “thinking” and “expression” abilities.
2025-09-04: ByteDance’s commercialization GenAI team and Zhejiang University jointly launched InfinityHuman, a commercial-grade long-sequence audio-driven human video generation model for long-video scenarios.
2025-08-28: ByteDance’s OmniHuman-1.5 was described as injecting “active mind” into virtual humans.
2025-08-17: ByteDance’s team was described as supporting multiple business lines including Douyin, CapCut, Jimeng, Doubao, and commercialization, focusing on image/video generation, intelligent editing, and digital humans.
2025-07-22: ByteDance subsidiary Volcano Engine was reported to be closed-testing its next-generation digital human platform “Chimera,” built by ByteDance’s Intelligent Creation Digital Human team on Volcano Engine’s AI large-model technology; it provides digital human image generation, outfit swapping, and video translation, with future pricing planned per API call or per video.
2025-07-21: Volcano Engine’s “Chimera,” an enterprise virtual digital human solution relying on ByteDance’s self-developed large models and audio/video technology, was reported to be in invitation-only closed testing, temporarily free, with public testing possibly opening at month-end.
2025-07-20: ByteDance released the video model “悟空➕” (Wukong+), described as “the strongest digital human.”
2025-06-05: ByteDance’s Volcano Engine team customized augmented-reality effects for a festival, and the first intangible cultural heritage digital human “非非” was described as relying on Volcano Engine’s Doubao large model.
2025-06-04: LatentSync 1.5 was described as an end-to-end lip-sync framework jointly open-sourced by ByteDance and Beijing Jiaotong University.
2025-05-31: ByteDance launched Xiaoyunque AI, described as offering one-click generation of digital human videos and design images.
2025-05-12: Jimeng Digital Human was described as a ByteDance AI-based digital human generation tool that creates realistic dynamic digital human videos from a photo and audio.
2025-05-11: ByteDance described digital human generation capabilities including high-precision lip-sync and micro-expression control, supported by its self-developed algorithm system.
2025-04-09: ByteDance launched the OmniHuman-1 digital human model, described as generating digital humans from a single photo and an audio clip.
2025-03-10: Jimeng AI’s digital human feature was described as launching “master mode,” driven by ByteDance’s self-developed OmniHuman-1 model and requiring only an image and an audio clip.
2025-03-09: Jimeng AI “action imitation” was described as provided by ByteDance’s Intelligent Creation Digital Human team.
2025-03-07: Jimeng AI’s digital human feature was described as officially launching “master mode,” driven by ByteDance’s self-developed OmniHuman-1 model.
2025-03-02: OmniHuman-1 was described as a ByteDance self-developed multimodal video generation AI model that generates videos from a single image and an audio track.
2025-02-28: Shenzhen Technology University’s digital human “润晓知” was described as jointly built with ByteDance subsidiary Volcano Engine.
2025-02-10: HKU and ByteDance jointly released Goku (悟空), a video generative foundation model described as generating high-quality video from text and images and directly producing virtual digital human interactive content.
2025-02-09: OmniHuman was described as a ByteDance self-developed closed-source model that will go live on Jimeng, supporting portrait/half-body formats.
2025-02-07: OmniHuman was described as a ByteDance self-developed closed-source model and will go live on Jimeng.
2025-02-06: ByteDance’s digital human team launched the OmniHuman multimodal digital human solution, described as generating videos from a single image plus input audio.
2025-02-06: ByteDance’s Intelligent Creation Digital Human team was described as part of ByteDance’s AI & multimedia technology platform, supporting internal products such as Douyin, CapCut, and Toutiao.
2025-02-05: ByteDance released OmniHuman, described as turning static images into expressive, performance-capable outputs; Jimeng OmniHuman-1 was described as a ByteDance AI digital human product.
2025-02-03: ByteDance’s research team released the OmniHuman-1 human animation generation framework.
2025-01-25: Coze was described as a ByteDance new-generation one-stop AI Bot development platform.