Neural Kinematic Networks

for Unsupervised Motion Retargeting

Ruben Villegas, Jimei Yang, Duygu Ceylan and Honglak Lee

CVPR 2018 (Oral Presentation)


We propose a recurrent neural network architecture with a Forward Kinematics layer and a cycle-consistency-based adversarial training objective for unsupervised motion retargeting. Our network captures the high-level properties of an input motion via the Forward Kinematics layer and adapts them to a target character with different skeleton bone lengths (e.g., shorter or longer arms). Because collecting paired motion training sequences from different characters is expensive, our network instead uses cycle consistency to learn to solve the Inverse Kinematics problem in an unsupervised manner. Our method works online, i.e., it adapts the motion sequence on-the-fly as new frames are received. In our experiments, we use Mixamo animation data to test our method on a variety of motions and characters, and we achieve state-of-the-art results. We also demonstrate motion retargeting from monocular human videos to 3D characters using an off-the-shelf 3D pose estimator.
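To illustrate the core idea of the Forward Kinematics layer, here is a minimal NumPy sketch of differentiable forward kinematics over a kinematic chain: local joint rotations (as quaternions) and bone offsets are composed along the skeleton hierarchy to produce world-space joint positions. All names (`fk_chain`, `quat_rotate`, etc.) and the data layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q (via q * v * q^-1)."""
    qv = np.concatenate([[0.0], v])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

def fk_chain(rotations, offsets, parents):
    """Forward kinematics: local rotations + bone offsets -> world positions.

    rotations: (J, 4) unit quaternions, each local to the joint's parent.
    offsets:   (J, 3) bone vectors in the parent's frame (norms = bone lengths).
    parents:   list of parent indices; parents[0] == -1 marks the root.
    """
    J = len(parents)
    world_q = np.zeros((J, 4))
    world_p = np.zeros((J, 3))
    for j in range(J):
        if parents[j] < 0:          # root joint: its offset is the global position
            world_q[j] = rotations[j]
            world_p[j] = offsets[j]
        else:                       # accumulate rotation, then place the bone
            p = parents[j]
            world_q[j] = quat_mul(world_q[p], rotations[j])
            world_p[j] = world_p[p] + quat_rotate(world_q[p], offsets[j])
    return world_p
```

Because every operation is smooth in the rotation parameters, gradients can flow from a loss on joint positions back to the predicted rotations, which is what lets a network learn Inverse Kinematics indirectly: it only ever outputs rotations, while supervision (here, the cycle-consistency and adversarial objectives) is applied after the FK layer. Retargeting to a new character then amounts to swapping in that character's `offsets` while keeping the predicted rotations.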

Demo (Baseline: Copy quaternions and velocities):


Online motion retargeting visualization for Mixamo-to-Mixamo and Human 3.6M-to-Mixamo. (The sign of the global motion is flipped when the input and retargeted motions move in opposite directions.)

Motion retargeting from Mixamo to Mixamo

Motion retargeting from pose estimates on Human 3.6M to Mixamo characters

(the network was never trained on pose estimates or skeletons from Human 3.6M)

Human 3.6M Denoising

(Green: input; Red: our result)