Robust keyframe selection for motion capture editing using adaptive exponent-based optimization
Yeong-Seok Kim* Jong-Hyun Kim*
(* : Inha University)
IEEE Access 2026
Abstract: Motion capture data play a central role in animation and virtual character production; however, their high frame density incurs substantial costs in editing and refinement. Conventional dynamic programming (DP)-based keyframe selection methods have been widely used to minimize reconstruction error, but they treat errors in all temporal segments with uniform sensitivity, which limits their ability to reflect the local importance and nonlinear characteristics of motion. As a result, excessively large keyframe gaps may occur in quasi-static segments, and perceptually salient distortions can accumulate in segments with strong rotations or abrupt pose changes, degrading editing robustness and visual quality. In this paper, we propose an adaptive exponent-based keyframe selection framework that accounts for local motion complexity. Our method quantifies frame-wise nonlinearity using a sliding window and adaptively reweights the sensitivity of the reconstruction error via an exponent function, thereby prioritizing error suppression in perceptually important segments. We further incorporate a DP-based optimization with a maximum keyframe-gap constraint to ensure editing-friendly temporal continuity even in low-complexity or near-static segments. Experiments on diverse motion capture sequences (including rotations, contact transitions, direction changes, and impact-driven motions) demonstrate that the proposed approach produces more perceptually consistent reconstructions than conventional error-based methods while yielding stable and well-balanced keyframe distributions. These results suggest that our framework can serve as a practical keyframe selection strategy for motion editing, retargeting, and real-time preview applications.
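The pipeline summarized in the abstract (sliding-window complexity, exponent-based error reweighting, and gap-constrained DP) can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: the complexity measure (deviation from a linear fit within the window), the linear mapping of complexity to an exponent in `[p_min, p_max]`, and the linear-interpolation reconstruction cost are all assumptions made for this sketch.

```python
import numpy as np

def local_complexity(motion, window=5):
    """Per-frame nonlinearity score in [0, 1]: mean deviation of each
    sliding window from the line joining its endpoints.
    (Hypothetical measure; the paper's exact metric is not given here.)"""
    n = len(motion)
    c = np.zeros(n)
    for t in range(n):
        lo, hi = max(0, t - window), min(n, t + window + 1)
        seg = motion[lo:hi]
        lin = np.linspace(seg[0], seg[-1], len(seg))  # straight-line reference
        c[t] = np.abs(seg - lin).mean()
    if c.max() > 0:
        c /= c.max()
    return c

def segment_cost(motion, i, j, comp, p_min=1.0, p_max=2.0):
    """Reconstruction error of frames i..j when only i and j are kept and
    the interior is linearly interpolated; each frame's error is raised to a
    complexity-driven exponent so complex segments are penalized harder."""
    seg = motion[i:j + 1]
    lin = np.linspace(seg[0], seg[-1], len(seg))
    err = np.abs(seg - lin)
    p = p_min + (p_max - p_min) * comp[i:j + 1]
    p = p.reshape(p.shape + (1,) * (err.ndim - 1))  # broadcast over DoFs if 2-D
    return float((err ** p).sum())

def select_keyframes(motion, max_gap=10, window=5):
    """DP over frames: dp[j] is the minimal cost of reconstructing frames
    0..j with frame j a keyframe, subject to consecutive keyframes being
    at most max_gap frames apart."""
    n = len(motion)
    comp = local_complexity(motion, window)
    dp = np.full(n, np.inf)
    prev = np.full(n, -1, dtype=int)
    dp[0] = 0.0
    for j in range(1, n):
        for i in range(max(0, j - max_gap), j):  # gap constraint: j - i <= max_gap
            c = dp[i] + segment_cost(motion, i, j, comp)
            if c < dp[j]:
                dp[j], prev[j] = c, i
    keys, j = [], n - 1  # backtrack the optimal keyframe chain
    while j >= 0:
        keys.append(j)
        j = prev[j]
    return keys[::-1]
```

Run on a 1-D signal with a linear (quasi-static) half followed by a curved half, the DP tends to place sparse keyframes in the linear part (bounded by `max_gap`) and denser ones where the adaptive exponent inflates the cost of curvature.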