A deep understanding of kinematic structures and movable components is essential for enabling robots to manipulate objects and model their own articulated forms. Articulated object models capture this understanding and underpin tasks such as physical simulation, motion planning, and policy learning. However, creating these models, particularly for objects with high degrees of freedom (DoF), remains a significant challenge. Existing methods typically rely on motion sequences or on strong assumptions baked into hand-curated datasets, which hinders scalability. In this paper, we introduce Kinematify, an automated framework that synthesizes articulated objects directly from arbitrary RGB images or textual descriptions. Our method addresses two core challenges: (i) inferring kinematic topologies for high-DoF objects and (ii) estimating joint parameters from static geometry. To achieve this, we combine Monte Carlo Tree Search (MCTS) for structural inference with geometry-driven optimization for joint reasoning, producing physically consistent and functionally valid articulation descriptions. We evaluate Kinematify on diverse inputs from both synthetic and real-world environments, demonstrating improvements in registration and kinematic topology accuracy over prior work.
Overview
A part-aware 3D foundation model first reconstructs a segmented digital twin. Then, the kinematic tree is recovered via Monte Carlo Tree Search (MCTS) driven by rewards for structure, stability, contact, symmetry, and hierarchy. Finally, joint types are predicted by a vision-language model (VLM), and joint parameters are optimized on the parent link's signed distance field (SDF) to enforce contact consistency and avoid collisions.
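To make the final step concrete, the sketch below illustrates the kind of SDF-based objective described above: sweeping a child link's contact points through a range of joint angles and scoring a candidate hinge (axis + pivot) by how well the points stay on the parent surface (|SDF| ≈ 0) without penetrating it (SDF ≥ 0). This is a toy illustration, not the paper's implementation: the sphere SDF, the function names, and the specific cost terms are assumptions chosen for clarity.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    # Toy stand-in for a parent link's SDF: negative inside, zero on the surface.
    return np.linalg.norm(points - center, axis=-1) - radius

def rotate_about_line(points, axis, pivot, angle):
    # Rodrigues rotation of points about the line through `pivot` along `axis`.
    axis = axis / np.linalg.norm(axis)
    p = points - pivot
    c, s = np.cos(angle), np.sin(angle)
    rot = (p * c
           + np.cross(axis, p) * s
           + axis * np.sum(p * axis, axis=-1, keepdims=True) * (1.0 - c))
    return rot + pivot

def joint_cost(axis, pivot, contact_pts, parent_sdf, angles):
    # Sweep the joint through `angles`; penalize loss of contact (|SDF| > 0)
    # and penetration of the parent (SDF < 0).
    cost = 0.0
    for a in angles:
        d = parent_sdf(rotate_about_line(contact_pts, axis, pivot, a))
        cost += np.mean(d ** 2) + np.mean(np.clip(-d, 0.0, None) ** 2)
    return cost
```

In this toy setup, a hinge whose pivot passes through the sphere's center keeps contact points on the surface at every angle (near-zero cost), while an offset pivot drags them off the surface or into the parent, so minimizing this cost over candidate axes and pivots recovers a contact-consistent joint.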
Experiments
Examples of articulated objects generated by Kinematify. Each row shows different objects across a sequence of joint configurations.
Qualitative comparison of articulation recovery on everyday objects across three methods: Kinematify (ours), Articulate Anymesh, and ArtGS. The red line indicates the joint direction.
Citation
Notice: You may not copy, reproduce, distribute, publish, display, perform, modify, create derivative works, transmit, or in any way exploit any such content, nor may you distribute any part of this content over any network, including a local area network, sell or offer it for sale, or use such content to construct any kind of database.