Unsupervised Monocular 3D Keypoint Discovery
from Multi-View Diffusion Priors
Subin Jeon, In Cho, Junyoung Hong, Seon Joo Kim
Yonsei University
This paper introduces KeyDiff3D, a framework for unsupervised monocular 3D keypoint estimation that accurately predicts 3D keypoints from a single image. While previous methods rely on manual annotations or calibrated multi-view images, both of which are expensive to collect, our method enables monocular 3D keypoint estimation using only a collection of single-view images. To achieve this, we leverage the powerful geometric priors embedded in a pretrained multi-view diffusion model. In our framework, this model generates multi-view images from a single input image, serving as a supervision signal that provides 3D geometric cues to our model. We also use the diffusion model as a powerful 2D multi-view feature extractor and construct 3D feature volumes from its intermediate representations, transforming the implicit 3D priors learned by the diffusion model into explicit 3D features. Beyond accurate keypoint estimation, we further introduce a pipeline that enables manipulation of 3D objects generated by the diffusion model. Experiments on diverse datasets, including Human3.6M, Stanford Dogs, and several in-the-wild and out-of-domain datasets, highlight the effectiveness of our method in terms of accuracy, generalization, and its ability to manipulate 3D objects generated by the diffusion model from a single image.
KeyDiff3D enables 3D keypoint prediction and object manipulation from a single image using multi-view diffusion priors.
It generalizes effectively to in-the-wild and out-of-domain scenarios across diverse categories, including both humans and animals.
Our predicted 3D keypoints enable manipulation of 3D objects reconstructed from a single image.
Diffusion Feature Aggregation: Given a single input image, we extract intermediate multi-view features from a pretrained diffusion model and aggregate them into a compact, geometry-aware feature representation.
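For illustration, a minimal PyTorch sketch of this aggregation step is given below. The hooked feature source, channel sizes, and the 1x1-convolution projection are assumptions made for the sketch, not the exact architecture used in KeyDiff3D.

import torch
import torch.nn as nn

class FeatureAggregator(nn.Module):
    """Compress per-view intermediate diffusion features into a compact map."""
    def __init__(self, in_channels: int = 1280, out_channels: int = 64):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1),
            nn.GroupNorm(8, out_channels),
            nn.SiLU(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (V, C, H, W) -- one intermediate feature map per generated view.
        return self.proj(feats)  # (V, out_channels, H, W)

# Placeholder for features hooked from a pretrained multi-view diffusion model
# (e.g. one UNet decoder block per generated view); the shapes are assumptions.
num_views, C, H, W = 6, 1280, 32, 32
diffusion_feats = torch.randn(num_views, C, H, W)
compact_feats = FeatureAggregator()(diffusion_feats)
print(compact_feats.shape)  # torch.Size([6, 64, 32, 32])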
3D Keypoint Extraction: These multi-view features are lifted into a 3D volumetric space via unprojection, and a 3D convolutional network predicts per-keypoint heatmaps to estimate the final 3D coordinates.
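The sketch below illustrates this lifting step in PyTorch: per-view features are sampled at projected voxel centers, fused by averaging across views, and a small 3D convolutional head predicts per-keypoint heatmaps whose soft-argmax gives the 3D coordinates. The grid resolution, camera handling, fusion rule, and keypoint count are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def unproject_to_volume(feats, proj_mats, grid):
    # feats:     (V, C, H, W)  per-view 2D feature maps
    # proj_mats: (V, 3, 4)     projection matrices of the generated views
    # grid:      (D, D, D, 3)  voxel centers in world coordinates
    V, C, H, W = feats.shape
    D = grid.shape[0]
    pts = torch.cat([grid.reshape(-1, 3), grid.new_ones(D ** 3, 1)], dim=1)  # (N, 4)
    volume = feats.new_zeros(V, C, D ** 3)
    for v in range(V):
        uvw = pts @ proj_mats[v].T                    # (N, 3) homogeneous pixels
        uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)  # (N, 2) pixel coordinates
        # Normalize to [-1, 1] for grid_sample and bilinearly sample features.
        uv_norm = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=1) * 2 - 1
        sampled = F.grid_sample(feats[v:v + 1], uv_norm.view(1, 1, -1, 2),
                                align_corners=True)   # (1, C, 1, N)
        volume[v] = sampled.view(C, -1)
    return volume.mean(dim=0).view(C, D, D, D)        # fuse views by averaging

class KeypointHead(nn.Module):
    """3D conv head: feature volume -> per-keypoint heatmaps -> soft-argmax coords."""
    def __init__(self, in_channels=64, num_keypoints=17, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, num_keypoints, 1),
        )
        axis = torch.linspace(-1, 1, dim)
        self.register_buffer("grid", torch.stack(
            torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1))  # (D, D, D, 3)

    def forward(self, volume):                        # volume: (1, C, D, D, D)
        heat = self.net(volume)                       # (1, K, D, D, D)
        prob = heat.flatten(2).softmax(dim=-1)        # (1, K, D^3)
        coords = prob @ self.grid.reshape(-1, 3)      # (1, K, 3) expected positions
        return coords

# Illustrative shapes: 6 generated views, 64-channel features, a 32^3 voxel grid.
feats = torch.randn(6, 64, 32, 32)
proj_mats = torch.randn(6, 3, 4)
axis = torch.linspace(-1, 1, 32)
grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
volume = unproject_to_volume(feats, proj_mats, grid).unsqueeze(0)  # (1, 64, 32, 32, 32)
coords = KeypointHead()(volume)                                    # (1, 17, 3)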
Self-supervised Training: The predicted 3D keypoints are projected onto generated views to construct structural edge maps, which are used as inputs to a reconstruction network trained with self-supervised losses, enabling keypoint learning without ground-truth 3D annotations.
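A hedged sketch of this training signal in PyTorch: predicted 3D keypoints are projected into each generated view, rendered as a differentiable edge map along assumed skeleton connections, and passed through a small reconstruction network trained with a photometric loss. The edge rendering, the reconstruction network, and the L2 loss here are illustrative stand-ins, not the exact losses used in the paper.

import torch
import torch.nn as nn

def project_keypoints(kpts3d, proj_mat):
    # kpts3d: (K, 3), proj_mat: (3, 4) -> (K, 2) pixel coordinates
    homo = torch.cat([kpts3d, kpts3d.new_ones(kpts3d.shape[0], 1)], dim=1)
    uvw = homo @ proj_mat.T
    return uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)

def edge_map(kpts2d, edges, size=64, sigma=2.0):
    # Differentiable "structural edge map": a Gaussian drawn along each
    # keypoint-to-keypoint segment, accumulated into one (1, size, size) image.
    ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys], dim=-1).reshape(-1, 2)         # (P, 2)
    canvas = torch.zeros(size * size)
    for i, j in edges:
        a, b = kpts2d[i], kpts2d[j]
        ab = b - a
        t = ((pix - a) @ ab / (ab @ ab + 1e-6)).clamp(0, 1)    # (P,)
        closest = a + t.unsqueeze(-1) * ab                     # (P, 2)
        dist2 = ((pix - closest) ** 2).sum(-1)
        canvas = torch.maximum(canvas, torch.exp(-dist2 / (2 * sigma ** 2)))
    return canvas.view(1, size, size)

# Hypothetical lightweight reconstruction network: edge map -> RGB image.
recon_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

def self_supervised_loss(kpts3d, proj_mats, target_views, edges):
    # target_views: list of (3, H, W) generated views used as reconstruction targets.
    loss = 0.0
    for proj, target in zip(proj_mats, target_views):
        emap = edge_map(project_keypoints(kpts3d, proj), edges,
                        size=target.shape[-1]).unsqueeze(0)     # (1, 1, H, W)
        recon = recon_net(emap)                                 # (1, 3, H, W)
        loss = loss + ((recon - target.unsqueeze(0)) ** 2).mean()  # photometric L2
    return loss / len(target_views)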
Human3.6M (train / test)
Human3.6M (train) → DAVIS (test)
Stanford Dogs (train / test)
Stanford Dogs (train) → AP-10K (test)
@misc{jeon2025unsupervisedmonocular3dkeypoint,
title={Unsupervised Monocular 3D Keypoint Discovery from Multi-View Diffusion Priors},
author={Subin Jeon and In Cho and Junyoung Hong and Seon Joo Kim},
year={2025},
eprint={2507.12336},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2507.12336},
}