Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation

CVPR 2022

Jogendra Nath Kundu Siddharth Seth* Pradyumna YM* Varun Jampani

Anirban Chakraborty R. Venkatesh Babu

Indian Institute of Science, Bengaluru Google Research

Abstract

The advances in monocular 3D human pose estimation are dominated by supervised techniques that require large-scale 2D/3D pose annotations. Such methods often behave erratically in the absence of any provision to discard unfamiliar out-of-distribution data. To this end, we cast the 3D human pose learning as an unsupervised domain adaptation problem. We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations; a) model-free joint localization and b) model-based parametric regression. Such a design allows us to derive suitable measures to quantify prediction uncertainty at both pose and joint level granularity. While supervising only on labeled synthetic samples, the adaptation process aims to minimize the uncertainty for the unlabeled target images while maximizing the same for an extreme out-of-distribution dataset (backgrounds). Alongside synthetic-to-real 3D pose adaptation, the joint-uncertainties allow expanding the adaptation to work on in-the-wild images even in the presence of occlusion and truncation scenarios. We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
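The two-head design described above can be sketched in a few lines. The snippet below is an illustrative PyTorch sketch, not the released implementation: the function names, tensor shapes, and the specific choice of head disagreement as the uncertainty measure are assumptions made for clarity.

```python
import torch


def pose_uncertainty(localization_joints, parametric_joints):
    """Per-joint and per-pose uncertainty from the disagreement between
    the model-free localization head and the model-based parametric head.

    Both inputs are assumed to be 3D joint predictions of shape
    (batch, num_joints, 3). Joint-level uncertainty is the Euclidean
    distance between the two heads; pose-level uncertainty is its mean.
    """
    joint_unc = torch.norm(localization_joints - parametric_joints, dim=-1)
    pose_unc = joint_unc.mean(dim=-1)
    return joint_unc, pose_unc


def adaptation_loss(target_pose_unc, ood_pose_unc, margin=1.0):
    """Adaptation objective sketch: minimize uncertainty on unlabeled
    target poses while pushing uncertainty on out-of-distribution
    (background) images above a margin. The hinge form and margin value
    are illustrative assumptions."""
    return target_pose_unc.mean() + torch.relu(margin - ood_pose_unc).mean()
```

In this sketch, confident in-distribution samples drive the two heads toward agreement, while the margin term keeps the heads deliberately inconsistent on background images, giving the network a signal for flagging out-of-distribution inputs.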

Code Setup and Training

  1. Requirements

This codebase was developed and tested with Python 3.8. To install the dependencies, create a virtual environment and install the packages:

python -m venv env

source env/bin/activate

pip install -r requirements.txt

  2. Code Structure

The project is organized into the following folders:

  • source_pretrain contains the code used to train MRP-Net on the labeled source dataset.

  • target_adaptation contains the code used to perform unsupervised adaptation to an unlabeled target dataset.

  3. Training Procedure

Target adaptation requires an uncertainty-aware network pretrained on a labeled source dataset. To train on the labeled source dataset, run:

cd source_pretrain

python train.py

To adapt to a target dataset, copy the source-trained model into the log_dir folder of the target_adaptation directory, then run pose_adaptation.py in that directory:

cd target_adaptation

python pose_adaptation.py

For further details, please refer to the README.md file in the repository.

Citation

If you find our work helpful in your research, please cite:

@inproceedings{kundu2022mrc,

title={Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation},

author={Kundu, Jogendra Nath and Seth, Siddharth and YM, Pradyumna and Jampani, Varun and Chakraborty, Anirban and Babu, R Venkatesh},

booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},

year={2022}

}

License

This project is licensed under the MIT License.

Contact

If you have any queries, please get in touch via email: jogendranathkundu@gmail.com.