Appearance Consensus Driven Self-Supervised Human Mesh Recovery
ECCV 2020 (Oral)
Jogendra N Kundu*, Mugalodi Rakesh*, Varun Jampani, Rahul M V, R. Venkatesh Babu
Indian Institute of Science · Google Research
Long talk video
Overview video
Abstract
We present a self-supervised human mesh recovery framework to infer human pose and shape from monocular images in the absence of any paired supervision. Recent advances have shifted the interest towards directly regressing parameters of a parametric human model by supervising them on large-scale datasets with 2D landmark annotations. This limits the generalizability of such approaches to operate on images from unlabeled wild environments. Acknowledging this, we propose a novel appearance consensus-driven self-supervised objective. To effectively disentangle the foreground (FG) human, we rely on image pairs depicting the same person (consistent FG) in varied pose and background (BG), which are obtained from unlabeled wild videos. The proposed FG appearance consistency objective makes use of a novel, differentiable color-recovery module to obtain vertex colors without the need for any appearance network, via efficient realization of color-picking and reflectional symmetry. Furthermore, the resulting colored mesh prediction opens up the usage of our framework for a variety of appearance-related tasks beyond pose and shape estimation.
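The following is a minimal PyTorch sketch (not the authors' released code) of the two ideas the abstract names: "color-picking", i.e. differentiably sampling image colors at the 2D projections of mesh vertices, and an appearance-consensus loss that asks the vertex colors recovered from two images of the same person to agree. The symmetry index map sym_idx, the normalized 2D vertex coordinates, and all tensor shapes are illustrative assumptions.

# Sketch under assumed inputs: images in [0, 1], vertex projections already
# normalized to [-1, 1], and a precomputed left/right mirror-vertex index map.
import torch
import torch.nn.functional as F

def pick_vertex_colors(image, verts_2d, sym_idx):
    """image: (B, 3, H, W); verts_2d: (B, V, 2) in [-1, 1] (x, y);
    sym_idx: (V,) index of each vertex's reflectional-symmetry partner."""
    # grid_sample expects a (B, H_out, W_out, 2) grid; treat the V vertices
    # as a 1 x V sampling grid (this is the "color-picking" step).
    grid = verts_2d.unsqueeze(1)                              # (B, 1, V, 2)
    colors = F.grid_sample(image, grid, align_corners=True)   # (B, 3, 1, V)
    colors = colors.squeeze(2).permute(0, 2, 1)               # (B, V, 3)
    # Reflectional symmetry: average each vertex color with its mirrored
    # counterpart, so self-occluded vertices borrow color from the visible side.
    return 0.5 * (colors + colors[:, sym_idx])

def appearance_consensus_loss(colors_a, colors_b):
    """Consistency of per-vertex colors recovered from two images of the same person."""
    return F.l1_loss(colors_a, colors_b)

if __name__ == "__main__":
    B, V, H, W = 2, 6890, 224, 224          # 6890 is the SMPL vertex count
    img_a, img_b = torch.rand(B, 3, H, W), torch.rand(B, 3, H, W)
    verts_a = (torch.rand(B, V, 2) * 2 - 1).requires_grad_()
    verts_b = (torch.rand(B, V, 2) * 2 - 1).requires_grad_()
    sym_idx = torch.randperm(V)             # placeholder mirror map
    loss = appearance_consensus_loss(
        pick_vertex_colors(img_a, verts_a, sym_idx),
        pick_vertex_colors(img_b, verts_b, sym_idx))
    loss.backward()                         # gradients flow to vertex projections

Because grid_sample is differentiable with respect to the sampling grid, the consensus loss can back-propagate through the projected vertex locations to the pose and shape regressor without any separate appearance network, which is the property the abstract highlights.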
Citation
If you find our work helpful in your research, please cite:
@inproceedings{kundu_human_mesh,
  title={Appearance Consensus Driven Self-Supervised Human Mesh Recovery},
  author={Kundu, Jogendra Nath and Rakesh, Mugalodi and Jampani, Varun and Venkatesh, Rahul M and Babu, R. Venkatesh},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020}
}
License
This project is licensed under the MIT License.