CodeNeRF: Disentangled Neural Radiance Fields for Object Categories

ICCV 2021

Wonbong Jang, Lourdes Agapito

University College London


Paper | Code

Abstract

CodeNeRF is an implicit 3D neural representation that learns the variation of object shapes and textures across a category and can be trained, from a set of posed images, to synthesize novel views of unseen objects. Unlike the original NeRF, which is scene specific, CodeNeRF learns to disentangle shape and texture by learning separate embeddings. At test time, given a single unposed image of an unseen object, CodeNeRF jointly estimates camera viewpoint, and shape and appearance codes via optimization. Unseen objects can be reconstructed from a single image, and then rendered from new viewpoints or their shape and texture edited by varying the latent codes. We conduct experiments on the SRN benchmark, which show that CodeNeRF generalises well to unseen objects and achieves on-par performance with methods that require known camera pose at test time. Our results on real-world images demonstrate that CodeNeRF can bridge the sim-to-real gap.
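The key ingredient is that the NeRF MLP is conditioned on two separate latent codes rather than being scene specific. One way to realise this is to let the shape code modulate the density branch and the texture code modulate the view-dependent colour branch, as in the PyTorch sketch below; the layer widths, positional-encoding frequencies, and module names here are our assumptions, not the released architecture.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs):
    # Standard NeRF-style sinusoidal encoding of points / directions.
    out = [x]
    for i in range(num_freqs):
        out += [torch.sin((2 ** i) * x), torch.cos((2 ** i) * x)]
    return torch.cat(out, dim=-1)

class DisentangledNeRF(nn.Module):
    """Sketch of a radiance field conditioned on separate shape/texture codes."""
    def __init__(self, latent_dim=256, hidden=256):
        super().__init__()
        pos_dim = 3 + 3 * 2 * 10          # encoded 3D point (10 frequencies)
        dir_dim = 3 + 3 * 2 * 4           # encoded view direction (4 frequencies)
        self.shape_mlp = nn.Sequential(
            nn.Linear(pos_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)            # density depends on shape code only
        self.color_mlp = nn.Sequential(
            nn.Linear(hidden + dir_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),           # RGB from texture code + view direction
        )

    def forward(self, xyz, view_dir, z_shape, z_texture):
        h = self.shape_mlp(torch.cat([positional_encoding(xyz, 10), z_shape], dim=-1))
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.color_mlp(torch.cat([h, positional_encoding(view_dir, 4), z_texture], dim=-1))
        return sigma, rgb
```

Because the density in this sketch depends only on the shape code, changing the texture code edits appearance without altering geometry, which is what enables the disentangled editing shown below.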

Qualitative comparisons with PixelNeRF (ShapeNet-SRN single object)

CodeNeRF achieves shape and texture disentanglement while maintaining geometric and appearance consistency in occluded regions, whereas PixelNeRF renders sharper images when the target view is close to the input view.

Latent Space Interpolation

Top left / top right: target texture / target shape

Bottom left: shape kept fixed, texture interpolated towards the target (see the sketch below)

Bottom right: texture kept fixed, shape interpolated towards the target
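Such edits and interpolations amount to linear blends of the optimized latent codes. The sketch below is illustrative only (the code dimension and variable names are placeholders); it shows how the texture-only sweep in the bottom-left row could be generated.

```python
import torch

def interpolate_codes(z_a, z_b, num_steps=8):
    """Linearly interpolate between two latent codes (shape or texture)."""
    weights = torch.linspace(0.0, 1.0, num_steps)
    return [torch.lerp(z_a, z_b, w) for w in weights]

# Example: keep the shape code of object A fixed and sweep only the texture
# code towards object B (placeholder 256-dim codes).
z_shape_a, z_texture_a = torch.randn(256), torch.randn(256)
z_texture_b = torch.randn(256)
frames = [(z_shape_a, z_t) for z_t in interpolate_codes(z_texture_a, z_texture_b)]
# Each (shape, texture) pair would then be rendered from a fixed camera.
```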

Lifting 3D mesh

After the test-time optimization, the centres of all voxels in a regular grid are fed into the network to obtain occupancy densities. The mesh vertices are then extracted by running marching cubes on the resulting density grid.
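A minimal sketch of this mesh-lifting step, assuming a `density_fn` wrapper around the trained network with a fixed shape code; the grid resolution, bounds, and iso-surface threshold below are illustrative choices, not values from the paper.

```python
import torch
from skimage import measure

@torch.no_grad()
def extract_mesh(density_fn, z_shape, grid_res=128, bound=1.0, level=10.0):
    """Query densities at voxel centres and run marching cubes on the volume."""
    # Build a regular grid of voxel-centre coordinates inside [-bound, bound]^3.
    lin = torch.linspace(-bound, bound, grid_res)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1)

    # In practice this query would be chunked to fit in memory.
    densities = density_fn(grid.reshape(-1, 3), z_shape)
    densities = densities.reshape(grid_res, grid_res, grid_res)

    # Marching cubes on the density volume gives the mesh vertices and faces.
    voxel_size = 2 * bound / (grid_res - 1)
    verts, faces, normals, _ = measure.marching_cubes(
        densities.cpu().numpy(), level=level, spacing=(voxel_size,) * 3)
    verts -= bound  # shift back into the [-bound, bound]^3 frame
    return verts, faces
```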

Real Cars / Chairs

After training on ShapeNet-SRN Cars and Chairs, both latent vectors are optimized for (pre-processed) real car images (Stanford-Cars) and chair images (Pix3D). We then perform novel view synthesis and editing.
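Concretely, this test-time step freezes the trained network and runs gradient descent on the shape code, texture code, and camera pose against a photometric loss. The sketch below is illustrative only: the `render` helper, pose parameterisation, code dimensions, and learning rate are assumptions rather than the released implementation.

```python
import torch

def fit_single_image(model, render, target_rgb, num_iters=200, lr=1e-2):
    """Optimize latent codes and camera pose for one unposed image.

    `model` is a trained network with frozen weights, and
    `render(model, pose, z_shape, z_texture)` is assumed to return an image
    with the same shape as `target_rgb`; both are placeholders for this sketch.
    """
    z_shape = torch.zeros(256, requires_grad=True)
    z_texture = torch.zeros(256, requires_grad=True)
    pose = torch.zeros(6, requires_grad=True)   # e.g. axis-angle rotation + translation

    optimizer = torch.optim.Adam([z_shape, z_texture, pose], lr=lr)
    for _ in range(num_iters):
        optimizer.zero_grad()
        pred_rgb = render(model, pose, z_shape, z_texture)
        loss = torch.mean((pred_rgb - target_rgb) ** 2)   # photometric (MSE) loss
        loss.backward()
        optimizer.step()
    return z_shape.detach(), z_texture.detach(), pose.detach()
```

The recovered codes can then be rendered from new viewpoints, or swapped and interpolated as in the editing examples above.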

Citation / Bibtex

@InProceedings{jang2021codenerf,
    author    = {Jang, Wonbong and Agapito, Lourdes},
    title     = {CodeNeRF: Disentangled Neural Radiance Fields for Object Categories},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {12949-12958}
}

Acknowledgement

Research presented here has been supported by funding from Cisco to the UCL AI Centre.