SeamlessNeRF: Stitching Part NeRFs with Gradient Propagation

Bingchen Gong*,  Yuehao Wang*,  Xiaoguang Han and Qi Dou

* Denotes Equal Contribution

SIGGRAPH Asia 2023

Abstract

Neural Radiance Fields (NeRFs) have emerged as promising digital media for representing 3D objects and scenes, sparking a surge in research to extend the editing capabilities in this domain. The task of seamlessly editing and merging multiple NeRFs, resembling "Poisson blending" in 2D image editing, remains a critical operation that is under-explored by existing work. To fill this gap, we propose SeamlessNeRF, a novel approach for seamless appearance blending of multiple NeRFs. Specifically, we aim to optimize the appearance of a target radiance field in order to harmonize its merge with a source field. We propose a well-tailored optimization procedure for blending, which is constrained by 1) pinning the radiance color in the intersecting boundary area between the source and target fields and 2) maintaining the original gradient of the target. Extensive experiments validate that our approach can effectively propagate the source appearance from the boundary area to the entire target field through the gradients. To the best of our knowledge, SeamlessNeRF is the first work that introduces gradient-guided appearance editing to radiance fields, offering a solution for the seamless stitching of 3D objects represented as NeRFs.

Method Overview

In this paper, we present SeamlessNeRF, a novel framework designed to facilitate gradient-based appearance blending of NeRFs. Given source and target objects represented as NeRFs, our key idea is to optimize the appearance of the target field to harmonize its texture style with the source field, thereby achieving a smooth transition between the two fields. To this end, we begin by transforming the source and target NeRFs into a unified homogeneous coordinate space, followed by constructing a piecewise radiance field based on their density values. During the optimization for appearance blending, the radiance of the source and target are reconciled in their intersecting boundary area. Concurrently, we impose regularization on the gradient field of the target to preserve its original appearance variation. This enables the propagation of radiance color from the source to the target through the gradient field. To efficiently compute radiance gradients in the unified 3D space, we resort to finite differences in the implicit fields. Since radiance fields are view-dependent, we propose a sampling strategy to identify each point's view direction. Furthermore, a side-branch fine-tuning scheme is incorporated into the appearance optimization procedure for faster convergence and better preservation of texture details.
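The two constraints above can be sketched as a single training objective: pin the target's radiance to the source color on the shared boundary, and penalize deviation of the target's spatial gradient (approximated by finite differences) from that of its frozen original copy. The sketch below is illustrative only and is not the authors' implementation; the callables `source_rgb`, `target_rgb`, `target_rgb_orig` (each mapping points and view directions to RGB) and the boundary mask `in_boundary` are assumed names.

```python
import torch

def finite_diff_grad(field, xyz, view_dir, eps=1e-3):
    """Approximate the spatial gradient of a radiance field by central
    finite differences along each of the three coordinate axes."""
    grads = []
    for axis in range(3):
        offset = torch.zeros_like(xyz)
        offset[:, axis] = eps
        # Central difference of the RGB output along this axis.
        g = (field(xyz + offset, view_dir) - field(xyz - offset, view_dir)) / (2 * eps)
        grads.append(g)
    return torch.stack(grads, dim=-1)  # shape: (N, 3 color channels, 3 axes)

def blending_loss(target_rgb, target_rgb_orig, source_rgb, xyz, view_dir, in_boundary):
    """Hypothetical blending objective: boundary color pinning plus
    gradient-field preservation, in the spirit of Poisson blending."""
    # 1) Pin the target's radiance to the (frozen) source color on the boundary.
    mask = in_boundary(xyz)
    pin = ((target_rgb(xyz[mask], view_dir[mask])
            - source_rgb(xyz[mask], view_dir[mask]).detach()) ** 2).mean()
    # 2) Preserve the target's original gradient field, so the boundary
    #    color propagates smoothly into the interior of the target.
    g_new = finite_diff_grad(target_rgb, xyz, view_dir)
    g_old = finite_diff_grad(target_rgb_orig, xyz, view_dir).detach()
    grad_term = ((g_new - g_old) ** 2).mean()
    return pin + grad_term
```

In practice the two terms would be weighted, and `xyz`/`view_dir` would come from the paper's point- and view-direction sampling strategy; the sketch only fixes the structure of the objective.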

Main Results

Showcase of our appearance blending results. In (a) and (b), we present reference images of the source and target radiance fields, respectively. Column (c) shows the result of directly merging the source and target. The last three columns in (d) exhibit three views of the merged and appearance-blended radiance fields.

Demo Video

Citation

If you find our work helpful, please consider citing:


@InProceedings{gong2023seamlessnerf,
  title={SeamlessNeRF: Stitching Part NeRFs with Gradient Propagation},
  author={Gong, Bingchen and Wang, Yuehao and Han, Xiaoguang and Dou, Qi},
  booktitle={ACM SIGGRAPH Asia},
  year={2023}
}