Environment Map From A Single Image
People
Song Park, Junsuk Choe and Hyunjung Shim
Abstract
We propose a new approach to predicting the surrounding environment map from a single image of the scene using deep adversarial networks. Using a state-of-the-art model as the baseline network, we employ an adversarial loss and confidence masks to improve the quality of the estimated environment maps. Through comparisons on a benchmark database, we show that the proposed approach increases both the visual quality of the results and the stability of training.
Concept
Aim to restore sharp and realistic environment maps
Employ an adversarial loss and confidence masks to improve the quality of estimated environment maps
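The combined objective described above can be sketched as a confidence-masked reconstruction term plus a weighted adversarial term. The following is a minimal NumPy sketch; the function names, the L1 form of the reconstruction loss, and the weight `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def masked_recon_loss(pred, target, confidence):
    """Reconstruction loss weighted by a per-pixel confidence mask.

    Pixels with low confidence contribute less to the loss, so unreliable
    regions of the estimated environment map are down-weighted.
    (Illustrative L1 form; the actual loss in the paper may differ.)
    """
    return np.sum(confidence * np.abs(pred - target)) / np.sum(confidence)

def total_loss(recon_loss, adv_loss, lam=0.01):
    """Combine reconstruction and adversarial terms.

    lam balances the adversarial term against reconstruction fidelity;
    its value here is a placeholder, not taken from the paper.
    """
    return recon_loss + lam * adv_loss
```

For example, with a 2x2 prediction, a zero target, and a mask that trusts only the first column, only the trusted pixels contribute to the averaged loss:

```python
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.zeros((2, 2))
conf = np.array([[1.0, 0.0], [1.0, 0.0]])
masked_recon_loss(pred, target, conf)  # (1 + 3) / 2 = 2.0
```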
Overview
Results
Results of the baseline model [1] and the proposed models. (a) Background images, (b) reflectance maps, (c) results of the baseline model [1]: cloud-like artifacts are observed and the outputs are generally blurry. (d) Baseline with a confidence mask. (e) Baseline with an adversarial loss: images in (e) are sharper than those in (c). (f) Baseline with both a confidence mask and an adversarial loss: images in (f) are much improved, with sharp and detailed textures.
Fig. 2 shows how the results vary across different models. Each column represents a different method: baseline, baseline+mask, baseline+AL, and baseline+mask+AL. We observe that our results present better visual quality than the others in terms of restoring the sharpness and color distribution of the reference environment maps.
Fig. 2. Experimental results.