ACM Transactions on Graphics 2016
Figure 1: An application of our building model. Starting from a single image (Input), a user can specify parts of a building mass model and mark shapes (windows and doors) on the observed part of the facade in a rectified image (Element Selection). Our framework completes the missing parts of the building (mass model and facades); a 3D rendering of the completed building is shown at the bottom.
We propose a new framework to model the exterior of residential buildings. The main goal of our work is to design a model that can be learned from data observable from the outside of a building, i.e., trained with widely available sources such as aerial and street-view images. First, we propose a parametric model that describes a building exterior with a varying number of parameters, together with a set of attributes that represents a building with fixed dimensionality. Second, we propose a hierarchical graphical model with hidden variables that encodes the relationships between building attributes; both the structure and the parameters of this model are learned from a database. Third, we propose optimization algorithms that generate three-dimensional models from building attributes sampled from the graphical model. Finally, we demonstrate our framework by synthesizing new building models and by completing partially observed buildings from photographs.
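The hierarchical model with hidden variables can be illustrated with a toy sketch: a latent style variable is sampled first, and the observable building attributes are then drawn from distributions conditioned on it. All names, styles, and attribute values below are illustrative assumptions for exposition, not the paper's learned model or attribute set.

```python
import random

# Toy two-level graphical model (illustrative only): a hidden "style"
# variable governs the conditional choices of observable attributes,
# mirroring the hierarchical structure described in the abstract.
STYLES = {
    "ranch":    {"num_floors": [1],    "roof": ["hip", "gable"]},
    "colonial": {"num_floors": [2, 3], "roof": ["gable"]},
}

def sample_building(rng):
    """Sample a hidden style, then attributes conditioned on that style."""
    style = rng.choice(sorted(STYLES))
    opts = STYLES[style]
    return {
        "style": style,                                  # hidden variable
        "num_floors": rng.choice(opts["num_floors"]),    # observable attribute
        "roof": rng.choice(opts["roof"]),                # observable attribute
    }

rng = random.Random(0)
building = sample_building(rng)
```

In the actual framework, both the graph structure and the conditional distributions are learned from the building database rather than specified by hand, and the sampled fixed-dimensional attribute vector is turned into a 3D model by the optimization step.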
Figure 2: Overview of our framework.
Figure 3: Six buildings synthesized using our algorithm.
Figure 4: Given an incomplete building model, our algorithm can generate multiple completions. Top row: photograph of the building, and two views of the incomplete building. Middle and bottom rows show three possible completions. For each completion, the front and back of the building are shown.
We would like to acknowledge the help of Yoshihiro Kobayashi and Christopher Grasso for rendering most of the images in this paper, the help of Tina Smith for video narration, and the help of Virginia Unkefer for proofreading. This publication is based upon work supported by the Office of Sponsored Research (OSR) under Award No. OCRF-2014-CRG3-62140401 and the KAUST Visual Computing Center.