Augmented Reality for Construction


Recent advances in computer interface design, and the ever-increasing power and miniaturization of computer hardware, have combined to make the use of augmented reality possible in demonstration testbeds for building construction, maintenance, and renovation. In the spirit of the first see-through head-mounted display developed by Sutherland (Sutherland, 1968), we and other researchers (e.g., Robinett, 1992; Caudell & Mizell, 1992; Bajura & Neumann, 1995) use the term augmented reality to refer to enrichment of the real world with a complementary virtual world. The augmented reality systems we are developing employ a see-through head-worn display that overlays graphics and sound on a person's naturally occurring sight and hearing. By tracking users and objects in space, these systems provide users with visual information that is tied to the physical environment. Unlike most virtual realities, whose virtual worlds replace the real world, augmented reality systems enhance the real world by superimposing information onto it. The spatial tracking capabilities of our augmented reality systems distinguish them from the heads-up displays featured in some wearable computer systems (Quinn, 1993; Patents, 1994; Smailagic & Siewiorek, 1994).

As part of a program aimed at developing a variety of high-performance user interfaces, we have developed a testbed augmented reality system that addresses spaceframe construction (Webster et al. 1996). Spaceframes are typically made from a large number of components of similar size and shape (typically cylindrical struts and spherical nodes). Although the exterior dimensions of all the members may be identical, the forces they carry, and therefore their inner diameters, vary with their position in the structure. Consequently, it is relatively easy to assemble pieces in the wrong position, which, if undetected, could lead to structural failure. Our augmented reality construction system is designed to guide workers through the assembly of a spaceframe structure, to ensure that each member is properly placed and fastened.

System Description

Figure 1. Spaceframe used in our demonstration testbed.

Our prototype spaceframe structure, shown in Figure 1, is a diamond-shaped, full-scale aluminum system manufactured by Starnet International (Starnet 1995). We have created a 3D computer model of the spaceframe, an ordered list of assembly steps, and a digitized set of audio files containing instructions for each step. Undergraduate Computer Science, Engineering, and Architecture students helped develop the testbed as part of an NSF-sponsored educational grant in conjunction with Columbia University's Teachers College.

Figure 2. Headworn display with optical tracker.

The system's head-worn display is a Virtual I/O see-through stereoscopic color display with integral headphones and orientation tracker (Figure 2). Position tracking is provided by an Origin Instruments DynaSight optical radar tracker, which tracks small LED targets on the head-worn display. The user interface also includes a hand-held barcode reader, which carries its own optical target and is likewise tracked by the DynaSight.
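To register overlaid graphics with the physical world, the display's built-in orientation tracker and the DynaSight's position fix on the LED targets must be fused into a single head pose. The sketch below shows one conventional way to do this; the function name, quaternion convention (w, x, y, z), and millimeter units are our illustrative assumptions, not details of the actual system.

```python
import numpy as np

def head_pose(led_position_mm, orientation_quat):
    """Combine a 3D position fix (from the optical tracker) with an
    orientation quaternion (from the head-worn display's tracker) into
    a 4x4 rigid transform from head frame to world frame.

    Hypothetical helper for illustration; not the testbed's actual API.
    """
    w, x, y, z = orientation_quat
    # Standard unit-quaternion-to-rotation-matrix conversion.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R                    # rotation from the orientation tracker
    T[:3, 3] = led_position_mm       # translation from the optical tracker
    return T
```

Given this pose, virtual struts and textual instructions can be drawn in world coordinates so they appear fixed to the structure as the user's head moves.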

The spaceframe is assembled one component (strut or node) at a time. For each step of construction, the augmented reality system:

  • Directs the worker to a pile of parts and tells her which part to pick up. This is currently done by displaying textual instructions and playing a sound file containing verbal instructions.
  • Confirms that she has the correct piece. This is done by having her scan a barcode on the component.
  • Directs her to install the component. A 3D virtual image of the component indicates where to install the component (Figures 3-5), and verbal instructions played from a sound file explain how to install it (Figure 6).
  • Verifies that the component is installed by asking her to scan the component with the tracked barcode scanner (Figure 7). This checks both the identity and position of the part.
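The four steps above can be sketched as a simple guidance loop. This is an illustrative Python sketch only: all names (Component, pick_part, tracked_scan, the tolerance value) are hypothetical, and the real system drives graphics overlays and digitized audio rather than callbacks.

```python
from dataclasses import dataclass

@dataclass
class Component:
    part_id: str          # barcode identity of the strut or node
    kind: str             # "strut" or "node"
    target_pos: tuple     # assembly position in the structure (x, y, z)

POSITION_TOLERANCE = 50.0  # assumed acceptance radius for the tracked scan

def distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def guide_assembly(steps, pick_part, install, tracked_scan):
    """Walk the worker through the ordered assembly steps.

    pick_part(step)    -> barcode scanned at the parts pile
    install(step)      -> worker installs while overlay/audio play
    tracked_scan(step) -> (barcode, scanner_position) after installation
    """
    for step in steps:
        # 1-2. Direct the worker to the pile and confirm the piece:
        # re-prompt until the correct barcode is scanned.
        while pick_part(step) != step.part_id:
            pass
        # 3. Direct installation (virtual overlay + verbal instructions).
        install(step)
        # 4. Verify both identity and position with the tracked scanner.
        barcode, pos = tracked_scan(step)
        assert barcode == step.part_id, "wrong part installed"
        assert distance(pos, step.target_pos) <= POSITION_TOLERANCE, \
            "part installed in the wrong location"
```

Because the barcode scanner is itself position-tracked, the final scan checks the part's identity and its location in one operation.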
Figure 3. Real world. User's view through the see-through head-worn display of the real world, showing two struts and a node without any overlaid graphics. 

Figure 4. Virtual world. User's view of the virtual world intended to overlay the view of the real world shown in Figure 3. This image was captured with the real world view blocked. It shows a virtual strut and textual instructions that explain how to install it.

Figure 5. Real + virtual worlds. User's view through the see-through head-worn display showing the real world of Figure 3 combined with the virtual world of Figure 4. In this image the user sees two real struts and a real node overlaid with a virtual strut (top) and textual instructions that instruct the user to install the strut in the location shown. 

Figure 6. The user installs the strut requested in Figure 5. 

Figure 7. The user scans a newly installed part (in this case, a node) with the tracked barcode scanner. 

The software infrastructure in which our prototype is built is COTERIE, a multi-platform, distributed system that has been designed to allow a potentially large number of users to interact in a shared environment (MacIntyre and Feiner 1996). It runs on an assortment of hardware under UNIX, Windows NT, and Windows 95. The majority of the infrastructure was written in Modula-3, a compiled language that is well-suited for building large distributed systems. The remainder of the infrastructure, and the majority of its applications (such as the spaceframe prototype), are written in Obliq, an interpreted language that is tightly integrated with Modula-3. Application programmers are free to use either language or both. For this application we use shaded, hidden-surface-removed 3D graphics running in software under Criterion RenderWare because of the relative simplicity of the models we are rendering. We provide support for 2D applications using the native window system. This allows us to display X11 application windows on all platforms and native Windows NT/95 windows on Microsoft platforms.
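A shared environment of this kind typically rests on distributed objects whose state changes are pushed to every interested client. The fragment below is a minimal Python analogue of that observer pattern, not COTERIE's actual Modula-3/Obliq API; the class and method names are invented for illustration.

```python
class SharedObject:
    """An object whose updates notify registered observers, e.g., each
    user's view redraws when the shared construction state changes.
    Illustrative only; in a distributed system the observers would be
    remote clients rather than local callbacks."""

    def __init__(self, state):
        self._state = dict(state)
        self._observers = []

    def register(self, callback):
        # callback receives a snapshot of the state after each update
        self._observers.append(callback)

    def update(self, **changes):
        self._state.update(changes)
        for cb in self._observers:
            cb(dict(self._state))

# A shared record of construction progress that worker and supervisor
# views could both observe.
progress = SharedObject({"installed": 0, "current_step": "first strut"})
```

In such a design, the construction worker's actions update the shared object once, and every supervisor view receives the same notification, which matches the multi-user extension described below.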

The original version of our demonstration included support for only a single user (the construction worker). We are currently modifying the system to allow two additional users wearing see-through head-worn displays to each see a site supervisor's view of the construction process. Each of these additional users will see the construction worker and additional parts of the building, along with information about construction progress, current tasks, and parts availability.

Images from a Live Demonstration

The following images were taken at a live demonstration of our system at the American Society of Civil Engineers Third Congress on Computing in Civil Engineering, Anaheim, CA, June 17-19, 1996.

Figure 8. The spaceframe demonstration testbed set up at the Disneyland Hotel, Anaheim, CA. 

Figure 9. A user scans a newly installed node to verify its identity and placement. 

Figure 10. A user finds the next strut to be installed and verifies its identity with the scanner. 

Figure 11. A user installs a strut. 


This research is supported in part by the Office of Naval Research under Contract N00014-94-1-0564; NSF Gateway Engineering Coalition under NSF Grant EEC-9444246; the Columbia University CAT in High Performance Computing and Communications in Healthcare, a NY State Center for Advanced Technology supported by the NY State Science and Technology Foundation; the Columbia Center for Telecommunications Research under NSF Grant ECD-88-1111; and NSF Grant CDA-92-23009. Starnet International Inc. provided the spaceframe. The undergraduates involved in the development of the spaceframe prototype were Rod Freeman (Mechanical Engineering), Jenny Wu (Architecture), and Melvin Lew (Computer Science).