At the cutting edge of research, 3D laser scanning, remote sensing, global positioning systems (GPS) and geographic
information systems (GIS), photogrammetry, and computer modeling have been used to collect and document data on
significant cultural heritage sites. Virtual reconstructions integrate the complex layers of archaeological, historical, and
cultural data and provide scholars with tools to visualize, analyze, and test hypotheses against the data.
This project is a collaboration between the Teleimmersion Lab at the University of California, Berkeley, and the University of
California, Merced, to develop a collaborative application for digital archaeology. The application features a distributed
scene graph built on top of the collaborative Vrui framework developed at UC Davis. The scene graph is managed by a
central server, which sends clients scene graph changes, the 3D positions of all users, and video and audio data for
communication. This server-based model allows for synchronized interaction in the virtual environment. The software
supports visualization of different 3D objects (in Wavefront OBJ format) in combination with GIS data.
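The server-based synchronization model can be illustrated with a minimal sketch. This is not the actual Vrui-based implementation (which is written in C++ and also streams video and audio); the class and method names here (SceneServer, Client, apply_change) are hypothetical, and the sketch shows only the core idea: the server owns the authoritative scene graph and user positions, and pushes every change to all connected clients so their local copies stay synchronized.

```python
# Minimal sketch of a server-managed distributed scene graph.
# All names are illustrative, not taken from the actual software.

class Client:
    """A connected user holding a local replica of the shared state."""
    def __init__(self, name):
        self.name = name
        self.scene = {}           # local copy of the scene graph
        self.peer_positions = {}  # 3D positions of all users

    def receive(self, scene_update, positions):
        # Apply incremental scene graph changes and refresh positions.
        self.scene.update(scene_update)
        self.peer_positions = dict(positions)


class SceneServer:
    """Central server: owns the authoritative state and broadcasts changes."""
    def __init__(self):
        self.scene = {}
        self.positions = {}
        self.clients = []

    def connect(self, client):
        self.clients.append(client)
        client.receive(self.scene, self.positions)  # initial full sync

    def apply_change(self, node, attributes):
        # A scene graph change (e.g., loading a Wavefront OBJ model).
        self.scene[node] = attributes
        self._broadcast({node: attributes})

    def move_user(self, name, xyz):
        # A user moved; broadcast updated positions (no scene change).
        self.positions[name] = xyz
        self._broadcast({})

    def _broadcast(self, scene_update):
        for c in self.clients:
            c.receive(scene_update, self.positions)


server = SceneServer()
alice, bob = Client("alice"), Client("bob")
server.connect(alice)
server.connect(bob)
server.apply_change("temple_mesh", {"file": "temple.obj", "visible": True})
server.move_user("alice", (1.0, 0.0, 2.0))
# Both clients now hold identical replicas of the scene and positions.
```

Because every change flows through the server before reaching any client, all replicas apply updates in the same order, which is what keeps interaction synchronized across the virtual environment.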
The research questions we are interested in include the following: How do people interact with virtual characters and
virtual humans, and how does this affect learning in a virtual environment? Is attention sustained and memory more
robust for information about virtual historic objects (e.g., function of object, location of object) when virtual characters
point at objects while they describe the objects? How will users as avatars grasp and manipulate virtual objects, and
what are the cognitive benefits of this type of interaction? Two crucial aspects concern the roles of “awareness” and “imitation”.