All locations of historical significance are at risk of being vandalized. If these locations existed in an approximate mirrored digital world, they could never truly be forgotten or destroyed. To mitigate this loss of experience, we provide a way to simulate traveling to these environments digitally using remote scanning, 3D procedural software, game engines, and VR technologies. We are able to reconstruct and build immersive interactions to be used in cultural heritage preservation, archaeological research, and education. In the field data acquisition stage we collect photographs and scans of objects and environments; then a series of post-processing steps, such as unwrapping a 3D object into 2D space and baking the lighting interactions of complex surfaces, are required for real-time compatibility. We have assembled a 3D automation toolkit that focuses on transforming point cloud data into optimized 3D assets using SideFX Houdini. This allows anyone to create 3D assets at a higher level without digging into details that take away from the art creation process. The 3D assets produced can be imported into the Unreal Engine Model Viewer to simulate lighting and render out images for archaeological publications. Additionally, they are used as environmental art for our virtual 3D interactive tour of the El Zotz archaeological site in Guatemala, built with the Unity game engine. Together these systems provide a platform for newcomers to quickly become citizen scientists who can contribute to virtually preserving cultural heritage in an ever-changing environment.
Digital documentation of excavation M7-1 at El Zotz, Guatemala has been carried out over several field seasons using terrestrial LiDAR. In 2014 the FARO Focus 3D 120 was used to generate a precise 3D model to aid archaeologists in tunnel mapping. In 2016, structure M7-1 was excavated further, and a new point cloud was acquired and merged with the previous data. In 2019, the Leica BLK360 imaging laser scanner was used to append newly excavated tunnels to the previously acquired point clouds. Additionally, during these field expeditions Structure from Motion (SfM) was performed to obtain color information from key areas and Physically-Based Rendering (PBR) material samples of the different environmental components. The geometry reconstructed directly from these point clouds can support photo-realistic visualizations, but it is not ideal for the plethora of devices we commonly interact with through an app, website, or game console. Thus an optimization procedure is required to reduce the complexity of these models and make them compatible with consumer electronics. This work has traditionally been done by skilled artists who must manually retopologize, or "simplify," the vertices and edges comprising the 3D geometry, among other tasks. We pursued a method that allows us to feed in a series of images, dense point clouds, or meshes, and transform this data into game-ready assets suitable for archaeological visualizations.
Terrestrial Light Detection and Ranging (LiDAR)
The process for Terrestrial LiDAR Scanning (TLS) generally involves setting the desired scan density, which in effect increases the number of points collected and the time needed per scan. In the case of the Leica BLK360, medium quality was sufficient for our purposes and took 3 minutes per scan. Scans were taken at least 1 meter apart, depending on whether all of the environment's geometry could be reached from the previous scan's location. The set of individual point clouds obtained was then registered using visual alignment in Leica's proprietary software Cyclone Register 360. The output is a set of mutually aligned .ptx files, a standard file format that can be brought into external software for further 3D point cloud and mesh processing.
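Under the hood, registration places every scan in a shared coordinate frame by assigning it a rigid 4x4 transform. A minimal NumPy sketch (hypothetical data, not the Cyclone implementation) of applying such a matrix to a scan's points via homogeneous coordinates:

```python
import numpy as np

def apply_rigid_transform(points, T):
    """Apply a 4x4 rigid transform (as produced by scan registration)
    to an (N, 3) point array using homogeneous coordinates."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]

# Example transform: 90-degree rotation about Z plus a 1 m shift in X.
T = np.array([[0., -1., 0., 1.],
              [1.,  0., 0., 0.],
              [0.,  0., 1., 0.],
              [0.,  0., 0., 1.]])
scan = np.array([[1.0, 0.0, 0.0]])   # a one-point stand-in for a scan
moved = apply_rigid_transform(scan, T)
```

Every registered scan transformed this way lands in the same frame, which is what allows the aligned .ptx files to be merged downstream.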
Structure from Motion (SfM)
Models derived from SfM point clouds can store accurate RGB data at a higher resolution, but with lower geometric accuracy. Since the RGB information collected during SfM has greater accuracy, it is useful for projecting color back onto the mesh generated by LiDAR. We can perform fine registration with the Iterative Closest Point (ICP) algorithm so that the two models share the same coordinate space. Performing SfM on larger environments can be time consuming in both collecting and processing data. Added to this are the extra constraints of consistent scene lighting and obtaining enough coverage and overlap between photos of the entire scene. Thus we conceptually decompose an environment into its most abundant components and perform SfM on those components under controlled lighting conditions, for the purpose of high-detail surface scanning applicable to environmental texturing. The M7-1 excavation was classified into two materials: the general structural component of limestone stucco, and the natural elements comprising the walls and ceiling of the excavation. This surface information can then be fed into newer high-level development tools that use machine learning to upscale and synthesize material variations.
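For illustration, point-to-point ICP can be sketched in a few lines of NumPy. This is a simplified stand-in with synthetic data and brute-force correspondence search, not a production implementation; it alternates nearest-neighbor matching with the Kabsch/SVD solution for the best rigid transform:

```python
import numpy as np

def icp(source, target, iterations=30):
    """Minimal point-to-point ICP aligning `source` onto `target`.
    Returns the aligned points and the accumulated 4x4 rigid transform.
    Brute-force neighbor search: fine for a demo, far too slow for
    multi-million-point field scans."""
    src = source.copy()
    T_total = np.eye(4)
    for _ in range(iterations):
        # 1. Correspondences: nearest target point for each source point.
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
        matched = target[d2.argmin(axis=1)]
        # 2. Optimal rigid transform for these pairs (Kabsch/SVD).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply the increment and accumulate the full transform.
        src = src @ R.T + t
        T = np.eye(4)
        T[:3, :3], T[:3, 3] = R, t
        T_total = T @ T_total
    return src, T_total

# Demo: a cloud offset by a small rotation and translation snaps back.
rng = np.random.default_rng(0)
target = rng.random((200, 3))
theta = np.radians(2)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
c = target.mean(axis=0)
source = (target - c) @ R_true.T + c + np.array([0.01, -0.02, 0.005])
aligned, T_est = icp(source, target)
```

The returned 4x4 matrix is the same kind of artifact CloudCompare reports after fine registration, which can then be applied to the full-resolution cloud.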
Aerial Light Detection and Ranging (LiDAR)
The Foundation for Maya Cultural and Natural Heritage (PACUNAM) funded a LiDAR initiative that surveyed a portion of the Guatemalan jungle, including the El Zotz region of the San Miguel la Palotada Protected Biotope. The scanner rapidly emits pulses of light that pass through multiple surface intersections. Each intersection returns a signal whose time of flight is captured, and from that a 3-dimensional coordinate and a classification can be assigned to each point. The ground points can be extracted to reveal the terrain, which can then be used to generate an accurate landscape that serves as a canvas for aligning TLS and SfM scans to capture a broader environment.
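In the LAS format each return carries an ASPRS classification code (2 = ground), so extracting a bare-earth terrain can be a simple filter. A toy sketch with mock data follows; a real tile would be read with a library such as laspy, which exposes the same classification dimension:

```python
import numpy as np

# Toy stand-in for an aerial LiDAR tile: xyz coordinates plus the
# ASPRS classification code assigned to each return
# (2 = ground, 5 = high vegetation).
points = np.array([
    [0.0, 0.0, 101.2],
    [1.0, 0.0, 125.7],   # canopy return
    [1.0, 1.0, 101.5],
    [2.0, 1.0, 130.1],   # canopy return
])
classification = np.array([2, 5, 2, 5])

# Keep only ground returns: the input for a bare-earth DEM.
ground = points[classification == 2]
```

Gridding the ground returns (e.g. mean elevation per cell) then yields the digital elevation model used as the landscape canvas.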
CloudCompare is an open-source 3D point cloud and mesh processing application with various tools to clean, project, align, subsample, measure, and perform statistical analysis. Once the software is ready, you can drag and drop your set of standard 3D files (.obj, .fbx, .las, .xyz, .ply, etc.) into the main window; loading can take several minutes depending on your hardware specifications. After loading, the sequence for preparing the point cloud for meshing can take many forms. The Statistical Outlier Removal (SOR) filter removes points that deviate from the average distance to their nearest neighbors. Overlapping points and areas of unnecessarily high density can be addressed by subsampling the data set. M7-1 had approximately 30 meters of newly excavated features scanned with the BLK360 that needed to be appended to the original model. To do so, fine registration using the Iterative Closest Point (ICP) algorithm was performed in CloudCompare over a subset of the two point clouds. This process also returns a transformation matrix, which is then applied to the whole point cloud. Merging the exterior of M7-1 with the interior point cloud cluster requires overlap between the two clusters. Additional TLS work was done around the radius of the pyramid in an attempt to capture a digital elevation model (DEM) of the nearby terrain. A cloth simulation filter (CSF) technique can be used to extract ground points from point clouds.
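The SOR filter can be sketched in a few lines of NumPy. This is a simplified stand-in for CloudCompare's implementation, using brute-force neighbor search and synthetic data:

```python
import numpy as np

def sor_filter(points, k=8, std_ratio=1.0):
    """Statistical Outlier Removal: drop points whose mean distance to
    their k nearest neighbors exceeds the global mean of that quantity
    plus `std_ratio` standard deviations."""
    # Full pairwise distance matrix; fine for small demo clouds only.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is self-distance
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]

# A dense cluster plus one stray point far away.
rng = np.random.default_rng(1)
cloud = rng.normal(0.0, 0.05, (200, 3))
cloud = np.vstack([cloud, [5.0, 5.0, 5.0]])
clean = sor_filter(cloud, k=8, std_ratio=2.0)
```

The stray point's mean neighbor distance is orders of magnitude above the cluster's, so it falls past the threshold and is discarded while the dense cluster survives intact.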
Meshroom is an open-source 3D reconstruction application based on the AliceVision framework that provides natural feature extraction, image matching, feature matching, structure from motion, depth map estimation, and meshing. Additionally, the AliceVision plugin for Houdini provides this functionality inside the 3D procedural software. Using Meshroom can be as simple as dragging a set of overlapping images into the application and pressing Play to reconstruct a 3D mesh from photographs. We used Meshroom to produce our photogrammetry assets, which are then sent to Houdini for optimization.
Houdini is a 3D procedural software package that provides a node-based workflow for defining a recipe of transformations. These workflows can then be easily shared across projects. Our workflow contains nodes that provide an iterative interface to tweak photogrammetry meshes and send them through common modeling tasks such as retopology, UV mapping, baking, and texturing. The software is also used to generate a landscape based on LiDAR data and to provide splat maps for detailing the terrain with appropriate materials.
Unity is used to support El Zotz Maya Archeology Quest VR. The application was designed for mobile VR headsets accessed through a museum exhibition or digital download.
Unreal Engine 4 is used to support our Model Viewer publication tool, which archaeologists can use to render out meshed reconstructions of collected field data and provide visuals for outreach.
Unreal Engine 5, coming soon, provides a novel solution for handling seemingly infinite geometry through a "virtualized micropolygon geometry system" called Nanite. It will also provide a new World Partition tool to handle automatic optimization of massive terrains. Future iterations of this project will attempt to push this technology to its limits by importing the raw scans from the archaeological site and procedurally generating a huge surrounding forest environment from the aerial LiDAR survey.
The Oculus Quest was chosen as an all-in-one solution for providing access to these experiences anywhere. The headset includes room-scale tracking and two wireless motion controllers with 6-DOF tracking, or hand-tracking capability, to interact with the virtual world.
The HTC Vive was previously used for VR prototypes tethered to a powerful desktop computer.