Mizzou CAVE
Mizzou CAVE provides researchers and students with exciting opportunities to experience the design, development, testing, and outreach of immersive VR environments across several multi-disciplinary projects:
Transportation Planning and Testing
Public works in transportation are facilities authorized, financed, and opened to the public at large. Such facilities are pervasive and include highways, bridges, airports, bikeways, walkways, and mass transit. Because these facilities are often open around the clock and affect the basic health and welfare of many people, field experimentation can be difficult and even unsafe. The CAVE provides a safe and effective environment for transportation engineers conducting decision-making research that affects millions of people.
MU’s networked virtual environment, ZouSim (shown in Figure 1), can simulate driving, trucking, bicycling, wheeling, and walking, and has previously been used to investigate transportation facility design, operations, signing, markings, and technology. Thus, we can model how people travel day-to-day via different modes and make transportation decisions. We propose two research activities that utilize the combined capabilities of the CAVE and the ZouSim virtual world for dynamic decision making, involving:
1) Disaster evacuation to provide many disaster relief researchers and system planners with a more fundamental understanding of human behavior under stress; and
2) Work zone safety to model the interaction of workers with the vehicle, roadway, and the environment and investigate decision making near work zones.
Design Prototyping for Built Infrastructure
VR extends the critical role of computer-generated 3D representations by allowing designers to experience their ideas at full scale, making it easier to evaluate designs without mental translations of scale. The ongoing inter-disciplinary research at the Immersive Visualization Lab (iLab, founded in 2010) involving design of built environments has two complementary strands:
1) Psychological aspects of virtual reality experience to investigate how VR systems induce a ‘sense of presence’ and improve spatial understanding of users; and
2) Virtual prototyping of human-environment interactions to investigate complex human-environment interactions (e.g., in a surgery room) and performance assessment for environments under design.
Public Safety and Disaster Response
MizzouCAVE provides an ideal platform to simulate the complex scenarios that arise during natural and man-made disasters, where decisions must be made in real time by emergency and first responders. We propose to conduct research on two thrusts in this area:
1) 3D LIDAR-based Virtual Environment Study for Disaster Response Scenarios: Drawing from the fields of deep learning, computer graphics, computer vision, and 3D immersive visualization, we propose to create a decision-making system that takes in HD videos, streams the data to a server where a dynamic 3D model is constructed, and provides a virtual scene navigation program for viewing the videos in a 3D scene in the MizzouCAVE; and
2) Virtual Reality for Disaster Medical Triage Coordination: A CAVE-based application is needed to deliver visual situational awareness for medical triage coordination during disasters. The application will allow efficient orchestration of video feeds and dynamic tracking and replenishment of medical supply levels, along with patient and responder status, in order to prioritize triage coordination and improve triage care levels during disasters.
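The prioritization logic such an application needs can be sketched as a severity-ordered queue. This is a minimal illustration only: the START-style category names and the `TriageQueue` class are our assumptions, not components of the proposed system.

```python
import heapq
import itertools

# Illustrative START-style triage categories, most urgent first (assumption).
SEVERITY = {"immediate": 0, "delayed": 1, "minor": 2, "expectant": 3}

class TriageQueue:
    """Order patients by triage category, breaking ties by arrival order."""

    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()  # tie-breaker: first come, first served

    def add(self, patient_id, category):
        # Heap entries sort by (severity rank, arrival order), so the most
        # urgent and longest-waiting patient surfaces first.
        heapq.heappush(self._heap,
                       (SEVERITY[category], next(self._arrival), patient_id))

    def next_patient(self):
        """Return the ID of the most urgent waiting patient."""
        return heapq.heappop(self._heap)[2]
```

In a live system, patient status updates streamed from responders would re-enqueue patients as their categories change; the queue itself stays the same.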
Using MizzouCAVE, we will investigate advanced decision-making algorithms for real-time, high-fidelity, photorealistic point-based rendering, as well as deep learning-based robust image matching and camera calibration algorithms that scale to large datasets. The proposed CAVE and integrated MU HPC resources will help us evaluate application scalability issues in handling LIDAR models and sets of videos in a disaster situation, where: (i) many videos are collected remotely, (ii) sent to the server for registration, and (iii) viewed on the CAVE in real time for critical decision making.
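At the core of registering video frames to a LIDAR model is projecting the 3D points through a calibrated camera and sampling pixel colors. The sketch below illustrates that pinhole-projection step under stated assumptions (known intrinsics `K` and world-to-camera pose `R`, `t`); the function names are ours, not part of the proposed system.

```python
import numpy as np

def project_points(points_xyz, K, R, t):
    """Project Nx3 world points into the image plane of a calibrated camera.

    K: 3x3 intrinsic matrix; R (3x3), t (3,): world-to-camera transform.
    Returns Nx2 pixel coordinates and a mask of points in front of the camera.
    """
    cam = points_xyz @ R.T + t        # world -> camera coordinates
    in_front = cam[:, 2] > 0          # keep only points with positive depth
    proj = cam @ K.T                  # apply intrinsics
    px = proj[:, :2] / proj[:, 2:3]   # perspective divide
    return px, in_front

def colorize(points_xyz, frame, K, R, t):
    """Attach RGB values from one video frame to each visible LIDAR point."""
    h, w = frame.shape[:2]
    px, in_front = project_points(points_xyz, K, R, t)
    u = px[:, 0].round().astype(int)
    v = px[:, 1].round().astype(int)
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points_xyz), 3), dtype=frame.dtype)
    colors[visible] = frame[v[visible], u[visible]]
    return colors, visible
```

A production pipeline would add occlusion handling and per-frame pose estimates from the image-matching and calibration algorithms described above; this sketch shows only the geometric core.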
Computational Biology and Bioinformatics
CAVE systems have been used extensively in computational biology and bioinformatics, and several bioinformatics centers in the nation operate them. The bioscience research studies that will benefit from the CAVE environment are: 3D Genome Structure and Biological Networks, Protein Structure Prediction, Protein Binding Site Prediction, and Plant Phenotype Analytics and Retrievals. The planned research experiments with the CAVE system among the bioscience researchers include:
1) Enhancements to Understanding of the 3D Genome Structure and Biological Networks by using MizzouCAVE for immersive visualization of structure and network models and thus providing a richer visualization of very large 3D genome models consisting of millions of nucleotides or very large biological networks consisting of thousands of molecules than traditional computer monitors;
2) Enhancements to Understanding of the Protein Structure Binding Site and Docking Prediction by using MizzouCAVE to visualize the 3D receptor structure (including the active site) and the ligand binding mode, facilitating both the discovery of deeply-embedded binding grooves that are difficult to view on traditional displays and the comparison of binding-site changes due to amino acid mutations;
3) Enhancements to Understanding of Protein Structure Prediction by using MizzouCAVE to visualize predicted protein structures and modify them interactively, thus facilitating more effective applications in learning and research related to biological and biomedical studies; and
4) Enhancements to Understanding of Plant Phenotype Analytics and Retrievals by using MizzouCAVE to advance understanding of large-scale and complex visual phenotypes of plants.
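Interactively rendering genome models with millions of nucleotides (or networks with thousands of molecules) typically requires a level-of-detail reduction first. One common approach is voxel-grid downsampling, sketched below; this is our illustration of the general technique, not a method named in the proposal.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce a dense 3D model to one representative point per voxel.

    points: Nx3 array (e.g., bead coordinates of a 3D genome model).
    Returns the centroid of the points falling in each occupied voxel,
    so distant model regions can be drawn at coarser resolution.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```

Progressively smaller voxel sizes can then be streamed in as the viewer navigates closer to a region of interest in the CAVE.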
Figure 1. ZouSim multimodal simulator to be integrated with the CAVE.
Figure 2. Built environment prototyping.
Figure 3. Integrating real-time video with 3D LIDAR points. Top Left: 2D video frame; Bottom Left: video frame projected onto 3D LIDAR points without using our method; Right: 3D planes constructed for moving objects identified in the video using our method.
Figure 4. Interactive protein molecular structure navigation with Leap Motion and Unity3D.