The recent digital revolution has created new opportunities for visualizing, communicating, and disseminating data. Thanks to the ubiquity of smartphones, much of the population already carries a GPS-enabled computer in their pocket. These devices can be leveraged to provide location-based simulations and digital content that is interactive and easily scalable. Our next-generation visualization approaches will provide the foundation for Thrust 2 – visualizing the implications of SLR for both specific and general audiences.
SLR mobile app: This deliverable will be an all-in-one design, with the various VR/AR experiences available in a single app and with access and features determined by the user group/stakeholder type at login. These experiences will translate our quantitative information and modeling into a user-friendly and engaging narrative, fostering self-driven discovery and continued exploration of the app. Potential future integration with the SLR board game would add the benefits of gamification, increasing user engagement and the fulfillment of learning objectives.
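As a rough illustration of the login-gated, all-in-one design, the following Unity C# sketch shows one way a single app could enable or disable experiences per stakeholder group. The UserGroup values and the ExperienceGate component are hypothetical placeholders, not a finalized design.

```csharp
// Minimal sketch of per-group feature gating after login (illustrative only).
// The group names and gating rules below are placeholders, not project decisions.
using UnityEngine;

public enum UserGroup { GeneralPublic, Student, Planner, Researcher }

public class ExperienceGate : MonoBehaviour
{
    [SerializeField] private UserGroup[] allowedGroups;   // configured per experience in the Inspector

    // Called once the login flow has resolved the user's stakeholder group.
    public void ApplyAccess(UserGroup current)
    {
        bool allowed = System.Array.IndexOf(allowedGroups, current) >= 0;
        gameObject.SetActive(allowed);   // hide experiences this group cannot access
    }
}
```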
The app interface and experiences will be built with the Unity game engine and the Vuforia SDK, with 3D models and animations created in Autodesk Maya. Other digital assets will be derived from various data sources and methodologies: geographic data will include high-resolution Digital Elevation Models (DEMs) produced with the ArcMap software suite, Google Maps, and photogrammetric 3D reconstruction as needed.
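To make concrete how DEM-derived assets might feed the Unity scenes, the sketch below resamples a grayscale heightmap exported from a DEM onto a Unity Terrain. The asset name and vertical range are illustrative assumptions rather than project specifications.

```csharp
// Illustrative sketch: converting an exported DEM (saved as a grayscale heightmap
// texture) into a Unity Terrain. In practice the DEM would be exported from ArcMap
// at project-specific resolution; the values here are placeholders.
using UnityEngine;

public class DemTerrainLoader : MonoBehaviour
{
    [SerializeField] private Texture2D demHeightmap;          // grayscale DEM export (Read/Write enabled)
    [SerializeField] private Terrain terrain;
    [SerializeField] private float maxElevationMeters = 60f;  // assumed vertical range of the DEM

    void Start()
    {
        TerrainData data = terrain.terrainData;
        int res = data.heightmapResolution;
        float[,] heights = new float[res, res];

        // Resample the DEM texture onto the terrain heightmap grid (normalized 0..1 values).
        for (int y = 0; y < res; y++)
            for (int x = 0; x < res; x++)
                heights[y, x] = demHeightmap.GetPixelBilinear((float)x / res, (float)y / res).grayscale;

        data.size = new Vector3(data.size.x, maxElevationMeters, data.size.z);
        data.SetHeights(0, 0, heights);
    }
}
```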
Our app will be deployed through the Apple and Android app stores, with the possibility of a web-based version deployed using WebGL (JavaScript) on our project site. Proof-of-concept Geographic Information System (GIS) maps and a DEM model of SLR projections for Tampa Bay in the year 2100 (see Figure in Introduction) are shown in the Figure below.
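The optional WebGL build could be produced from the same Unity project with an editor script along the following lines; the scene list and output path are placeholders.

```csharp
// Illustrative Unity editor script for producing the optional WebGL build of the app
// alongside the mobile builds. Scene names and the output folder are placeholders.
using UnityEditor;

public static class WebGLBuild
{
    [MenuItem("Build/WebGL (project site)")]
    public static void Build()
    {
        string[] scenes = { "Assets/Scenes/Main.unity" };   // placeholder scene list
        BuildPipeline.BuildPlayer(scenes, "Builds/WebGL", BuildTarget.WebGL, BuildOptions.None);
    }
}
```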
Virtual and Augmented Reality (VR/AR): The VR component of the app will rely on the inexpensive and widely available Google Cardboard (see Figure below). We will custom-print a number of these headsets to carry our project website, a QR code, and project/University/NSF logos. Users will be able to place their smartphone inside the headset and interact with the SLR app and visualizations as a VR experience, such as an engaging virtual fly-through of the Tampa Bay area. Because web-based AR is still in its infancy, we will deliver AR solely through stand-alone apps on two hardware platforms: consumer smartphones and the high-end Microsoft HoloLens (see Figure below). On smartphones, physical image targets, such as our project logo or business card, will trigger a fixed AR rendering through our app interface. This rendering could also be tied to the user's location, for example providing a model simulation of their geographic area with a time slider that renders the projected water levels. The HoloLens, in turn, offers immersive AR: a self-contained headset that will visualize our simulations and assets as interactive holograms. Stakeholders will be able to wear the device and navigate through content or virtual time using hand gestures, for example overlaying a digital rendering of SLR onto their immediate surroundings in a similar way.
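The "virtual time" interaction described above could be prototyped along the lines of the following Unity C# sketch, which maps a normalized slider value (or an equivalent gesture input on the HoloLens) to the elevation of a water-plane object overlaid on the scene. The year range, water levels, and linear interpolation are illustrative assumptions and do not represent the project's actual SLR model.

```csharp
// Minimal sketch of a time slider that scrubs the SLR projection: a normalized slider
// value (0 = present day, 1 = year 2100) raises a water-plane object in the AR scene.
// The sea-level values and linear interpolation are placeholders for the real model output.
using UnityEngine;
using UnityEngine.UI;

public class SlrTimeSlider : MonoBehaviour
{
    [SerializeField] private Slider yearSlider;           // UI slider or gesture-driven value
    [SerializeField] private Transform waterPlane;        // flat water mesh overlaid on the scene
    [SerializeField] private float baseWaterLevel = 0f;   // placeholder: present-day level (m)
    [SerializeField] private float level2100 = 2.0f;      // placeholder: projected 2100 level (m)

    void Start()
    {
        yearSlider.onValueChanged.AddListener(SetProjection);
    }

    // Move the water plane to the interpolated level for the selected point in virtual time.
    void SetProjection(float t)
    {
        float level = Mathf.Lerp(baseWaterLevel, level2100, t);
        Vector3 p = waterPlane.position;
        waterPlane.position = new Vector3(p.x, level, p.z);
    }
}
```

The same SetProjection method could be driven by HoloLens hand-gesture input instead of a touch slider, keeping the interaction logic shared across the smartphone and headset experiences.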