Workshop Program

The "3D Reconstruction, Digital Twinning, and Simulation for Virtual Experiences" (ReDigiTS-2024) workshop will be held on Sunday, March 17, 2024, 10:30am - 12:00pm Orlando (FL, USA) time (UTC-4).

Room: Fantasia A

10:30am - 10:45am

Opening 

10:45am - 11:45am

Session 1

Chair: Alberto Cannavò, Politecnico di Torino, Italy

OS-NeRF: Generalizable Novel View Synthesis for Occluded Open-Surgical Scenes

Mana Masuda, Graduate School of Science and Technology, Keio University, Yokohama, Kanagawa, Japan

Ryo Hachiuma, Graduate School of Science and Technology, Keio University, Yokohama, Japan

Hideo Saito, Graduate School of Science and Technology, Keio University, Yokohama, Kanagawa, Japan

Hiroki Kajita, Keio University School of Medicine, Tokyo, Japan

Yoshifumi Takatsume, School of Medicine, Keio University, Kanagawa, Japan

Abstract

Our approach utilizes a learning framework combined with NeRF representation to render novel views from limited source images in medical environments, specifically open surgical procedures. By incorporating automatic occlusion mask estimation to filter obscured pixels during training, we overcome challenges posed by occlusions and camera constraints in operating rooms. We validate our approach through comparisons using real-world surgical videos and demonstrate improved rendering quality and practical effectiveness in handling occlusions. 

Evaluation of 3D modeling techniques for the blending of real and virtual environments

Marianna Pizzo, University of Genoa, Genoa, Italy

Eros Viola, University of Genoa, Genoa, Italy

Fabio Solari, University of Genoa, Genoa, Italy

Manuela Chessa, University of Genoa, Genoa, Italy

Abstract

Interaction in immersive VR can be enhanced by merging physical objects with virtual stimuli, thus creating extended reality scenarios. Effective solutions rely on the correct detection and tracking of the 6DOF pose of the objects and the accurate 3D reconstruction of the physical objects. Here, we focus on the latter aspect, analyzing different 3D reconstruction techniques to create and segment the meshes necessary to merge the physical objects with virtual counterparts. We consider six 3D reconstruction software packages and six different furnished rooms, analyzing the time necessary to create the 3D mesh from scratch, the reconstruction success rate, and the percentage of volume shift between the real objects and the reconstructed ones.

Digital Twin in Retail: An AI-Driven Multi-Modal Approach for Real-Time Product Recognition and 3D Store Reconstruction

Jingya Liu, Store Nº8, Redmond, Washington, United States

Issac Huang, Store Nº8, Redmond, Washington, United States

Aishwarya Anand, Store Nº8, Redmond, Washington, United States

Po-Hao Chang, Store Nº8, Redmond, Washington, United States

Yufang Huang, Store Nº8, Redmond, Washington, United States

Abstract

In large-scale retail environments, the use of a digital twin - a virtual model of a physical store - can enhance efficiency and decision-making. However, creating and maintaining these digital twins is labor-intensive due to manual data entry, product tracking, and discrepancies between blueprints and actual store layouts. This paper proposes an AI-driven solution for automated data collection and updating for digital twinning. The framework enables real-time product recognition and location detection using head-mounted displays and smartphones. An AR feature allows for immediate data verification. The system's effectiveness was confirmed with an iOS wayfinding app, achieving high user satisfaction and accuracy. 

Towards Adaptive AR Interfaces for Passengers of Autonomous Urban Air Mobility Vehicles: Analyzing the Impact of Flight Phases and Visibility Conditions on User Experience Through Simulation

Lorenzo Valente, Politecnico di Torino, Turin, Italy 

Filippo Gabriele Pratticò, VR@POLITO, Politecnico di Torino, Turin, Italy 

Marco Nobile, Politecnico di Torino, Turin, Italy 

Fabrizio Lamberti, Politecnico di Torino, Turin, Italy 

Abstract

This work compares four possible designs of Augmented Reality (AR) interfaces for passengers of an Autonomous Aerial Vehicle (AAV) envisioned as an air taxi in the Urban Air Mobility (UAM) context. The four designs were evaluated and compared through a video-based study considering two potentially influential factors: flight phase (namely takeoff, cruise, and landing) and visibility conditions (i.e., clear daylight, night, and fog). Dimensions included in the analysis were perceived safety, anxiety, situational awareness, cognitive workload, trust, predictability, and preference. The results showed that the interface preferred by passengers may vary depending on the combination of the two factors considered.

11:45am - 12:00pm

Closure

All times are displayed in Orlando (FL, USA) time (UTC-4).