The ReDigiTS workshop aims to attract high-quality submissions on the next generation of immersive experiences, presenting the latest ideas, methodologies, applications, evaluations, case studies, prototype implementations, preliminary results, and demos concerning 3D reconstruction, digital twinning, and simulation in VR and related technologies.
A non-exhaustive list of workshop topics is reported below. Multiple application domains can be envisaged, including industry, healthcare, cultural heritage, smart cities, and education. Moreover, VR-related technologies spanning the entire “virtuality continuum” are considered within the workshop’s scope.
Interactive simulations for immersive applications.
Methods and tools for virtual prototyping.
Telepresence systems for collaborative virtual tasks.
Algorithms and methods to simulate physical phenomena.
Physical 3D reconstructions for haptic experiences.
Virtual tours and navigation of reconstructed environments.
Distributed tools based on digital twinning and 3D reconstruction.
Presence, immersion, and user experience of immersive applications based on 3D reconstruction and simulation.
Human-machine interaction with reconstructed 3D contents and digital twins.
Testbed, protocols, and metrics for assessing the impact of 3D reconstructions, immersive simulations, and digital twins in VR experiences.
Technologies supporting 3D reconstruction, immersive simulation, and digital twinning.
Comparing immersive solutions that rely on 3D reconstructions, immersive simulations, and digital twinning.
Innovative solutions to overcome current technological limitations in the use of 3D reconstructions, immersive simulations, and digital twinning in immersive experiences.
Metaverse architectures and applications.
System design and evaluation for immersive Metaverse experiences.
ReDigiTS will be co-located with the IEEE Conference on Virtual Reality and 3D User Interfaces. The workshop will be held in hybrid mode on March 25, 2023.
Manuela Chessa (University of Genoa, Italy)
Title: Interacting in Extended Reality: perception and action from real to virtual
Abstract: Many interaction actions, such as grasping, picking up objects, walking, or sitting on a chair, are performed in everyday life without much effort or appreciable error. Visual information is essential in the first steps of a movement, e.g., when planning to grasp an object. Beyond everyday real-world experience, Virtual Reality (VR) systems are now widespread in many contexts, e.g., training, simulation, and digital twinning. Various forms of interaction are adopted to allow users to act inside virtual environments (VEs) and manipulate objects. Solutions allowing natural interaction, e.g., with bare hands, are still less robust than standard, controller-based ones. Many factors affect the grasping of virtual objects: errors and inconsistencies in tracking the users’ fingers, and thus in their replicas inside the VE; the lack of tactile and haptic feedback; and the absence of friction and weight. Extended Reality (XR) with passive haptics, i.e., the combination of VR and real-world elements, appears promising in this context. The main challenge is maintaining the alignment between the virtual and real reference frames so as to preserve the perceptual (visual) coherence of the XR environment. In XR, it is then possible to modify the visual appearance of real objects while preserving their physical properties, modulating and augmenting how they look. Both natural and supernatural situations can be simulated, enabling the creation of novel interactive systems and the study of the interplay between visual perception and grasping actions.
Bio: Manuela Chessa is an Associate Professor in Computer Science at the Dept. of Informatics, Bioengineering, Robotics, and Systems Engineering of the University of Genoa. Her research interests focus on developing natural human-machine interfaces based on virtual, augmented, and mixed reality; on the perceptual and cognitive aspects of interaction in VR and AR; on the development of bioinspired models; and on the study of biological and artificial vision systems. She studies the use of novel sensing and 3D tracking technologies and visualization devices to develop natural and ecological interaction systems, always with human perception in mind. Recently, she has addressed the coherent and natural combination of virtual reality and the real world to obtain robust and effective extended reality systems. She has been Program Chair of the HUCAPP International Conference on Human-Computer Interaction Theory and Applications, chair of the BMVA Technical Meeting “Vision for human-computer interaction and virtual reality systems,” lecturer of the tutorial “Natural Human-Computer-Interaction in Virtual and Augmented Reality” at VISIGRAPP 2017, and lecturer of the tutorial “Active Vision and Human Robot Collaboration” at ICIAP 2017 and ICVS 2019. She organized the first four editions of the tutorial CAIVARS at ISMAR 2018, 2020, 2021, and 2022. She is the author of more than 85 papers in international book chapters, journals, and conference proceedings, and co-inventor of 3 patents.
Filippo Gabriele Pratticò, VR@POLITO, Department of Control and Computer Engineering (DAUIN), Politecnico di Torino, Turin, Italy (Co-Proposer)
Alberto Cannavò, Department of Control and Computer Engineering (DAUIN), Politecnico di Torino, Turin, Italy (Co-Proposer)
Bill Kapralos, Game Development and Interactive Media program and maxSIMhealth Lab, Ontario Tech University, Oshawa, Canada
Sofia Seinfeld, Image Processing and Multimedia Technology Center, Universitat Politècnica de Catalunya-Barcelona Tech, Terrassa, Spain
Congyi Zhang, Department of Computer Science, the University of Hong Kong, Hong Kong