4th Annual Workshop on 3D Content Creation for Simulated Training in eXtended Reality

(TrainingXR)

Date: March 24 (Friday), 2023, 8:30pm-11:30pm Eastern Daylight Time (Virtual Online Workshop)

March 25 (Saturday), 2023, 8:30am-11:30am Shanghai time

Overview:

This workshop discusses and articulates research visions on using the latest extended reality (VR/AR/MR) technologies for education and training, and on creating immersive 3D virtual content that delivers effective, personalized training experiences. It will gather researchers and practitioners from a variety of computing disciplines related to XR training and content creation, and will accept research papers on these topics. We will also invite renowned speakers from the research community and industry to give talks on XR-based training, inspiring the field to further explore this promising direction.

Schedule: (note: all times are in EDT)

Introduction by the Organizers (8:30pm - 8:40pm)

Keynote Talk by George Papagiannakis (8:45pm - 9:30pm)

"From Low-Code Geometric Algebra to No-Code Geometric Deep Learning: Computational Models, Simulation Algorithms, and Authoring Platforms for Immersive Scientific Visualization, Experiential Visual Analytics, and the Upcoming Educational Metaverse"

BREAK (9:30pm - 9:40pm)

Keynote Talk by Karthik Ramani (9:40pm - 10:25pm)

"Augmenting Human Skills - Competency, Training, and Personalized Learning @Scale"

BREAK (10:25pm - 10:30pm)

Workshop Papers Presentation Session (10:30pm - 11:30pm)

Keynote Speakers:

Karthik Ramani

Purdue University

Talk: Augmenting Human Skills - Competency, Training, and Personalized Learning @Scale

Abstract:

The convergence of many factors has, for the first time in human history, resulted in a machine that shares our viewpoint, context, and embodied interactions moment to moment. In this talk, I will describe three themes that demonstrate the potential to augment human experiences for skills training and learning. The resulting embodied applications can have a vast impact across many areas, such as manufacturing productivity, immersive educational experiences, hands-on remote learning, and surgery.

First, I will discuss how better “skills-understanding” can be achieved by making skills more visible through computer vision and artificial intelligence (AI), and I will demonstrate the key role of computer vision for XR in embodied interactions. GesturAR enables users to author new types of interactions with objects and surroundings by mapping actions to intentions; CaptuAR enables human-AI augmentation through AR by capturing interactions via hand-and-object annotation; and AdaptutAR, an adaptive AI-based AR avatar for a hands-on tutoring system, will also be shown.

Second, I will describe several AR “skills-authoring” systems: EditAR, a unified digital-twin authoring and editing environment that creates XR content from a single demonstration; ProcessAR, an authoring tool for creating asynchronous procedural AR-VR instructions; CaptuAR, for creating context-aware AR; and InstruMentAR, a system that automatically generates AR tutorials by recording user demonstrations.

Third, I will present frameworks for hands-on “skills-learning”: a smartphone-based robot that facilitates AR-based remote hands-on learning and remote tangible augmented laboratories; a VR-based trainer that uses visuo-haptic interfaces to scale up welding training; MechAR, a collection of sensing and actuation modules that enables physical toys to interact bidirectionally with AR; and LearnIoTVR, an end-to-end VR environment providing authentic learning experiences for the Internet of Things.

With such possibilities, human work can become more productive and agile: cognitively intuitive, spatially aware interfaces can increase human capacity, aid workforce reskilling programs, and improve labor and factory productivity. The talk concludes by summarizing how, in the long term, the bigger gains come from complementing, not replacing, humans, making it possible to create value in new ways.

George Papagiannakis

University of Crete

ORamaVR

Talk: From Low-Code Geometric Algebra to No-Code Geometric Deep Learning: Computational Models, Simulation Algorithms, and Authoring Platforms for Immersive Scientific Visualization, Experiential Visual Analytics, and the Upcoming Educational Metaverse

Abstract:

More than 1 billion jobs, almost one-third of all jobs worldwide, are likely to be transformed by technology in the next decade, according to OECD and World Economic Forum estimates. In addition, 5 billion people today lack access to proper surgical and anesthesia care, due to the limited number of health professionals entering the workforce, as a direct result of the lack of innovation in medical training over the last 150 years. 

This growing need for continuous upskilling and reskilling becomes even more critical in the post-COVID-19 pandemic era. Extended Reality (XR), together with 5G spatial computing enabling technologies, can serve as the next frontier for psychomotor/cognitive training and educational content creation. XR can provide the means for qualitative hands-on education (knowledge) and training (skills), using affordable technology with on-demand, immersive scientific visualization techniques coupled with personalized, experiential visual analytics. As expectations for the upcoming educational metaverse rise, this talk reviews the fundamental analytic and neural geometric computational models that power the latest low-code and no-code content creation tools. Geometric algebra-based character animation and rendering algorithms, Entity-Component-System scene graphs, and graph neural networks are poised to make the difference in the evolution of experiential educational metaverse applications.
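For readers unfamiliar with the Entity-Component-System (ECS) pattern mentioned above, the minimal Python sketch below illustrates the core idea: entities are plain IDs, data lives in components, and behavior lives in systems that query component stores. This is a generic illustration under assumed names (World, Position, Velocity, move_system), not the architecture of any system discussed in the talk.

    # Minimal illustrative ECS sketch (hypothetical names, not ORamaVR code).
    from dataclasses import dataclass

    @dataclass
    class Position:      # component: where an entity is
        x: float
        y: float

    @dataclass
    class Velocity:      # component: how an entity moves
        dx: float
        dy: float

    class World:
        """Entities are integer IDs; components live in per-type stores."""
        def __init__(self):
            self.next_id = 0
            self.components = {}  # component type -> {entity id -> instance}

        def create_entity(self, *components):
            eid = self.next_id
            self.next_id += 1
            for c in components:
                self.components.setdefault(type(c), {})[eid] = c
            return eid

    def move_system(world, dt):
        """System: updates every entity that has both components."""
        positions = world.components.get(Position, {})
        velocities = world.components.get(Velocity, {})
        for eid, vel in velocities.items():
            pos = positions.get(eid)
            if pos is not None:
                pos.x += vel.dx * dt
                pos.y += vel.dy * dt

    world = World()
    avatar = world.create_entity(Position(0.0, 0.0), Velocity(1.0, 0.5))
    move_system(world, dt=0.016)  # one simulated frame at ~60 Hz
    print(world.components[Position][avatar])  # Position(x=0.016, y=0.008)

Because systems operate over whatever entities happen to hold the required components, new behaviors can be added without touching existing entity definitions, which is part of what makes ECS attractive for authoring tools.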

Organizers:

Lap-Fai (Craig) Yu

George Mason University

Christos Mousas

Purdue University

Ryan McMahan

University of Central Florida

Konstantinos Koumaditis

Aarhus University

Call for Papers: 

This workshop aims to bring together researchers and experts in AR/VR, computer graphics, computer vision, robotics, and artificial intelligence, to discuss the research challenges in creating virtual training experiences to be delivered through state-of-the-art VR/AR/MR technologies. Specific topics of interest include, but are not limited to:

● 3D content authoring for XR training

● procedural modeling of virtual environments

● affordance analysis and physics-based reasoning of 3D scenes and objects

● cognitive, perceptual and behavioral modeling of virtual humans

● virtual human interaction and human perception

● collaborative and networked virtual training environments

● crowd simulation for VR training

● sound simulation for VR training

● physics simulation for VR training

● serious games in XR

● instructional design and personalization for XR training

● case studies of applying VR/AR to training and education

● haptics for XR training

Submissions: Project and research papers (4-8 pages, VGTC format), submitted via PCS.

Submission Link: Submit your paper via the PCS submission page (choose Society="VR", Conference/Journal="IEEE VR 2023", Track="IEEE VR 2023 Workshop: 3D Content Creation for Sim. Training (TrainingXR)")

All accepted papers will be included in the IEEE Xplore Digital Library.

Important Dates: 

Submission deadline: January 12, 2023

Result notification: January 20, 2023

Camera-ready submission: January 27, 2023