Keynote Lectures

Keynote Lecture I

Lessons Learned in Near-Field Interactions with Virtual Humans, Affordance and Perception-Action, and 3D Interaction for Training and Education in MR

Professor Sabarish Babu

Clemson University, USA

In this keynote, I will first discuss lessons learned from a body of research on emotion contagion and visual attention in human-virtual human interaction. In a virtual human simulation designed to educate nurses in recognizing the signs and symptoms of rapid deterioration, we investigated the effects of animation, appearance, and interaction fidelity on the emotional reactions and visual attention behaviors of trainees in simulated dyadic and crowd scenarios. Our results and lessons learned have implications for the design of virtual humans in interpersonal simulations for personal space education. Next, I will discuss a body of work investigating the dimensional symmetry and interaction fidelity continuum in near-field fine motor skills training in VR for technical skills education in the aviation and automotive curricula. We designed and evaluated the effects of interaction, dimensional, and system fidelity in near-field virtual reality simulations for motor skills training and education, in domains such as precision metrology and mechanical skills acquisition. Finally, I will discuss some of our key findings on static and dynamic affordances in VR and MR, as well as perception-action coordination research with implications for near- and medium-field psychomotor skills training and education. I will end the talk by summarizing our contributions and highlighting the key takeaways and recommendations for the design of virtual human and mixed reality simulations for near-field fine motor skills training and interpersonal skills education.

Speaker Biography

Sabarish “Sab” Babu is an Associate Professor in the Division of Human Centered Computing in the School of Computing at Clemson University in the USA. He received his BS (2000), MS (2002), and PhD (2007) degrees from the University of North Carolina at Charlotte, and completed a Post-Doctoral Fellowship in the Department of Computer Science at the University of Iowa prior to joining Clemson University in 2010. His research interests are in the areas of virtual environments, applied perception in VR/AR, virtual humans/crowds, educational virtual reality, and 3D human-computer interaction. He has authored or co-authored over 120 peer-reviewed journal and conference publications in these areas of research. He was the General Chair of the IEEE International Conference on Virtual Reality and 3D User Interfaces (IEEE VR) 2016 and also served as a Program Chair for IEEE VR 2017. He has served as a guest editor of IEEE TVCG and is on the editorial board of journals including MDPI Virtual Worlds. He and his students have received 8 Best Paper Awards at top IEEE and ACM research venues, including the IEEE International Conference on Virtual Reality (2018, 2023), the IEEE International Conference on 3D User Interfaces (2007, 2016), the ACM Symposium on Applied Perception (2016, 2020, 2022), and the IEEE International Conference on Healthcare Informatics (2013), as well as a Best Presentation Award (ACM SAP 2021) and several honorable mentions for best paper awards. His research has been sponsored by the US National Science Foundation, US Department of Labor, Adobe Research Foundation, Bon Secours Healthcare Foundation, Prisma Health Foundation, and Medline Medical Foundation.

Keynote Lecture II

Differentiable Visual Computing

Professor Tzu-Mao Li (李子懋)

University of California, San Diego, USA

While neural networks have become powerful tools for processing visual data, their generality raises several challenges. First, most modern architectures work in 2D, and it is difficult to embed 3D knowledge. Second, neural networks are by design over-parameterized, with millions or billions of parameters, which makes it challenging to run them fast on high-resolution images and videos on mobile devices. Finally, neural networks are difficult to debug and control, as their behaviors are mostly governed by their parameters and the training data. On the other hand, classical visual computing algorithms that explicitly model the computation are less affected by these issues, yet they often do not apply as broadly as modern data-driven methods. A major focus of our research is to connect classical graphics algorithms with modern data-driven methods by making graphics algorithms differentiable, enabling optimization and inference. Making graphics algorithms differentiable leads to new challenges: How do we derive the correct derivatives in the first place, when discontinuities and boundary conditions are involved? How do we compute the derivatives efficiently? How do we build systems that make the derivation and implementation of differentiation easier? I will talk about our recent efforts to address these challenges. These efforts include contributions in the fields of forward/inverse rendering, image processing, physical simulation, and programming languages/systems.
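To make the core idea of the abstract concrete, the following is a minimal toy sketch (not code from the talk or from the speaker's systems): a trivially simple image-formation model is written as a differentiable JAX function, and automatic differentiation plus gradient descent recovers its parameters from an observation. The render and loss functions, the soft-disk model, and all parameter values are invented for illustration; the smooth sigmoid edge is one simple way to avoid the visibility discontinuities the abstract highlights.

    import jax
    import jax.numpy as jnp

    def render(params, size=32):
        # Toy "renderer": a soft disk with learnable center (cx, cy) and radius r.
        # The smooth sigmoid edge keeps the image differentiable in the parameters;
        # a hard inside/outside test would introduce a discontinuity.
        # The small epsilon keeps the sqrt gradient finite when a pixel
        # coincides exactly with the center.
        cx, cy, r = params
        ys, xs = jnp.mgrid[0:size, 0:size]
        dist = jnp.sqrt((xs - cx) ** 2 + (ys - cy) ** 2 + 1e-6)
        return jax.nn.sigmoid(4.0 * (r - dist))

    def loss(params, target):
        # Pixel-wise squared error between the rendered image and the observation.
        return jnp.mean((render(params) - target) ** 2)

    # Synthetic observation rendered with "true" parameters we pretend not to know.
    target = render(jnp.array([20.0, 12.0, 6.0]))

    # Gradient-based inverse rendering from a rough initial guess.
    params = jnp.array([16.0, 16.0, 10.0])
    grad_fn = jax.jit(jax.grad(loss))
    for _ in range(200):
        params = params - 20.0 * grad_fn(params, target)

    print(params)  # should move toward [20, 12, 6]

The same pattern, differentiating a forward model and optimizing its inputs against observed data, is what becomes challenging at scale, where real renderers and simulators contain genuine discontinuities and boundary conditions rather than a hand-smoothed edge.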

Speaker Biography

Tzu-Mao Li is an assistant professor in the CSE department at the University of California, San Diego, and a member of the Center for Visual Computing at UCSD. His research explores the connections between visual computing algorithms and modern data-driven methods, and develops programming languages and systems to facilitate this exploration. He did a two-year postdoc with Jonathan Ragan-Kelley at both MIT CSAIL and UC Berkeley, and completed his Ph.D. in the computer graphics group at MIT CSAIL, advised by Frédo Durand. He received his B.S. and M.S. degrees in computer science and information engineering from National Taiwan University in 2011 and 2013, respectively, where he worked with Yung-Yu Chuang at the Communication and Multimedia Lab. His Ph.D. thesis, "Differentiable Visual Computing," received the ACM SIGGRAPH 2020 Outstanding Doctoral Dissertation Award. He also received the NSF CAREER Award in 2023.