Speakers

Prof. Manuela Chessa

Perceiving and Interacting in Extended Reality

Various forms of interaction are used to act inside virtual environments and to manipulate objects. Nevertheless, many factors affect interaction with virtual objects: errors and inconsistencies in tracking, the lack of tactile and haptic feedback, and the absence of friction and weight. In this context, Mixed Reality, i.e., the combination of VR and real-world elements, is both challenging and promising.

Manuela Chessa is an Assistant Professor in Computer Science at the Dept. of Informatics, Bioengineering, Robotics, and Systems Engineering of the University of Genoa, Italy. Her research interests focus on the development of natural human-machine interfaces based on virtual, augmented, and mixed reality, on the perceptual and cognitive aspects of interaction in VR and AR, on the development of bioinspired models, and on the study of biological and artificial vision systems. She studies the use of novel sensing and 3D tracking technologies and of visualization devices to develop natural and ecological interaction systems, always keeping human perception in mind. Recently, she has addressed the coherent and natural combination of virtual and real worlds to obtain robust and effective extended reality systems. She has been Program Chair of the HUCAPP International Conference on Human Computer Interaction Theory and Applications, chair of the BMVA Technical Meeting – Vision for human-computer interaction and virtual reality systems, lecturer of the tutorial Natural Human-Computer-Interaction in Virtual and Augmented Reality at VISIGRAPP 2017, and lecturer of the tutorial Active Vision and Human Robot Collaboration at ICIAP 2017 and ICVS 2019. She organized the first four editions of the tutorial CAIVARS at ISMAR 2018, 2020, 2021, and 2022. She is the author of more than 85 papers in international book chapters, journals, and conference proceedings, and co-inventor of 3 patents.

Dr. Jeffrey Delmerico

Spatial Intelligence and the Industrial Metaverse

In the factories, warehouses, and construction sites of the near future, front-line workers and robots will perform tasks side by side, as well as collaboratively. To enable these types of scenarios in industrial environments, both humans and robots will require large-scale mapping and localization capabilities. This talk will explore some of the ongoing research at Microsoft into providing this type of spatial intelligence to both human and robot agents through Mixed Reality.

Jeff Delmerico is a robotics research scientist with the Microsoft Mixed Reality and AI Lab in Zürich, Switzerland. His work is focused on bringing robots into Microsoft's Mixed Reality ecosystem, and enabling human-robot interaction and teaming through MR. He received his PhD in Computer Science from SUNY Buffalo in 2013, and previously worked as a postdoctoral researcher at the University of Zurich and the University of Hawaii at Manoa.

Prof. Mahdi Tavakoli

Improving User Situational Awareness and Performance in Surgical and Rehabilitation Settings via Augmented-Reality Displays

Mahdi Tavakoli is a Professor in the Dept. of Electrical and Computer Engineering at the University of Alberta, Canada. He received his BSc and MSc degrees in Electrical Engineering from Ferdowsi University and K.N. Toosi University, Iran, in 1996 and 1999, respectively. He received his PhD degree in Electrical and Computer Engineering from the University of Western Ontario, Canada, in 2005. In 2006, he was a post-doctoral researcher at Canadian Surgical Technologies and Advanced Robotics (CSTAR), Canada. In 2007-2008, he was an NSERC Post-Doctoral Fellow at Harvard University, USA. Dr. Tavakoli's research interests broadly involve the areas of robotics and systems control. Specifically, his research focuses on haptics and teleoperation control, medical robotics, and image-guided surgery. Dr. Tavakoli is the lead author of Haptics for Teleoperated Surgical Robotic Systems (World Scientific, 2008). He is a Senior Member of IEEE and an Associate Editor for IEEE Robotics and Automation Letters, the IEEE/ASME Transactions on Mechatronics Focused Section on Advanced Intelligent Mechatronics, the Journal of Medical Robotics Research, IET Control Theory & Applications, and Mechatronics.

Prof. Steven LaValle

From XR to Perception Engineering

Virtual reality (VR) technology has enormous potential to transform society by creating perceptual illusions that can uniquely enhance education, collaborative design, health care, and social interaction, all from a distance. Further benefits include highly immersive computer interfaces, data visualization, and storytelling. We propose in our research that VR and related fields can be reframed as perception engineering, in which the object being engineered is the perceptual illusion itself, and the physical devices that achieve it are auxiliary. This talk will report on our progress toward developing mathematical foundations that attempt to bring the human-centered sciences of perceptual psychology, neuroscience, and physiology closer to core engineering principles by viewing the design and delivery of illusions as a coupled dynamical system. The system is composed of two interacting entities: the organism and its environment, where the former may be biological or even an engineered robot. Our vision is that the research community will one day have principled engineering approaches to design, simulation, prediction, and analysis of sustained, targeted perceptual experiences. It is hoped that this direction of research will offer valuable guidance and deeper insights into VR, robotics, and possibly the sciences that study perception.
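
As a purely illustrative shorthand for the coupled-dynamical-system view (the notation below is our own sketch, not the formulation presented in the talk), one can think of the organism and its environment as two interacting state-transition systems:

    \begin{aligned}
    x_{t+1} &= f(x_t, s_t) && \text{organism's internal state, driven by sensory input } s_t \\
    e_{t+1} &= g(e_t, u_t) && \text{environment or XR-device state, driven by motor output } u_t \\
    s_t     &= h(e_t)      && \text{sensory mapping: the stimuli delivered to the organism} \\
    u_t     &= \pi(x_t)    && \text{motor mapping: how the organism acts}
    \end{aligned}

In this reading, engineering a perceptual illusion amounts to designing g and h (the device and the stimuli it delivers) so that the organism's state trajectory matches the one it would follow in some target environment.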

Steven M. LaValle is Professor of Computer Science and Engineering, in Particular Robotics and Virtual Reality, at the University of Oulu. From 2001 to 2018, he was a professor in the Department of Computer Science at the University of Illinois. He has also held positions at Stanford University and Iowa State University. His research interests include robotics, virtual and augmented reality, sensing, planning algorithms, computational geometry, and control theory. In research, he is best known for introducing the Rapidly-exploring Random Tree (RRT) algorithm, which is widely used in robotics and other engineering fields. In industry, he was an early founder and chief scientist of Oculus VR, acquired by Facebook in 2014, where he developed patented tracking technology for consumer virtual reality and led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration, health and safety, and the design of comfortable user experiences. From 2016 to 2017, he was Vice President and Chief Scientist of VR/AR/MR at Huawei Technologies, Ltd. He has authored the books Planning Algorithms, Sensing and Filtering, and Virtual Reality.

Prof. Lik-Hang Lee

Building User-centric Immersive Cities in the Metaverse Era

Designing immersive environments in city-wide urban scenarios has received rising attention since the metaverse went viral in 2021. Mobile Augmented Reality (MAR) enables users to receive highly diversified services and valuable information in urban areas. Nevertheless, augmenting our urban environments requires significant user-centric design effort in a blended virtual-physical reality. This talk will discuss several recent studies to elaborate on an immersive human-city interaction framework, with recent examples of enhancing interactivity bandwidth with drones. Finally, the talk will conclude with future directions for MAR in the metaverse era.

Lik-Hang Lee is an Assistant Professor (tenure-track) with the Korea Advanced Institute of Science and Technology (KAIST), South Korea, and heads the Augmented Reality and Media Laboratory, KAIST. He received a PhD degree from SyMLab, Hong Kong University of Science and Technology, and the Bachelor's and M.Phil. degrees from the University of Hong Kong. He has built and designed various human-centric computing technologies specializing in augmented and virtual realities (AR/VR).

Dr. Chang Liu

Advanced 3D Graphics for VR-based Robotic Teleoperation

This talk describes how Extend Robotics combines cutting-edge real-time volumetric telepresence with an interactive digital twin to achieve intuitive robotic teleoperation, allowing users to immersively visualise the remote workspace in 3D while controlling the robot with natural gestures. This technology unlocks the next-generation human-robot interface, utilising only consumer VR equipment such as the Meta Quest to operate a wide range of commercial robotic systems.

Chang Liu is an entrepreneur with experience in academia and high-tech start-ups. He is the founder, CEO, and Chief Designer of Extend Robotics. He was previously a research associate at Imperial College London and the University of Southampton, working on autonomous aerial robot navigation.


Twitter: @extend_robotics

LinkedIn: https://www.linkedin.com/company/extend-robotics

Twitter: @enjoychang

LinkedIn: https://www.linkedin.com/in/thechangliu/

Dr. Toshiya Nakakura

Development and deployment of Telepresence systems in data centers

NTT Communications and Tokyo Robotics have been continuously researching and developing telepresence robots and are now in the process of commercializing them. Our system supports immersive operation in VR and provides force feedback. We are developing it first for use in data center operations, and we have successfully demonstrated the opening and closing of data center racks and LAN cable wiring. This year, we are starting to offer our commercial system to some companies. In this talk, I will present examples of these use cases.

Toshiya Nakakura is a Senior Researcher at NTT Communications. He is also a researcher at Keio University, where he led the team Synapse in the ANA AVATAR XPRIZE. His PhD thesis was on communication technologies for Telexistence, and the company's current commercial developments are based on that research. His system is built on WebRTC, a web standard technology, reflecting his desire to make the world better by popularizing technologies widely rather than enclosing them; he also contributes to standardization activities, such as introducing robotics use cases at TPAC, a web standardization conference.

Prof. Mark Minor

Extended Reality and its Applications in the Multi-Sensory TreadPort Active Wind-Tunnel Virtual Reality Locomotion Interface


In this presentation we will discuss augmentation of the TreadPort to create an immersive world where users can experience environmental sensing, interact with terrain, and operate real-world objects as they walk through the virtual world. The TreadPort is a cave automatic virtual environment (CAVE) where graphics are projected on the walls and floor; a large treadmill allows locomotion through the VR world. Various tethers provide haptic feedback that allows the user to experience inertial forces, terrain slope, body weight support, and impacts with objects in the virtual world. A wind tunnel constructed around the system to create the TreadPort Active Wind Tunnel (TPAWT) provides an atmospheric display including wind, scent, and moisture. Smart Shoes allow users to experience fine features of the terrain shape as they walk over it. Large-workspace haptic manipulators currently under development will allow users to interact with everyday objects in the virtual world using single- and dual-hand manipulation. This presentation will discuss application of the system to imitating tasks of daily living as well as to gait therapy for Parkinson's disease and stroke.

Mark Minor is currently an Associate Professor in Mechanical Engineering, University of Utah, Salt Lake City, where he has been a faculty member since 2000. He received the B.S. degree in mechanical engineering from the University of Michigan, Ann Arbor, in 1993, and the M.S. and Ph.D. degrees in mechanical engineering from Michigan State University, East Lansing, in 1996 and 2000, respectively. His research interests focus on design and control of robotic systems including mobile robots, rolling robots, climbing robots, aerial robots, autonomous ground vehicles, soft robots, wearable robots, and virtual reality systems.

Prof. Jason Corso

Toward a Thinking-Cap-like System that Can Guide Humans in Novel Tasks

Jason Corso is Co-Founder / CEO of the computer vision startup Voxel51 and Professor of Robotics, Electrical Engineering and Computer Science at the University of Michigan. He received his PhD and MSE degrees at The Johns Hopkins University in 2005 and 2002, respectively, and the BS degree with honors from Loyola College in Maryland in 2000, all in Computer Science. He is the recipient of the University of Michigan EECS Outstanding Achievement Award (2018), a Google Faculty Research Award (2015), the Army Research Office Young Investigator Award (2010), the NSF CAREER Award (2009), and the SUNY Buffalo Young Investigator Award (2011); he was a member of the 2009 DARPA Computer Science Study Group and a recipient of the Link Foundation Fellowship in Advanced Simulation and Training (2003). Corso has authored more than 150 peer-reviewed papers and hundreds of thousands of lines of open-source code on topics of his interest including computer vision, robotics, data science, and general computing. He is a member of the AAAI, ACM, MAA and a senior member of the IEEE.

Prof. Carolina Cruz-Neira

Dr. Carolina Cruz-Neira, a member of the National Academy of Engineering, is a pioneer in the areas of virtual reality and interactive visualization, having created and deployed a variety of technologies that have become standard tools in industry, government, and academia. She is known worldwide as the creator of the CAVE virtual reality system. She has dedicated part of her career to transferring research results into daily use, by spearheading several open-source initiatives, such as VRJuggler, to disseminate and grow VR technologies, and by leading entrepreneurial initiatives to commercialize research results. She has over 100 publications, including scientific articles, book chapters, magazine editorials, and others. She has been awarded over $75 million in grants, contracts, and donations. She is also recognized for having founded and led very successful virtual reality research centers: the Virtual Reality Applications Center at Iowa State University, the Louisiana Immersive Technologies Enterprise, and the Emerging Analytics Center at the University of Arkansas at Little Rock. She serves on many international technology boards and government technology advisory committees, and outside the lab she enjoys extrapolating her technology research to the arts and the humanities through forward-looking public performances and installations. She has been named one of the top innovators in virtual reality and one of the top three greatest women visionaries in virtual reality. BusinessWeek magazine identified her as a “rising research star” in the next generation of computer science pioneers. She has been inducted as a member of the National Academy of Engineering, a member of the IEEE Virtual Reality Academy, an IEEE Fellow, and an ACM Computer Pioneer, and she has received the IEEE Virtual Reality Technical Achievement Award and the Distinguished Career Award from the International Digital Media & Arts Society, among other national and international recognitions. She has given numerous keynote addresses and has been the guest of several governments to advise on how virtual reality technology can help give industries a competitive edge leading to regional economic growth. She has appeared in numerous national and international TV shows and podcasts as an expert in her discipline, and several documentaries have been produced about her life and career. Currently, Dr. Cruz-Neira is the Agere Chair in Computer Science at the University of Central Florida.

Prof. Masahiro Furukawa

Tsumori-Control: Robot Control Method Based on Control Intention

In this presentation, we will introduce an intuitive robot control method: the robot control method based on control intention, a.k.a. "Tsumori-Control". Among teleoperation methods, telerobotics and telexistence have realized tele-experience through real-time mutual sharing of sensation and motion. Tsumori-Control, on the other hand, builds on the idea that humans execute continuous motion by first memorizing discrete segments of continuous motion, then recalling them and restoring them to continuous motion again. The Tsumori-Control presented in this lecture is a robot control method that can reproduce the motion intention of the operator by focusing on the motion reproduction process through which humans memorize and reproduce motion. In addition to teleoperation, the presentation will also include examples of applications of Tsumori-Control based on a case study of body augmentation.
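
As a rough, hypothetical sketch of the discrete-to-continuous idea outlined above (a toy illustration only, not the actual Tsumori-Control implementation), a demonstrated trajectory could be "memorized" as a small set of keyframes and later "recalled" by interpolating them back into a continuous command stream:

    import numpy as np

    def memorize(trajectory, num_keyframes=8):
        # "Memorize" a continuous trajectory as a few discrete segments (keyframes).
        trajectory = np.asarray(trajectory)  # shape: (T, dof)
        idx = np.linspace(0, len(trajectory) - 1, num_keyframes).round().astype(int)
        return trajectory[idx]               # shape: (num_keyframes, dof)

    def recall(keyframes, num_steps=100):
        # "Recall" the memorized segments and restore a continuous motion by interpolation.
        keyframes = np.asarray(keyframes)
        t_key = np.linspace(0.0, 1.0, len(keyframes))
        t_out = np.linspace(0.0, 1.0, num_steps)
        return np.stack([np.interp(t_out, t_key, keyframes[:, d])
                         for d in range(keyframes.shape[1])], axis=1)

    # Toy example: a 2-DoF reaching motion demonstrated by an operator.
    demo = np.stack([np.linspace(0, 1, 500), np.sin(np.linspace(0, np.pi, 500))], axis=1)
    keys = memorize(demo)    # discrete "intention" representation
    replay = recall(keys)    # continuous motion streamed to the robot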

Masahiro Furukawa is an Associate Professor in the Dept. of Bioinformatic Engineering at the Graduate School of Information Science and Technology, Osaka University, Japan. His research interests are focused on the development of intelligent informatics, cognitive sciences, human interfaces, and interactions based on virtual, augmented, and mixed reality; his keywords are haptics, vection, telexistence, vision-haptic integration, and walking guidance. He studied the novel telexistence surrogate robot system TELESAR V, always keeping human perception in mind.

Dr. Ankur Handa

Developing demonstration collection systems for dexterous hand-arm robots and exploring possibilities of smart assistive tele-op

Designing and developing data collection systems with robots is challenging, and how far we can go in developing precise tele-op systems and making them smart enough to relieve the burden on the user remains an open question. In this talk, we will look into how to develop a tele-op system for highly dexterous hand-arm robots using only vision-based input, and the various challenges involved in the process. Later, we will explore the possibilities of developing future smart tele-op systems to facilitate better communication of the task to the robot.
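
As a purely illustrative sketch of such a pipeline (all function names and dimensions below are hypothetical, not the system described in the talk), a vision-only tele-op loop might look like:

    import numpy as np

    def track_hand(frame):
        # Hypothetical vision module: 3D fingertip keypoints estimated from a camera frame.
        # A real system would use a learned hand-pose estimator here.
        return np.zeros((5, 3))

    def retarget(fingertips, scale=1.2):
        # Toy kinematic retargeting: map human fingertips to robot-hand fingertip targets.
        return scale * fingertips

    def solve_ik(fingertip_targets):
        # Placeholder inverse-kinematics step producing joint commands for the hand-arm robot.
        return np.zeros(23)  # e.g., 7 arm joints + 16 hand joints (illustrative sizes)

    def teleop_step(frame):
        fingertips = track_hand(frame)   # vision-based input only: no gloves or markers
        targets = retarget(fingertips)   # human hand -> robot hand workspace
        return solve_ik(targets)         # joint commands streamed to the robot

A "smart" assistive layer of the kind hinted at in the talk could sit between retarget and solve_ik, for example snapping the commanded grasp to a nearby feasible grasp to relieve the burden on the operator.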

Ankur Handa is a Research Scientist at the NVIDIA Robotics Lab in Seattle with Dieter Fox. Prior to that, he was a Research Scientist at OpenAI. He completed his PhD with Andrew Davison at Imperial College London and a post-doc at the University of Cambridge with Roberto Cipolla. His papers have won the Best Industry Paper Prize at BMVC 2014 and were finalists for the Best Student Paper and Best Manipulation Paper awards at ICRA 2019.

Dr. Helen Oleynikova

How do robots perceive the world? How is this different from how humans perceive it? And how can we use mixed, augmented, and virtual reality to bridge the gaps between human and robot perception? This talk will focus on map and spatial representations for robots and their similarities to such systems in XR settings, discuss how we can co-localize between heterogeneous systems, and show how to represent semantic information in such maps.
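
As a minimal, hypothetical illustration of these ideas (a toy sketch, not the representation used in any particular product), a shared map might store a semantic label per occupied voxel and use a rigid transform to co-localize a robot's map frame with an XR headset's frame:

    import numpy as np

    VOXEL_SIZE = 0.1  # metres

    class SemanticVoxelMap:
        # Toy sparse voxel map: each occupied voxel stores a semantic label.
        def __init__(self):
            self.voxels = {}  # (i, j, k) -> {"occupied": bool, "label": str}

        def insert(self, point, label="unknown"):
            key = tuple(np.floor(np.asarray(point) / VOXEL_SIZE).astype(int))
            self.voxels[key] = {"occupied": True, "label": label}

    def co_localize(points_robot, R, t):
        # Re-express points from the robot's map frame in the XR device's frame.
        # R (3x3) and t (3,) would come from aligning shared landmarks or spatial anchors.
        return np.asarray(points_robot) @ R.T + t

    # Example: a robot-observed point labelled "rack", re-expressed in the headset frame.
    m = SemanticVoxelMap()
    m.insert([1.0, 2.0, 0.5], label="rack")
    p_headset = co_localize([[1.0, 2.0, 0.5]], np.eye(3), np.array([0.2, -0.1, 0.0]))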

Helen is currently a senior software engineer on 3D Perception for Robotics at NVIDIA in Zürich, Switzerland, focusing on GPU-accelerated perception for collision avoidance and teleoperation. Her previous position was on the Mixed Reality and Robotics team at Microsoft, also in Zürich, where she worked on how mixed reality can be applied to robotic interaction and teleoperation. Her PhD research at the Autonomous Systems Lab at ETH Zürich focused on mapping and path planning for 3D collision avoidance on board Micro-Aerial Vehicles, specifically for search and rescue and inspection applications.