Speakers

Prof. MARU CABRERA

Leveraging XR and Human-centered Approaches in Assistive Robotics

In this talk, I aim to cover the general topic of interaction methods using human expression and context, and their potential applications in assistive robotics; the two domains I will elaborate on are surgical applications and service robots in human environments. I will present some of my work on surgical telementoring using augmented reality, as well as assistive robotic platforms and applications with different levels of autonomy, considering both the users and the tasks at hand. I will showcase assistive technologies that leverage human context to adjust the way a robot executes a task. I will also address how this line of research contributes to the HRI field in general and to the broader goals of the AI community.

Maru Cabrera is an Assistant Professor in the Rich Miner School of Computer and Information Sciences at UMass Lowell. Before that, she was a postdoctoral researcher at the University of Washington, working with Maya Cakmak in the Human-Centered Robotics Lab. She received her PhD from Purdue University, advised by Juan P. Wachs. Her research aims to develop robotic systems that work alongside humans, collaborating on tasks performed in home environments. These systems explore different levels of robot autonomy and multiple modes of human interaction in less structured environments, with an emphasis on inclusive design to assist people with disabilities or older adults aging in place. This approach draws on an interdisciplinary intersection of robotics, artificial intelligence, machine learning, computer vision, extended reality, assistive technologies, and human-centered design.

Mr. CARLOS VALLESPI-GONZALEZ

Nearing Seamless Telepresence through Photorealistic Avatars

Our exploration begins with a historical journey through communication technology, spanning from the inception of postal services to modern videoconferencing. Despite the transformative impact of these technologies on communication, they still fall short of replicating the depth of face-to-face interactions. In our research lab, we are actively engaged in the development of the next frontier in communication technology: a live telepresence system. This presentation will highlight the essential components enabling telepresence, underpinned by recent advancements in computer graphics, machine learning, and neural rendering. Our ultimate goal is to enable an immersive communication experience that is virtually indistinguishable from reality.

Carlos Vallespi-Gonzalez is a Research Engineering Director at Meta Reality Labs, USA. He currently supports research engineering and machine learning infrastructure for telepresence applications in VR/AR. Previously, he spent over 7 years in self-driving at Uber ATG and Aurora, leading the development of perception and prediction algorithms deployed in self-driving cars. Before that, Carlos gathered 10 years of experience at the National Robotics Engineering Center working on the automation of agricultural machinery, including the development and deployment of fully autonomous tractors in orange orchards in Florida. He graduated with honors from La Salle School of Engineering with a degree in Software Engineering and received an M.Sc. in Robotics from Carnegie Mellon. He has authored and co-authored over 30 patents as well as 20+ publications in major Computer Vision conferences. His research interests are in the fields of Machine Learning and Computer Vision.

Prof. HADAS EREL

XR as a Tool for Designing the Appearance and Behavior of Social Robots

The design of robots involves a substantial commitment of time and resources, especially when considering non-humanoid robots, which allow for greater design flexibility. When appropriately designed, these robots can take various forms and morphologies while maintaining consistent social cues that facilitate communication with them. In this talk, we will discuss the potential utilization of Extended Reality (XR) as a tool for exploring different robotic morphologies and the associated social interpretations of their behavior.

Hadas Erel is an Assistant Professor at the Media Innovation Lab, Reichman University, Israel. She heads the Social HRI research and leads the behavioral research in the lab. Her research focuses on social interaction with robotic objects (non-humanoid robots) and its impact on human-human interaction. Hadas is interested in understanding how robots’ design and interaction characteristics enhance (or compromise) humans’ wellbeing, and how human-robot interaction influences humans’ emotions and needs. Most recently, she has become interested in human interaction with more than one robot. Hadas holds a PhD (Summa Cum Laude) in Cognitive Psychology from Ben-Gurion University of the Negev.

Prof. JEE-HWAN RYU

XR-based Interactive Frameworks for Designing Shared Telemanipulation

In this presentation, I will introduce two user-interactive XR-based frameworks designed for shared telemanipulation. The first example is a framework for interactive Virtual Fixture (VF) design. Unlike traditional VF-based telemanipulation, we do not assume that the VF is predefined or given. Instead, we propose an interactive approach to designing a VF for shared telemanipulation, and I will explain how we have developed an XR interface for this purpose. The second example is a user-centric path planning framework. We have introduced a method that combines the operator's intuition with the planning capabilities of algorithms. We have also created a sketch-based user interface to capture the operator's intentions and intuition, using them to adjust the planner's tuning parameters for more efficient search. I will demonstrate the advantages of these methods in comparison to conventional teleoperation techniques.

Jee-Hwan Ryu (Senior Member, IEEE) is currently a Full Professor with the Department of Civil and Environmental Engineering, Korea Advanced Institute of Science and Technology (KAIST), South Korea. He received the B.S. degree from Inha University, Incheon, South Korea, in 1995, and the M.S. and Ph.D. degrees from the Korea Advanced Institute of Science and Technology, Daejeon, South Korea, in 1997 and 2002, respectively, all in mechanical engineering. His research interests include haptics, telerobotics, exoskeletons, soft robots, and autonomous vehicles.

Prof. CYNTHIA MATUSZEK

Using Virtual Reality to Learn Language-Based HRI

As robots move from labs and factories into human-centric spaces, it becomes progressively harder to predetermine the environments and interactions they will need to handle. Although letting robots learn from end users via natural language is an intuitive, versatile approach to handling dynamic environments and novel situations, human-robot experiments and the acquisition of linguistic training data are expensive propositions requiring significant effort. Virtual reality (VR) offers a growing opportunity for human-robot interaction in a sim-to-real context, permitting new ways for people to interact with robots in a safe, low-cost environment, and supporting the development and testing of machine learning-based models of interaction that can be carried over into physical reality. However, it is important to be aware of shortcomings that VR-based simulations may have as a reliable surrogate for humans, robots, and the world. In this presentation, I will give an overview of our work on learning the grounded semantics of natural language describing an agent's environment, and will describe work on applying those models in a sim-to-real language learning context.

Cynthia Matuszek is an associate professor of computer science and electrical engineering at the University of Maryland, Baltimore County, and the director and founder of UMBC’s Interactive Robotics and Language lab. She holds a Ph.D. from the University of Washington. Her research focuses on how robots can learn grounded language from interactions with non-specialists, which includes work not only in robotics but also in human-robot interaction, natural language, and machine learning, informed by a background in common-sense reasoning and classical artificial intelligence. Dr. Matuszek has published in machine learning, artificial intelligence, robotics, and human-robot interaction venues.

Dr. JUAN NIETO

Empowering People and Robots in a Mixed Reality World

Juan Nieto is a Principal Research Scientist in the Mixed Reality and AI lab at Microsoft Zurich, where he manages a team of engineers and researchers working on various aspects of Spatial Computing. He received his bachelor's degree in Electrical Engineering from Universidad Nacional del Sur and then obtained a Ph.D. in robotics from the University of Sydney (2006). After completing his Ph.D., he worked at the University of Sydney as a Research Fellow on diverse robotics projects in domains such as mining, autonomous cars, and agriculture.

In 2015, he moved to ETH Zurich as Deputy Director of the Autonomous Systems Lab, until 2020 when he joined Microsoft. His research interests encompass sensing, perception, and decision-making, currently with a focus on cloud mapping and localization. He is the recipient of the Best Paper Award at IEEE SSRR 2017, Best Cognitive Paper Finalist at IEEE IROS 2019, the Best Paper Award of the IEEE RAS Magazine in 2020, and Best Paper Award Finalist at IEEE TRO 2021. He also received an Amazon Research Award and a Google Research Award during his time at ETH. He has participated in robotics challenges as well: he led the ETH team at MBZIRC 2017, obtaining a silver medal, and was part of the mapping team of the winning team (Team Cerberus) at the DARPA Subterranean Challenge 2021. He regularly participates in conference and workshop panels as an organizer and reviewer.

Prof. GIUSEPPE LOIANNO

Exploring Human-Drone Collaboration in Mixed Reality: Enhancing Spatial Awareness and Interactive Navigation

The growing presence and integration of aerial robots in various activities, such as inspection, search and rescue missions, and monitoring, have generated a demand for cutting-edge interfaces and tele-immersive solutions. These solutions aim to facilitate human-robot interactions, particularly in situations where they can relieve humans of physical and cognitive burdens. They are particularly valuable in scenarios involving complex and hazardous tasks, including operations conducted beyond the line of sight.

In this presentation, I will introduce an innovative tele-immersive framework designed to foster cognitive and physical collaboration between humans and robots, harnessing the power of Mixed Reality (MR). This framework incorporates a distinctive bidirectional spatial awareness mechanism and employs a multi-modal virtual-physical interaction approach. Our proposed framework transcends the traditional command and control or teleoperation paradigm, enabling safe, intuitive, immersive, collaborative, and interactive human-drone navigation.

Giuseppe Loianno is an assistant professor at New York University, New York, USA, and director of the Agile Robotics and Perception Lab (https://wp.nyu.edu/arpl/), working on autonomous robots. Prior to NYU, he was a lecturer, research scientist, and team leader at the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania. Dr. Loianno has published more than 70 conference papers, journal papers, and book chapters. His research interests include perception, learning, and control for autonomous robots. He is the recipient of the NSF CAREER Award (2022), the DARPA Young Faculty Award (2022), the IROS Toshio Fukuda Young Professional Award (2022), the Conference Editorial Board Best Associate Editor Award at ICRA 2022, and the Best Reviewer Award at ICRA 2016, and he was selected as a Rising Star in AI by KAUST in 2023. He is currently co-chair of the IEEE RAS Technical Committee on Aerial Robotics and Unmanned Aerial Vehicles and a juror in several international robotics competitions. He was the program chair of the 2019 and 2020 IEEE International Symposium on Safety, Security and Rescue Robotics and the general chair in 2021. His work has been featured in a large number of renowned international news outlets and magazines.

Prof. YASUYUKI INOUE

XR and Robotics for Virtual Embodiment

Both XR and robotic technologies allow us to extend our bodily functions in the context of human augmentation, for example by having and controlling supernumerary body parts (e.g., third and fourth arms). These experiences lead to changes in our bodily consciousness (i.e., virtual embodiment), regardless of whether the artificial body is real or virtual. The compatibility of XR and robotic augmentation has great potential for investigating the cognitive mechanisms of embodiment and their underlying neural basis. In this talk, I will introduce recent cognitive and psychological studies on virtual embodiment using XR and robotics.

Yasuyuki Inoue is a Project Assistant Professor of Computer Science and Engineering at Toyohashi University of Technology, Japan. He graduated from the Graduate School of Engineering, Toyohashi University of Technology, in 2010 (Doctor of Engineering) under the supervision of Prof. Michiteru Kitazaki. He has worked as a Project Assistant Professor at The University of Electro-Communications and as a Postdoctoral Fellow at Mie University and The University of Tokyo.