Invited speakers

X. Jessie Yang (University of Michigan)

Title: A Workload Adaptive Haptic Shared Control Scheme for Semi-Autonomous Driving

Abstract: Haptic shared control is used to manage the allocation of control authority between a human and an autonomous agent in semi-autonomous driving. Existing haptic shared control schemes, however, do not fully account for the human agent. To fill this research gap, this study presents a haptic shared control scheme that adapts in real time to a human operator's workload, eyes-on-road status, and input torque. We conducted human-in-the-loop experiments with 24 participants. In the experiment, a human operator and an autonomy module for navigation shared control of a simulated notional High Mobility Multipurpose Wheeled Vehicle (HMMWV) at a fixed speed. At the same time, the human operator performed a target detection task for surveillance. The autonomy could be either adaptive or non-adaptive to the above-mentioned human factors. Results indicate that the adaptive haptic shared control scheme resulted in significantly lower workload, higher trust in autonomy, better driving task performance, and lower control effort.
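
As a point of reference, the arbitration idea behind such a scheme can be sketched as a torque-blending law. This is a hypothetical illustration, not the study's actual controller; the signals, gains, and thresholds below are all assumptions.

import numpy as np

def blended_steering_torque(human_torque, autonomy_torque,
                            workload, eyes_on_road,
                            k_workload=0.6, k_torque=0.3, k_gaze=0.2):
    """Hypothetical workload-adaptive arbitration (illustrative gains).

    alpha is the autonomy's share of control authority: it grows when
    the operator's estimated workload is high, and shrinks when the
    operator is looking at the road and actively steering.
    """
    alpha = (k_workload * workload                  # workload normalized to [0, 1]
             - k_torque * abs(human_torque)         # active steering reclaims authority
             - (k_gaze if eyes_on_road else -k_gaze))
    alpha = float(np.clip(alpha, 0.0, 1.0))
    # Haptic blending: the steering wheel renders the combined torque.
    return (1.0 - alpha) * human_torque + alpha * autonomy_torque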

Luka Peternel (Delft University of Technology)

Title: Mutual adaptation in human-robot co-manipulation for ergonomic collaboration

Abstract: The talk will present several robot control and learning methods for co-manipulation with humans, with a focus on mutual adaptation. Particular attention is given to adaptation to ergonomics and learning from the human co-worker. The adaptation process incorporates machine learning, biomechanical models, knowledge of human motor control, and real-time measurements to track and improve various metrics, such as task performance, human muscle fatigue, joint torques, and arm manipulability. The first part focuses on co-manipulation during physical human-robot collaboration in various practical tasks (e.g., collaborative sawing, polishing, valve turning, and assembly). The second part will focus on the application in exoskeletons, where co-manipulation occurs while human and robot limbs are physically coupled. Finally, the last part will examine the application in teleoperation, where co-manipulation pertains to a remote robot being commanded by a human operator. In particular, we will look at an analysis of impedance-command interfaces in force-feedback tele-impedance.
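
One ingredient mentioned above, tracking human muscle fatigue and shifting effort to the robot, can be sketched roughly as follows. This is an illustrative toy model, not the speaker's actual method; the rates, thresholds, and authority mapping are made up.

def update_fatigue(fatigue, activity, dt, grow=0.02, recover=0.005):
    """Toy first-order fatigue accumulator (illustrative rates):
    fatigue builds with normalized muscle activity (e.g., from EMG)
    and decays when the muscle rests."""
    if activity > 0.1:
        fatigue += grow * activity * dt
    else:
        fatigue -= recover * dt
    return min(max(fatigue, 0.0), 1.0)

def robot_share(fatigue):
    """As estimated fatigue rises, the robot stiffens its impedance
    and takes over a larger share of the physical effort."""
    return 0.2 + 0.8 * fatigue   # robot's authority share in [0.2, 1.0]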

Slides

Laurel Riek (University of California, San Diego)

Title: Proximate Human Robot Teaming: Fluent and Trustworthy Interaction

Abstract: When engaging in human-robot teaming (HRT) in dynamic, uncertain environments, it is crucial that there is mutual understanding and well-calibrated trust between humans and machines. This talk will discuss recent work from my lab exploring how robots can sense, understand, and make decisions under uncertainty to support HRT in critical environments, in ways that afford trust and transparency. I will also discuss our recent efforts applying this basic research to building and deploying new shared autonomy systems in emergency medicine, to support improved teaming and safety during the pandemic.

Elizabeth K Phillips (George Mason University)

Title: Leveraging Virtual Reality Interfaces for Shared Autonomy in Human-Robot Interaction

Abstract: Whether exploring a defunct nuclear reactor, defusing a bomb, delivering medicine to quarantined patients, or repairing the International Space Station from the outside, robots can be in places where humans cannot go, can augment the capabilities of humans, and can improve quality of life and work. Since even the most advanced robots have difficulty completing tasks that require grasping and manipulation, human teleoperation is often a practical alternative for these types of tasks. By importing the dexterity, expertise, and wealth of background knowledge of a human operator, robots can leverage the skills of their human teammates without requiring humans to be physically present. However, existing robot teleoperation interfaces can be improved by shared autonomy paradigms, which allow humans and robots to dynamically complete (sub)tasks with more or less intervention from automation. Virtual reality interfaces are suitable alternatives for resolving the problems of traditional robot teleoperation, which introduces little or only static autonomy. In this talk, I will speak about the spectrum between automation and autonomy and how virtual reality can be an effective and usable interface for leveraging input from humans and creating shared autonomy paradigms for human-robot interaction.

Video

Cristina Olaverri Monreal (Johannes Kepler University Linz)

Title: Human-Vehicular Robot Interaction: The Role of Trust

Abstract: The robot capabilities that vehicles at the highest levels of automation share, such as sensing the environment, analyzing information to make decisions, and performing actions on the road, require mastering many challenges, including the detection of other road users and the monitoring of driver/passenger behavior. This is particularly important if the system requests that the vehicle be manually operated at a certain moment. In this case, a safe transition process can be guaranteed through cooperative systems that guide the driver through the maneuver by, for example, applying a countersteering force. In this context, trust in the system plays a crucial role, not only on the side of the driver but also on the side of other road users interacting with automated vehicles. This presentation gives an overview of the impact of automated technologies on traffic safety, addressing ways to increase trust in the robotic system.

Ayse Kucukyilmaz (University of Nottingham)

Title: Role Allocation and Variable Autonomy in Shared Control: Challenges in Human-Robot Collaborative Teamwork

Abstract: Shared control naturally occurs in human-human collaboration, where the continuous and prolonged nature of interaction characterizes an allocation of roles within the team. Unfortunately, such roles are seldom implemented in human-robot collaboration. In this talk, I will present our research on haptic shared control, and discuss dynamic role allocation and variable autonomy techniques to enable human-robot teamwork in close physical contact. I will elaborate on whether and how the use of haptics can enhance the communication and interaction capabilities of collaborative robots. My talk will end with a short discussion of future research directions, such as aspects of trust in human-robot interaction.

Video

Brenna Argall (Northwestern University)

Title: Judicious and Interface-Aware Shared Autonomy

Abstract: As need increases, access decreases. It is a paradox that as human motor impairments become more severe, and increasing assistance needs are paired with decreasing motor abilities, the very machines created to provide this assistance become less and less accessible to operate with independence. My lab addresses this paradox by incorporating robotics autonomy and intelligence into physically assistive machines: leveraging robotics autonomy to advance human autonomy. Achieving the correct allocation of control between the human and the autonomy is essential, and critical for adoption. The allocation must be responsive to individual abilities and preferences, which moreover can change over time, and robust to human-machine information flow that is filtered and masked by motor impairment and the control interface. As we see time and again in our work and within the field, customization and adaptation are key, and so the opportunities for machine learning are clear. This talk will overview a sampling of ongoing projects and studies in my lab, with a focus on shared autonomy that is judicious, adaptive, and interface-aware.

Dorsa Sadigh (Stanford University)

Title: Learning from Interactions for Assistive Robotics

Abstract: In this talk I will discuss a framework for intuitive assistive teleoperation of high degree-of-freedom robots. I will first introduce the idea of latent actions as an approach for providing a low-dimensional and intuitive control interface for assistive teleoperation. This framework can easily be integrated within the shared autonomy framework to enable precise manipulation of objects. Building upon this framework, I will discuss how we can learn a natural language interface to provide language instructions during shared autonomy. I will introduce LILA (Language-Informed Latent Actions), which enables learning from language instructions while providing a low-dimensional teleoperation interface for the user. Building upon LILA, I will describe how this approach can be extended to settings where we not only learn from language instructions but can also leverage online language corrections. I will close the talk by discussing some of the challenges and open problems in the space of shared autonomy, including challenges in assistive feeding.
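
The core idea of latent actions can be sketched as a learned decoder that maps a low-dimensional input, conditioned on the robot state, to a full robot action. This is a minimal illustration assuming a PyTorch setup; the architecture and dimensions are placeholders, and in practice such a decoder is trained from demonstrations (e.g., as the decoder half of a conditional autoencoder).

import torch
import torch.nn as nn

class LatentActionDecoder(nn.Module):
    """Minimal latent-action decoder (dimensions are placeholders):
    a 2-D joystick input z, conditioned on the robot state s, is
    decoded into a 7-DoF joint-velocity command."""

    def __init__(self, state_dim=7, latent_dim=2, action_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, 64),
            nn.Tanh(),
            nn.Linear(64, action_dim),
        )

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

decoder = LatentActionDecoder()
state = torch.zeros(7)                  # current joint configuration
z = torch.tensor([0.5, -0.2])           # 2-D joystick deflection
qdot = decoder(state, z)                # full-arm velocity command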

Slides

Andreas Kolling (Amazon Robotics)

Title: Human-Robot Interaction with Large-Scale Autonomous Systems in Industry

Abstract: Amazon is a pioneer in robotics and has built the world's largest fleet of robots, with more than a quarter million robots in continuous operation. Autonomous systems have become a reality in our facilities around the globe. Scientists and engineers continue to work on scaling our systems and adding new types of robots. With that new reality comes a host of questions around the purpose of autonomy and how to safely and efficiently design robots that interact with millions of people. We will showcase some of the systems we have built and discuss the context in which our robots operate. From there we will explore how these connect to existing research questions and how they might inspire new ones.

David Hsu (National University of Singapore)

Title: Human-Robot Interaction as a POMDP

Abstract: Rich interactions occur between robots and humans: assistance, adaptation, demonstration, collaboration, teaming, ... Are they all different problems, as they appear? Or can we solve them all as one single problem? In this talk, I will discuss our recent attempt to formalize these rich, disparate interactions in a unified decision-theoretic framework based on the partially observable Markov decision process (POMDP). It sheds light on a latent target policy of joint robot and human actions as the core common issue. What do the robot and the human know about this joint policy? How does it change over time? Successful interaction requires the robot and the human to resolve the uncertainty, i.e., the lack of information about this latent joint policy. The discussion, I hope, will spur greater interest in principled approaches that connect decision making under uncertainty and fluid human-robot collaboration.
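
For readers unfamiliar with the formalism: a POMDP is the tuple (S, A, O, T, Z, R, gamma), with transition model T(s' | s, a), observation model Z(o | s', a), and reward R(s, a). The agent maintains a belief b over states, updated after taking action a and observing o as

b'(s') = \eta \, Z(o \mid s', a) \sum_{s \in S} T(s' \mid s, a) \, b(s)

where \eta is a normalizing constant. In the unified view sketched above, the hidden part of the state would include the latent joint human-robot policy; this reading is an informal gloss using standard POMDP notation, and the talk's exact formulation may differ.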

Slides

Jim Mainprice (Stuttgart University)

Title: Shared Control for Remote Operations

Abstract: Robots are powerful and are not subject to fatigue, but they lack the general intelligence of humans. Developing a generic framework that can combine both strengths is a long-standing challenge in robotics. In this talk I will review lessons learned from the DARPA Robotics Challenge (DRC), where the requirements of high-degree-of-freedom robots and low-bandwidth communication led to the development of traded control architectures: interleaved operator task specification at a mid-level of abstraction and AI-driven execution. I will draw insights from my previous experience in this competition and present two of our recent works in the area of shared control, where the robot infers the human's intent to support the user with AI-driven execution. In both cases, the focus is on increasing control authority, by either reformulating the problem or learning online how to tune the arbitration function.
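
A common way to realize such intent-driven arbitration is to maintain a belief over candidate goals and let the autonomy's authority grow with its confidence. The sketch below is illustrative only; the inference model, confidence threshold, and blending rule are assumptions, not the methods presented in the talk.

import numpy as np

def goal_belief_update(belief, likelihoods):
    """Bayesian intent inference: reweight each candidate goal by how
    well the operator's recent input matches that goal's predicted motion."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

def arbitrate(u_human, u_autonomy_per_goal, belief, threshold=0.6):
    """Blend the operator's command with the autonomy's command for the
    most likely goal; the autonomy's share grows with its confidence and
    is withheld below the confidence threshold."""
    g = int(np.argmax(belief))
    alpha = belief[g] if belief[g] > threshold else 0.0   # arbitration weight
    return (1.0 - alpha) * np.asarray(u_human) + alpha * np.asarray(u_autonomy_per_goal[g])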

Video