Workshop Papers

Session 1: Interpretability & Shared Autonomy

Assistive Handheld Robots: Remote Collaboration and Anticipation of User Intention [PDF]

Janis Stolzenwald and Walterio W. Mayol-Cuevas
University of Bristol

Abstract:

Handheld robots share the shape and properties of handheld tools while being able to process task information and aid manipulation. In our recent work we explore intention inference for user actions. The model derives intention from the combined information about the user’s gaze pattern and task knowledge. Based on a generic pick-and-place task, the intention model yields real-time prediction capabilities and reliable accuracy up to 1.5 s prior to actions. Furthermore, we evaluate a system that allows a remote user to assist a local user through diagnosis, guidance and, as a novel aspect, physical interaction, with the handheld robot providing assistive motion. We show that the handheld robot can mediate the helper’s instructions and remote object interactions, while the robot’s semi-autonomous features improve task performance by 37% and reduce the demand for verbal communication.
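
A rough, hypothetical sketch of how per-object gaze and task-knowledge features could feed a real-time intention classifier. This is not the authors' model: the feature set, synthetic data, and the choice of logistic regression are assumptions made purely for illustration.

```python
# Hypothetical sketch only -- not the paper's intention model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented per-object features, sampled e.g. 1.5 s before an action:
# [gaze dwell time on object (s), gaze-to-object angular distance (rad),
#  task-model prior that the object is needed next (0..1)]
n = 500
X = rng.random((n, 3))
# Synthetic label: users tend to act on objects they dwell on and that the
# task model expects next (plus noise).
score = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 1.0 * X[:, 2]
y = (score + 0.2 * rng.standard_normal(n) > 0.8).astype(int)

clf = LogisticRegression().fit(X, y)
print("training accuracy on synthetic data:", clf.score(X, y))
```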

A Bipolar Myoelectric Sensor-Enabled Human-Machine Interface Based On Spinal Module Activations [PDF]

Xunfeng Yin, Chunzhi Yi, Feng Jiang, and Chifu Yang
Harbin Institute of Technology

Abstract:

Surface electromyography (sEMG)-based human-machine interfaces have great potential to exploit neuromuscular information about movement and muscle strength in various scenarios of physical human-robot interaction. However, current human-machine interfaces based on bipolar myoelectric sensors are hindered by the limitations of global sEMG features, which are prone to variability and delay. In this paper, we define and experimentally test a human-machine interface that takes advantage of spinal module activations, i.e., the co-discharging activity of spinal interneurons. The spinal module activations are identified from the spike trains of muscle synergies extracted from sEMG signals. We extract information encoded in both the firing rates and the spike timings of the spinal module activations in a population-coding manner, following the information-encoding principle of neurons. As an initial application, we use the extracted features for gait phase classification and demonstrate their superiority.
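
A minimal sketch of the kind of pipeline the abstract describes, assuming muscle synergies are extracted with non-negative matrix factorization and their activations are thresholded into spike-like events from which firing rates are computed. The channel count, number of synergies, threshold, and window size are invented, and this is not the authors' implementation.

```python
# Hypothetical sketch only -- not the paper's pipeline or parameters.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Stand-in for rectified/enveloped sEMG: 8 channels x 2000 samples.
emg_envelopes = rng.random((8, 2000))

n_synergies = 4
nmf = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
W = nmf.fit_transform(emg_envelopes)   # channel weights (8 x 4)
H = nmf.components_                    # synergy activations (4 x 2000)

# Crude spike-like representation: crossings of a per-synergy threshold,
# then firing rates per fixed-length window.
threshold = H.mean(axis=1, keepdims=True) + H.std(axis=1, keepdims=True)
spikes = (H > threshold).astype(int)
window = 200                           # samples per analysis window
firing_rates = spikes.reshape(n_synergies, -1, window).sum(axis=2)
print("firing-rate feature matrix:", firing_rates.shape)  # (4, 10)
```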

Collaborative Human-Robot Exploration in Marine Environments [PDF]

Juan Camilo Gamboa Higuera*, Travis Manderson*, Karim Koreitem*, Florian Shkurti** and Gregory Dudek*
McGill University*, University of Toronto**

Abstract:

We consider the task of collaborative human-robot underwater exploration for the purpose of collecting scientifically relevant video data for environmental monitoring. We present a learned visual navigation and search method that balances the efficiency of autonomous visual navigation with the data collection preferences of marine biologists, encoded in a visual saliency model. These models are trained interactively, with weak supervision from scientists. Our visual navigation model combines behaviours learned via imitation learning, which encode the goals of a domain expert (i.e. collecting images of live coral) while avoiding obstacles, with a goal-conditioned navigation policy trained via hindsight relabelling of prior trajectory data. Our visual saliency model enables informed visual navigation without a previously known map. It uses a conditional visual similarity operator to guide the robot to capture images similar to an exemplar selected by a domain expert, as a high-level specification of the robot’s mission. Our field deployments have demonstrated over a kilometer of autonomous, relevant data collection by our underwater robot, while interacting with human scientists both offline in the lab and in situ. We also discuss underwater human-robot interaction issues concerning how the robot can best bring an interesting location to the human diver’s attention while both explore an area of interest independently.
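
A toy sketch of hindsight relabelling of stored trajectory data, which the abstract names as the way the goal-conditioned navigation policy is trained. The transition format, goal-equality test, and sparse reward below are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch only -- illustrates hindsight relabelling, not the
# authors' training code.
import random

def hindsight_relabel(trajectory, k=4):
    """trajectory: list of (state, action, next_state, goal) tuples.
    Returns extra tuples whose goal is a state actually reached later in the
    same trajectory, with a sparse reward when the transition attains it."""
    relabelled = []
    for t, (state, action, next_state, _goal) in enumerate(trajectory):
        future_states = [step[2] for step in trajectory[t:]]
        for _ in range(min(k, len(future_states))):
            new_goal = random.choice(future_states)
            reward = 1.0 if next_state == new_goal else 0.0
            relabelled.append((state, action, next_state, new_goal, reward))
    return relabelled

# Toy trajectory that never reached its original goal (5, 5).
traj = [
    ((0, 0), "forward", (1, 0), (5, 5)),
    ((1, 0), "forward", (2, 0), (5, 5)),
    ((2, 0), "left",    (2, 1), (5, 5)),
]
print(hindsight_relabel(traj)[:2])
```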

Session 2: Decoding Intent

Feature Expansive Reward Learning: Rethinking Human Input [PDF]

Andreea Bobu, Marius Wiggert, Claire Tomlin and Anca D. Dragan
UC Berkeley

Abstract:

In collaborative human-robot scenarios, when a person is not satisfied with how a robot performs a task, they can intervene to correct it. Reward learning methods enable the robot to adapt its reward function online based on such human input, but they rely on simple functions of handcrafted features. When the human correction cannot be explained by these features, recent progress in deep Inverse Reinforcement Learning (IRL) suggests that the robot could fall back on demonstrations: ask the human for demonstrations of the task, and recover a reward defined over not just the known features, but also the raw state space. Our insight is that rather than implicitly learning about the missing feature(s) from task demonstrations, the robot should instead ask for data that explicitly teaches it about what it is missing. We introduce a new type of human input, in which the person guides the robot from areas of the state space where the feature she is teaching is highly expressed to states where it is not. We propose an algorithm for learning the feature from the raw state space and integrating it into the reward function. By focusing the human input on the missing feature, our method decreases sample complexity and improves generalization of the learned reward over the above deep IRL baseline. We show this in experiments with a 7DoF robot manipulator.
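
A hypothetical sketch of the kind of input the abstract describes: a human-guided trace of raw states ordered from "feature highly expressed" to "feature barely expressed", used here to fit a simple linear feature with a pairwise ranking loss and then appended to a linear reward. The trace, feature form, and training loop are stand-ins, not the authors' algorithm.

```python
# Hypothetical sketch only -- a toy stand-in, not the paper's method.
import numpy as np

rng = np.random.default_rng(2)

# Invented feature trace: 20 raw 6-D states ordered by the human from high to
# low expression of the missing feature (e.g. close to an obstacle -> far).
trace = rng.random((20, 6))

w = np.zeros(6)                          # linear feature f(s) = w @ s
lr = 0.1
for _ in range(500):
    i, j = sorted(rng.choice(len(trace), size=2, replace=False))
    s_hi, s_lo = trace[i], trace[j]      # earlier in trace => higher feature
    p = 1.0 / (1.0 + np.exp(-(w @ s_hi - w @ s_lo)))
    w += lr * (1.0 - p) * (s_hi - s_lo)  # ascend the ranking log-likelihood

def reward(state, known_features, theta_known, theta_new):
    """Known handcrafted features plus the newly learned feature."""
    return theta_known @ known_features + theta_new * (w @ state)

print("feature value at trace start vs end:", w @ trace[0], w @ trace[-1])
```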

Leveraging knowledge asymmetries to evaluate synthesized gesture based communication in human-robot interaction [PDF]

Nick DePalma* and Jessica Hodgins**
Facebook AI Research*, Carnegie Mellon University**

Abstract:

There has been considerable effort to help robots learn new tasks from humans. While this effort has led to very promising results, it typically assumes the human partner to be an expert in the task. This social dynamic is a knowledge asymmetry: an action policy or set of goals is communicated to the robot through environmental demonstration or linguistic structure that imparts the plan of action. However, more recent work in human-robot interaction has also investigated the opposite dynamic: the robot understands a policy or teaching curriculum well enough to impart this information back to a child or adult who is unfamiliar with the given task. Current psychological models of communication typically focus on understanding how this dynamic is relaxed over time to create a more peer-to-peer interaction, in other words one in which both partners are equally capable. Our focus in this paper is to better understand how humans and robots communicate through nonverbal cues. Gesture is a particularly important part of nonverbal communication: it is highly structured and uses symbolic motions that can communicate ideas through imagery or spatial referencing. In this paper, we present the idea of using gesture to communicate plans and discuss our early work in translating navigational path plans into gesture sequences. We argue that these path plans could one day be interpretable enough for humans to understand and use toward achieving their goals.
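
A toy sketch of one way a waypoint path plan could be translated into a coarse symbolic gesture sequence. The gesture vocabulary, turn threshold, and geometry here are invented for illustration and are not the authors' translation method.

```python
# Hypothetical sketch only -- not the authors' path-to-gesture translation.
import math

def path_to_gestures(waypoints, turn_threshold=math.radians(30)):
    """waypoints: list of (x, y) positions. Returns symbolic gesture tokens."""
    gestures = []
    for a, b, c in zip(waypoints, waypoints[1:], waypoints[2:]):
        heading_in = math.atan2(b[1] - a[1], b[0] - a[0])
        heading_out = math.atan2(c[1] - b[1], c[0] - b[0])
        # Signed turn angle, wrapped to [-pi, pi).
        turn = (heading_out - heading_in + math.pi) % (2 * math.pi) - math.pi
        if turn > turn_threshold:
            gestures.append("point_left")
        elif turn < -turn_threshold:
            gestures.append("point_right")
        else:
            gestures.append("sweep_forward")
    return gestures

print(path_to_gestures([(0, 0), (1, 0), (2, 0), (2, 1), (3, 1)]))
# -> ['sweep_forward', 'point_left', 'point_right']
```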

Understanding Intentions in Human Teaching to Design Interactive Task Learning Robots [PDF]

Preeti Ramaraj, Matt Klenk and Shiwali Mohan
Palo Alto Research Center

Abstract:

The goal of Interactive Task Learning (ITL) is to build robots that can be trained in new tasks by human instructors. In this paper, we approach the ITL research problem from the human instructor’s perspective. The research question we address is how to understand and leverage the intentionality of instructors to enable natural and flexible ITL. We propose a taxonomy, based on Collaborative Discourse Theory, that organizes human teaching intentions in a human-robot teaching interaction. This taxonomy will provide guidance for ITL robot design that leverages a human’s natural teaching skills and reduces the cognitive burden of non-expert instructors. We propose human participant studies to validate this taxonomy and gain a comprehensive understanding of teaching interactions in ITL.