Detailed Schedule

08:50-09:00AM: Welcome

Session 1: Sharing Control

09:00-09:30AM: Etienne Burdet

Imperial College London

Title: Haptic communication between humans and with robots

Robotic systems are increasingly used to work in mechanical interaction with humans, but these contact robots have so far made little use of the opportunities of interactive control. We have recently found that mechanically connected humans benefit from the interaction force by inconspicuously identifying each other's control and improving their own performance. This talk will present these results on human-human sensorimotor interaction and their computational modelling. It will then derive a robotic translation of these control principles, enabling sensory augmentation and optimally shared effort between interacting humans and/or robots through differential game theory.

09:30-10:00AM: Jim Mainprice

University of Stuttgart

Title: Teleoperation through traded and shared control

Robots are powerful and not subject to fatigue, but they lack the general intelligence of humans. Developing a generic framework that combines both strengths is a long-standing challenge in robotics. In this talk I will review lessons learned from the DARPA Robotics Challenge (DRC), where the requirements of high-degree-of-freedom robots and low-bandwidth communication led to the development of traded-control architectures: operator task specification at a mid-level of abstraction, interleaved with AI-driven execution. I will draw insights from my experience in this competition and present my own recent work in shared control, where the robot infers the human's intent in order to support the user with AI-driven execution. Finally, I will introduce the challenges of operating in human-populated environments and propose a way to integrate predictive models of human behavior to support safer and more effective robot behaviors.
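
The abstract does not specify an algorithm; as a rough illustration of shared control driven by intent inference, the sketch below blends a user's velocity command with an autonomous command toward the most likely goal, using a simple Bayesian-style belief update over candidate goals. The goal set, observation model, and confidence-based blending rule are assumptions for illustration only, not the speaker's actual system.

```python
import numpy as np

# Toy shared control: infer which goal the user is steering toward, then blend
# the user's command with an autonomous command toward that goal.

goals = np.array([[1.0, 0.0], [0.0, 1.0]])   # candidate target positions (assumed)

def infer_goal(position, user_velocity, beliefs):
    """Update a belief over goals from how well the user's input points at each goal."""
    new_beliefs = []
    for goal, b in zip(goals, beliefs):
        direction = goal - position
        direction = direction / (np.linalg.norm(direction) + 1e-9)
        alignment = direction @ user_velocity          # larger if the input points at this goal
        new_beliefs.append(b * np.exp(alignment))
    new_beliefs = np.array(new_beliefs)
    return new_beliefs / new_beliefs.sum()

def shared_command(position, user_velocity, beliefs):
    """Blend the user's command with an autonomous command toward the most likely goal."""
    beliefs = infer_goal(position, user_velocity, beliefs)
    best_goal = goals[np.argmax(beliefs)]
    robot_velocity = best_goal - position
    robot_velocity /= (np.linalg.norm(robot_velocity) + 1e-9)
    alpha = beliefs.max()                              # arbitrate by confidence in the inference
    return alpha * robot_velocity + (1 - alpha) * user_velocity, beliefs

position = np.array([0.0, 0.0])
beliefs = np.ones(len(goals)) / len(goals)
command, beliefs = shared_command(position, np.array([0.8, 0.1]), beliefs)
print(command, beliefs)
```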

10:00-10:30AM: Michael Gleicher / Daniel Rakita

University of Wisconsin

Title: Robust Real-time Human-to-Robot Motion Remapping and Shared-Control for Effective Telemanipulation

In this talk, I present shared-control methods that afford effective mapping of human-arm motion to robot-arm motion in real time. We posit that enabling users to work in the "natural" space of their own arms allows them to draw on their inherent kinesthetic sense and task-performance abilities when controlling a robot. Because a direct mapping between human motion and robot motion is often infeasible due to differing geometries, scales, joint velocity limits, joint position limits, numbers of degrees of freedom, etc., we instead use shared control to treat the human motion input as a guideline, while allowing the robot to subtly relax certain objectives on-the-fly in favor of maintaining motion and task constraints. I present numerous instantiations of this shared-control paradigm, such as a dynamic camera method that continuously optimizes a viewpoint for a remote user, and a bimanual shared-control method inspired by how people naturally perform bimanual manipulations. I highlight the benefits and challenges of incorporating machine learning into these real-time shared-control policies and present general principles and lessons learned from our findings.
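
As a rough sketch of the "human motion as a guideline" idea, the toy example below retargets a streamed hand position onto a 2-link planar arm by softly trading off end-effector matching against joint-motion smoothness, rather than enforcing an exact inverse-kinematics solution. The arm model, weights, and objective terms are illustrative assumptions, not the speakers' actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Relaxed motion retargeting sketch: follow the operator's hand motion as a
# guideline, but trade it off against smooth joint motion and joint limits.

LINKS = np.array([0.4, 0.3])                   # link lengths of a planar 2-DoF arm (assumed)
LIMITS = [(-2.5, 2.5), (-2.5, 2.5)]            # joint position limits (assumed)

def forward_kinematics(q):
    x = LINKS[0] * np.cos(q[0]) + LINKS[1] * np.cos(q[0] + q[1])
    y = LINKS[0] * np.sin(q[0]) + LINKS[1] * np.sin(q[0] + q[1])
    return np.array([x, y])

def retarget(q_prev, target_xy, w_match=1.0, w_smooth=0.2):
    """One control cycle: match the operator-specified target, but only softly."""
    def cost(q):
        match = np.sum((forward_kinematics(q) - target_xy) ** 2)   # follow the human input
        smooth = np.sum((q - q_prev) ** 2)                         # avoid joint-space jumps
        return w_match * match + w_smooth * smooth
    result = minimize(cost, q_prev, bounds=LIMITS)
    return result.x

q = np.array([0.3, 0.5])
for target in [np.array([0.5, 0.3]), np.array([0.52, 0.33])]:      # streamed hand positions
    q = retarget(q, target)
    print(q, forward_kinematics(q))
```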

10:30-10:45AM: Coffee Break

10:45-11:15AM: Brandon Northcutt

Toyota Research Institute

Title: Toyota's Guardian Approach to Automated Driving

11:15-11:45AM: Panel Discussion - Sharing Control

11:45-12:15PM: Poster Session 1

12:15-01:00PM: Lunch

01:00-01:30PM: Contributed Talks

1. Discrete N-dimensional Entropy of Behavior: DNDEB | Michael Young, Mahdieh Javaremi, Brenna Argall
2. Towards Integrated Joint Action Inspired Prediction Models | Christopher Fourie, Przemyslaw Lasota, Julie Shah
3. Motion Prediction with Recurrent Neural Network Dynamical Models and Trajectory Optimization | Phillipp Kratzer, Marc Toussaint, Jim Mainprice
4. Towards an Interactive Docent: Estimating Museum Visitors’ Comfort Level with Art | Ruikon Luo, Sabrina Benge, Natalie Vasher, Grace VanderVliet, Maani Ghaffari, Jessie Yang
5. Learning Arbitration for Shared Autonomy by Hindsight Data Aggregation | Yoojin Oh, Marc Toussaint, Jim Mainprice
6. Fluent Coordination in Proximate Human Robot Teaming | Sachiko Matsumoto, Laurel D. Riek
7. Automation and Information Maximization for Biomechanics-based Diagnostics and Rehabilitation | Rebecca Abbott, Todd Murphey
8. Learning to Understand Non-Categorical Physical Language for Human Robot Interactions | Luke Richards, Cynthia Matuszek

Session 2: Inferring Intent

01:30-02:00PM: Aude Billard / Mahdi Khoramshahi

EPFL

Title: From human-intention recognition to compliant control

The human ability to coordinate one’s actions with other individuals to perform a task together is fascinating. For example, we coordinate our actions with others when we carry a heavy object or assemble a piece of furniture. Capabilities such as (1) force/compliance adaptation, (2) intention recognition, and (3) action/motion prediction enable us to assist others and fulfill the task. For instance, by adapting our compliance, we not only reject undesirable perturbations that undermine the task but also incorporate others’ motions into the interaction. Complying with partners’ motions allows us to recognize their intentions and consequently predict their actions. With the growth of factories in which humans and robots work side by side, designing controllers and algorithms with such capacities is a crucial step toward assistive robotics. The challenge, however, is to attain a unified control strategy with predictive and adaptive capacities at the task, motion, and force levels that ensures a stable and safe interaction. In this talk, we present a state-dependent, dynamical-system-based approach for prediction and control in physical human-robot interaction.
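
To make the dynamical-system (DS) idea concrete, the minimal sketch below generates motion from a stable state-dependent DS whose attractor is adapted by the measured human interaction force, so the robot complies with the partner while still converging to its task. The linear DS, gains, and adaptation rule are illustrative assumptions, not the presenters' actual controller.

```python
import numpy as np

# Minimal DS-based motion generator with force-driven attractor adaptation.

A = -2.0 * np.eye(2)          # stable linear DS: x_dot = A (x - x_target)
adapt_gain = 0.05             # how strongly human force shifts the attractor (assumed)
dt = 0.01

def ds_step(x, x_target, human_force):
    # Comply with the human: move the attractor in the direction of the applied force.
    x_target = x_target + adapt_gain * human_force * dt
    x_dot = A @ (x - x_target)                      # state-dependent velocity command
    return x + x_dot * dt, x_target

x = np.array([0.0, 0.0])
x_target = np.array([1.0, 0.0])
for t in range(500):
    # Pretend the human pushes sideways for the first two seconds.
    force = np.array([0.0, 5.0]) if t < 200 else np.zeros(2)
    x, x_target = ds_step(x, x_target, force)
print(x, x_target)
```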

02:00-02:30PM: Anca Dragan

UC Berkeley

Title: Intent inference with more flexible assumptions

02:30-02:45PM: Coffee Break

02:45-03:15PM: Agnieszka Wykowska

Istituto Italiano di Tecnologia

Title: Intentional stance for social attunement in HRI

In daily life, we need to be able to efficiently navigate our social environment. Our brain has developed a plethora of mechanisms that allow smooth social interactions with others, enable understanding of others’ behaviors, and support prediction of what others are going to do next. At the dawn of a new era, in which robots might soon be among us in our homes and offices, one needs to ask whether (or when) our brain uses similar mechanisms towards robots. In our research, we examine what factors in human-robot interaction lead to the activation of mechanisms of social cognition and to the attribution of intentionality to interaction partners. We use methods of cognitive neuroscience and experimental psychology in naturalistic protocols in which humans interact with the humanoid robot iCub. Here, I will present the results of several experiments in which we examined the impact of various parameters of robot social behavior on the mechanisms of social cognition. We examined whether mutual gaze, gaze-contingent robot behavior, or the human-likeness of movements influence social attunement. Our results show an interesting interaction between the more “social” aspects of robot behavior and fundamental processes of human cognition. The results will be discussed in the context of several general questions that need to be addressed, such as the societal impact of robots towards whom we attune socially and the clinical applications of social robots.

03:15-03:45PM: Heni Ben Amor / Joe Campbell

Arizona State University

Title: Learning Interaction Primitives for Human-Robot Collaboration and Symbiosis

In this talk, I will present a methodology for learning physical human-robot interaction from demonstrations. The result of this learning process is a compact representation, called an "Interaction Primitive", which models the spatio-temporal relationship between multiple agents. Interaction Primitives can be used in human-robot collaboration and shared-control tasks for both action recognition and action generation. Most importantly, they generate probabilistic beliefs over key information that is needed for safe and fast-paced physical interaction. I will present extensions of this approach that address multimodal datasets and complex, non-linear inference schemes. Finally, I will discuss a number of real-world applications, including intelligent prosthetics, collaborative robot manipulation, and throwing-and-catching games.
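
As a toy illustration of the probabilistic inference step behind this kind of representation, the sketch below maintains a joint Gaussian over human and robot trajectory parameters learned from demonstrations, then conditions on an observed human part to predict the robot's part. The synthetic data, dimensions, and observation noise are assumptions for illustration; this is not the authors' code.

```python
import numpy as np

# Condition a joint Gaussian over (human, robot) parameters on a human observation.

rng = np.random.default_rng(0)

# Fake "demonstrations": 3 human parameters and 3 robot parameters per interaction,
# correlated so that observing the human informs the robot.
human = rng.normal(size=(50, 3))
robot = 0.8 * human + 0.1 * rng.normal(size=(50, 3))
demos = np.hstack([human, robot])

mu = demos.mean(axis=0)
cov = np.cov(demos, rowvar=False)

def condition(mu, cov, observed_human, obs_noise=0.01):
    """Gaussian conditioning: p(robot | human) from the joint p(human, robot)."""
    n = observed_human.size
    mu_h, mu_r = mu[:n], mu[n:]
    S_hh = cov[:n, :n] + obs_noise * np.eye(n)
    S_rh = cov[n:, :n]
    gain = S_rh @ np.linalg.inv(S_hh)               # Kalman-style gain
    mu_r_post = mu_r + gain @ (observed_human - mu_h)
    cov_r_post = cov[n:, n:] - gain @ S_rh.T
    return mu_r_post, cov_r_post

predicted_mean, predicted_cov = condition(mu, cov, observed_human=np.array([0.5, -0.2, 0.1]))
print(predicted_mean)
```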

03:45-04:15PM: Brenna Argall

Northwestern University and the Shirley Ryan AbilityLab

Title: Alternatives and Extensions to Intent Inference

04:15-04:45PM: Panel Discussion - Inferring Intent

04:45-05:00PM: Closing Discussion and Wrap-Up

05:00-05:30PM: Poster Session 2