Title: Haptic communication between humans and with robots
Robotic systems are increasingly used in mechanical interaction with humans, but these contact robots have so far made little use of the opportunities offered by interactive control. We have recently found that mechanically connected humans benefit from the interaction force: each partner inconspicuously identifies the other's control and uses it to improve their own performance. This presentation will cover these results on human-human sensorimotor interaction and their computational modelling. It will then derive a robotic translation of the underlying control principles, enabling sensory augmentation and optimally shared effort between interacting humans and/or robots through differential game theory.
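To make the differential-game idea concrete, the sketch below computes Nash feedback gains for a minimal two-player linear-quadratic game in which two agents jointly stabilise a shared scalar state. All dynamics and cost parameters here are hypothetical illustrations, not the model used in the talk:

    import math

    # Two agents act on a shared scalar state: x_dot = a*x + b1*u1 + b2*u2.
    # Agent i minimises the quadratic cost  integral(q_i*x^2 + r_i*u_i^2) dt.
    a, b1, b2 = 0.5, 1.0, 1.0    # hypothetical open-loop dynamics
    q1, r1 = 4.0, 1.0            # agent 1: cares strongly about the task
    q2, r2 = 1.0, 1.0            # agent 2: contributes less effort

    def stabilising_root(a_eff, b, q, r):
        """Positive root of the scalar Riccati equation
        (b^2/r)*P^2 - 2*a_eff*P - q = 0 for one agent."""
        c = b * b / r
        return (a_eff + math.sqrt(a_eff * a_eff + c * q)) / c

    # Nash equilibrium: each agent's Riccati equation depends on the other
    # agent's feedback gain, so iterate the coupled pair to a fixed point.
    P1 = P2 = 0.0
    for _ in range(100):
        P1 = stabilising_root(a - (b2 * b2 / r2) * P2, b1, q1, r1)
        P2 = stabilising_root(a - (b1 * b1 / r1) * P1, b2, q2, r2)

    k1, k2 = b1 * P1 / r1, b2 * P2 / r2  # state feedback u_i = -k_i * x
    print(f"Nash gains: k1={k1:.3f}, k2={k2:.3f}; "
          f"closed loop a_cl={a - b1*k1 - b2*k2:.3f}")

The asymmetric cost weights play the role of shared effort: the agent that weights the task more highly ends up supplying the larger feedback gain.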
Title: Teleoperation through traded and shared control
Robots are powerful and not subject to fatigue, but they lack the general intelligence of humans. Developing a generic framework that combines both strengths is a long-standing challenge in robotics. In this talk I will review lessons learned from the DARPA Robotics Challenge (DRC), where the combination of high-degree-of-freedom robots and low-bandwidth communication led to the development of traded control architectures: the operator specifies tasks at a mid-level of abstraction, interleaved with AI-driven execution. I will draw insights from my experience in this competition and present my recent work on shared control, where the robot infers the human's intent in order to support the user during AI-driven execution. Finally, I will introduce the challenges of operating in human-populated environments and propose a way to integrate predictive models of human behavior to support safer and more effective robot behaviors.
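As an illustration of the traded-control pattern described above, the following sketch alternates between operator-specified mid-level subtasks and autonomous execution, trading control back to the operator whenever execution fails. The task names and both functions are hypothetical placeholders, not the DRC software:

    # Traded control: the operator queues mid-level subtasks; the robot
    # executes each one autonomously and trades control back on failure.

    def execute_autonomously(subtask):
        """Hypothetical placeholder for AI-driven execution.
        Returns True on success, False if the planner/controller fails."""
        print(f"executing: {subtask}")
        return subtask != "grasp door handle"   # simulate one failure

    def ask_operator_to_resolve(subtask):
        """Placeholder for teleoperated recovery at a lower level of
        abstraction (e.g. direct end-effector control)."""
        print(f"operator takes over for: {subtask}")

    mission = ["walk to door", "grasp door handle",
               "open door", "walk through"]

    for subtask in mission:                   # operator-specified sequence
        if not execute_autonomously(subtask):
            ask_operator_to_resolve(subtask)  # control traded back to human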
Title: Robust Real-time Human-to-Robot Motion Remapping and Shared-Control for Effective Telemanipulation
In this talk, I present shared-control methods that afford effective real-time mapping of human-arm motion to robot-arm motion. We posit that enabling users to work in the "natural" space of their own arms will allow them to draw on their inherent kinesthetic sense and task-performance abilities when controlling a robot. Because a direct mapping between human motion and robot motion is often infeasible due to differing geometries, scales, joint velocity limits, joint position limits, numbers of degrees of freedom, etc., we instead use shared control to treat the human motion input as a guideline, while allowing the robot to subtly relax certain objectives on the fly in favor of maintaining motion and task constraints. I present numerous instantiations of this shared-control paradigm, such as a dynamic camera method that continuously optimizes a viewpoint for a remote user, and a bimanual shared-control method inspired by how people naturally perform bimanual manipulation. I highlight the benefits and challenges of incorporating machine learning into these real-time shared-control policies and present general principles and results drawn from our findings.
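The objective-relaxation idea can be illustrated with a toy weighted inverse-kinematics problem: the solver tracks the human's end-effector target when it can, but softly trades tracking accuracy against smoothness and joint limits when the target is not exactly reachable. The arm model, weights, and use of scipy here are illustrative assumptions, not the speaker's implementation:

    import numpy as np
    from scipy.optimize import minimize

    LINKS = np.array([0.4, 0.3, 0.2])    # hypothetical 3-link planar arm
    LIMITS = [(-2.8, 2.8)] * 3           # joint position limits (rad)
    W_TRACK, W_SMOOTH = 10.0, 1.0        # relative objective weights

    def fk(q):
        """End-effector position of the planar arm (forward kinematics)."""
        angles = np.cumsum(q)
        return np.array([np.sum(LINKS * np.cos(angles)),
                         np.sum(LINKS * np.sin(angles))])

    def shared_control_ik(target, q_prev):
        """Weighted IK step: track the human-specified target, but relax
        tracking in favor of smooth, in-limit joint motion when needed."""
        def cost(q):
            track = np.sum((fk(q) - target) ** 2)    # follow human input
            smooth = np.sum((q - q_prev) ** 2)       # avoid jerky jumps
            return W_TRACK * track + W_SMOOTH * smooth
        res = minimize(cost, q_prev, bounds=LIMITS)  # limits as hard bounds
        return res.x

    q = np.array([0.3, 0.4, 0.2])
    for target in [np.array([0.6, 0.4]),
                   np.array([1.5, 1.5])]:            # 2nd is out of reach
        q = shared_control_ik(target, q)
        print(f"target {target} -> ee {fk(q).round(3)}, q {q.round(3)}")

For the unreachable second target, the weighted formulation degrades gracefully: the arm stretches toward the goal without violating its joint limits or jumping discontinuously.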
Title: Toyota's Guardian Approach to Automated Driving
Title: From human-intention recognition to compliant control
The human ability to coordinate one's actions with those of other individuals to perform a task together is fascinating. For example, we coordinate our actions with others when we carry a heavy object or assemble a piece of furniture. Capabilities such as (1) force/compliance adaptation, (2) intention recognition, and (3) action/motion prediction enable us to assist others and fulfill the task. For instance, by adapting our compliance we not only reject undesirable perturbations that undermine the task but also incorporate others' motions into the interaction. Complying with a partner's motion allows us to recognize their intention and consequently predict their actions. With the growth of factories in which humans and robots work side by side, designing controllers and algorithms with such capacities is a crucial step toward assistive robotics. The challenge, however, is to attain a unified control strategy with predictive and adaptive capacities at the task, motion, and force levels that ensures a stable and safe interaction. In this talk, we present a state-dependent dynamical-system-based approach for prediction and control in physical human-robot interaction.
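A minimal sketch of the dynamical-system view: motion is generated by a state-dependent velocity field that is globally attracted to a goal, and compliance is obtained by letting measured interaction forces deform the commanded velocity. The specific field, gains, and admittance term below are illustrative assumptions, not the controller presented in the talk:

    import numpy as np

    GOAL = np.array([0.5, 0.0])    # task target (hypothetical)
    A = -2.0 * np.eye(2)           # negative definite -> globally stable DS
    ADMITTANCE = 0.8               # compliance gain: force -> velocity
    DT = 0.01

    def ds_velocity(x):
        """State-dependent dynamical system: x_dot = A (x - goal).
        The plan is a velocity field rather than a time-indexed trajectory,
        so perturbations simply re-enter the field at the new state."""
        return A @ (x - GOAL)

    def human_force(t):
        """Hypothetical measured interaction force from the partner."""
        return np.array([0.0, 1.0]) if 0.3 < t < 0.6 else np.zeros(2)

    x = np.array([-0.5, 0.2])
    for step in range(200):
        t = step * DT
        # Compliant command: nominal DS flow deformed by the sensed force,
        # so the robot yields to the partner instead of fighting them.
        x_dot = ds_velocity(x) + ADMITTANCE * human_force(t)
        x = x + DT * x_dot
    print(f"final state {x.round(3)} (goal {GOAL})")

Because the attractor is defined over state rather than time, the partner's intervention deflects the motion temporarily and the system then reconverges to the goal without replanning.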
Title: Intent inference with more flexible assumptions
Title: Intentional stance for social attunement in HRI
In daily life, we need to navigate our social environment efficiently. Our brain has developed a plethora of mechanisms that allow smooth social interactions with others, enabling us to understand others' behaviors and predict what they are going to do next. At the dawn of a new era, in which robots may soon be among us in our homes and offices, one needs to ask whether (or when) our brain uses similar mechanisms towards robots. In our research, we examine which factors in human-robot interaction lead to the activation of mechanisms of social cognition and to the attribution of intentionality to interaction partners. We use methods of cognitive neuroscience and experimental psychology in naturalistic protocols in which humans interact with the humanoid robot iCub. Here, I will present the results of several experiments in which we examined the impact of various parameters of robot social behavior on the mechanisms of social cognition: whether mutual gaze, gaze-contingent robot behavior, or the human-likeness of movements influence social attunement. Our results show an interesting interaction between the more "social" aspects of robot behavior and fundamental processes of human cognition. The results will be discussed in the context of several general questions that need to be addressed, such as the societal impact of robots towards whom we attune socially and the clinical applications of social robots.
Title: Learning Interaction Primitives for Human-Robot Collaboration and Symbiosis
In this talk, I will present a methodology for learning physical human-robot interaction from demonstrations. The result of this learning process is a compact representation, called an "Interaction Primitive", which models the spatio-temporal relationship between multiple agents. Interaction Primitives can be used in human-robot collaboration and shared-control tasks for both action recognition and action generation. Most importantly, they generate probabilistic beliefs over key information that is needed for safe and fast-paced physical interaction. I will present extensions of this approach that address multimodal datasets and complex, non-linear inference schemes. Finally, I will discuss a number of real-world applications, including intelligent prosthetics, collaborative robot manipulation, and throwing-and-catching games.
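At their core, basic Interaction Primitives fit a joint probability distribution over human and robot trajectory parameters from demonstrations, then condition on a partial human observation to obtain a belief over the robot's response. The sketch below shows this conditioning step for a joint Gaussian over hypothetical basis-function weights; real Interaction Primitives add phase estimation and richer inference schemes, which are omitted here:

    import numpy as np

    rng = np.random.default_rng(0)

    # Demonstrations: each row stacks human weights w_h and robot weights
    # w_r (here 3 + 3 hypothetical basis-function weights per demo).
    demos_h = rng.normal(0.0, 1.0, size=(50, 3))
    demos_r = 2.0 * demos_h + rng.normal(0.0, 0.1, size=(50, 3))
    demos = np.hstack([demos_h, demos_r])

    mu = demos.mean(axis=0)                  # joint mean over [w_h, w_r]
    sigma = np.cov(demos, rowvar=False)      # joint covariance
    h, r = slice(0, 3), slice(3, 6)          # index blocks

    def infer_robot_weights(w_h_obs):
        """Condition the joint Gaussian on observed human weights:
        p(w_r | w_h) = N(mu_r + S_rh S_hh^-1 (w_h - mu_h),
                         S_rr - S_rh S_hh^-1 S_hr)."""
        gain = sigma[r, h] @ np.linalg.inv(sigma[h, h])
        mean = mu[r] + gain @ (w_h_obs - mu[h])
        cov = sigma[r, r] - gain @ sigma[h, r]
        return mean, cov                     # belief over the robot action

    w_h_obs = np.array([0.5, -0.2, 1.0])     # weights fit to observed motion
    mean, cov = infer_robot_weights(w_h_obs)
    print("expected robot weights:", mean.round(2))
    print("uncertainty (diag):", np.sqrt(np.diag(cov)).round(3))

The conditional covariance is what makes the belief useful for safety: it quantifies how confident the robot should be in its predicted response given only a partial observation of the human.

Title: Alternatives and Extensions to Intent Inference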