Towards Collaborative Partners: Design, Shared Control, and Robot Learning for Physical Human-Robot Interaction
ICRA 2024 Workshop
13 May 2024, 9:00 AM - 5:00 PM
Conference Center, Room 411-412
Online Participation: https://us06web.zoom.us/j/81089906127?pwd=83hivVpSnmnFSVp71pwbEHTuNzj6qK.1
description
As robots increasingly become part of our daily lives, the key challenge is to ensure that they are safe, responsive, and constantly improving while physically interacting with us over both the short and the long term. To address this, our workshop discusses three main questions. First, how can we harness innovations in robot design to make robots inherently safe for physical human-robot interaction (pHRI)? This involves exploring technologies like soft robotics and haptics to ensure safety and comfort in close human contact. Second, how can robots react instantly and appropriately during real-time human interactions? This can be achieved by integrating shared control strategies that let robots adjust their actions for seamless collaboration. Third, how can robots learn from every interaction to become better partners and earn our trust? This encompasses robot learning algorithms and considers how human modeling and feedback can improve these robot partners over time.
The main objectives of this workshop are:
1) To investigate how each of these areas contributes to the improvement of pHRI.
2) To discuss strategies that effectively combine these approaches for holistic progress in the field.
speakers
Abstract: Hugs are one of the first forms of contact and affection humans experience. Receiving a hug is one of the best ways to feel socially supported, and the lack of social touch can have detrimental effects on an individual's well-being. However, hugs are complex affective interactions that are easy to get wrong because they need to adapt to the approach, height, body shape, and preferences of the hugging partner, and they often include intra-hug gestures like squeezes. We created HuggieBot, a hugging robot, to better understand the intricacies of close social-physical human-robot interaction and as a stepping stone to providing emotional support. Through the iterative design process of creating HuggieBot, we developed 11 tenets of robotic hugging, which ensure a robot can provide its partner with a high-quality embrace. These guidelines can be abstracted to other pHRI applications to enable new possibilities.
Abstract: Haptic devices typically rely on rigid actuators and bulky power supply systems, limiting wearability. Soft materials improve comfort, but careful distribution of stiffness is required to ground actuation forces and enable load transfer to the skin. We present an approach in which soft, wearable, knit textiles with embedded pneumatic actuators enable programmable haptic display. By integrating pneumatic actuators within high- and low-stiffness machine-knit layers, each actuator can transmit 40 N of force with a bandwidth of 14.5 Hz. We demonstrate the concept with an adjustable sleeve for the forearm, coupled to an untethered pneumatic control system, that conveys a diverse array of social touch signals. We assessed the sleeve's performance for discriminative and affective touch in a three-part user study and compared our results to those of prior electromagnetically actuated approaches. Our sleeve improves touch localization compared to vibrotactile stimulation and communicates social touch cues with fewer actuators than pneumatic textiles that do not exploit distributed stiffness. The sleeve achieved recognition of social touch gestures similar to that of a voice-coil array, in a more portable and comfortable form factor. We propose that this approach will enable more capable, information-rich physical human-robot interaction.
Permanent Researcher, National Institute of Advanced Industrial Science and Technology (AIST)
Abstract: Current control strategies for human-robot interaction often lack consideration of sustained physical contact, where not only is the duration of contact long, but the interaction forces and their locations can vary substantially throughout the interaction. This presentation highlights the inherent challenges of providing safety guarantees in this setting and the solutions we developed to address them. Additionally, we will raise questions on how to produce communication through interaction forces. To explore this question, we propose to revisit minimum jerk trajectories, known for their motion legibility, and investigate their applicability to sustained contact situations.
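For context, the minimum jerk trajectory mentioned above has a well-known closed-form solution for point-to-point motion (Flash and Hogan, 1985). The sketch below evaluates it; the duration, endpoints, and sampling rate are illustrative assumptions, not values from the talk.

```python
import numpy as np

def minimum_jerk(x0, xf, T, t):
    """Closed-form minimum jerk position profile from x0 to xf over duration T.

    Normalized time tau = t/T is mapped through the quintic
    10*tau^3 - 15*tau^4 + 6*tau^5, which gives zero velocity and
    acceleration at both endpoints (Flash and Hogan, 1985).
    """
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# Example: a 2 s reach from 0.0 m to 0.3 m, sampled at 100 Hz (illustrative).
t = np.linspace(0.0, 2.0, 201)
x = minimum_jerk(0.0, 0.3, 2.0, t)
```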
Abstract: Haptics offers a crucial avenue for human communication during intricate physical tasks. However, despite its importance in physical interactions, the touch modality is under-represented when developing intelligent systems. In this talk, I will discuss our research on haptic shared control and illustrate examples where haptic data inform proactive robot partners that can take on dynamic levels of control, i.e., roles, during collaboration. The talk will demonstrate how physical conflicts and failures can be leveraged in human-robot collaborative work, especially in cases of close physical contact.
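As background, haptic shared control is often formalized as blending human and robot commands with a time-varying arbitration weight. The snippet below is a minimal sketch of that generic pattern, not the speaker's method; the tanh weighting rule and the force threshold are illustrative assumptions.

```python
import numpy as np

def blend_commands(u_human, u_robot, alpha):
    """Blend human and robot commands; alpha = 1 gives full human authority."""
    return alpha * u_human + (1.0 - alpha) * u_robot

def arbitration_weight(interaction_force, threshold=5.0):
    """Shift authority toward the human as the measured interaction force grows.

    A large force against the robot's plan is read as a physical conflict,
    so the robot yields. The saturating tanh map and the 5 N threshold are
    illustrative assumptions.
    """
    return float(np.tanh(np.linalg.norm(interaction_force) / threshold))

# Example: 1-D velocity commands with a 12 N measured contact force.
alpha = arbitration_weight(np.array([12.0, 0.0, 0.0]))
u = blend_commands(0.2, 0.05, alpha)
```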
Abstract: The talk will present several control methods and interfaces for human-robot co-manipulation that enable robot adaptation to human partners. The adaptation process incorporates shared-control-based machine learning, human modelling, and real-time measurements to track and improve various metrics, such as task performance, human muscle fatigue, joint torques, and arm manipulability. The first part of the talk will focus on applying co-manipulation to ergonomic physical human-robot collaboration in various practical tasks (e.g., collaborative sawing, polishing, valve turning, assembly, and exoskeleton assistance). The second part will examine teleoperation, where co-manipulation pertains to a remote robot commanded by a human operator. Here, we will focus on the design of various teleimpedance interfaces and control methods for applications ranging from manufacturing to remote elderly care.
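As a reference point for the controllers described above, a Cartesian impedance law commands forces from position and velocity errors, and teleimpedance-style interfaces modulate the stiffness online (e.g., from operator muscle activity). The sketch below illustrates this generic idea only; the gains, the linear EMG-to-stiffness map, and the function names are illustrative assumptions rather than the speaker's implementation.

```python
import numpy as np

def impedance_force(x, xd, v, vd, K, D):
    """Cartesian impedance law: F = K (xd - x) + D (vd - v)."""
    return K @ (xd - x) + D @ (vd - v)

def stiffness_from_emg(emg_level, k_min=100.0, k_max=800.0):
    """Teleimpedance-style stiffness: interpolate between soft and stiff
    as a normalized muscle-activity estimate goes from 0 to 1.
    The bounds and the linear map are illustrative assumptions.
    """
    k = k_min + (k_max - k_min) * float(np.clip(emg_level, 0.0, 1.0))
    return k * np.eye(3)

# Example: moderately tense operator, 10 cm positional error along x.
K = stiffness_from_emg(0.4)
D = 2.0 * np.sqrt(K)  # near-critical damping for a unit mass (illustrative)
F = impedance_force(np.zeros(3), np.array([0.1, 0.0, 0.0]),
                    np.zeros(3), np.zeros(3), K, D)
```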
Abstract: TBD
Abstract: Recent advances in diffusion-based planning models have greatly enhanced robot behavior generation for a variety of tasks. Despite these improvements, adapting these models to new objectives or deployment constraints remains a challenge, often resulting in unsafe behaviors. In this talk, we present a new diffusion-based framework designed to address both safety and temporal constraints specified in linear temporal logic over finite traces (LTLf). Our approach modifies the reverse diffusion process through targeted guidance, allowing the generation of trajectories that meet LTLf specifications without the need for expert demonstrations for each new instruction. We will also discuss the current challenges in creating safe and compliant trajectories and explore opportunities for developing more reliable and trustworthy robot systems.
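For readers unfamiliar with guided diffusion, a common generic pattern is to shift each reverse (denoising) step along the gradient of a differentiable constraint cost. The sketch below shows that pattern only; the callables `denoise_step` and `constraint_grad`, and the guidance scale, are hypothetical placeholders, not the framework presented in the talk.

```python
import numpy as np

def guided_reverse_step(x_t, t, denoise_step, constraint_grad, scale=0.1):
    """One step of a guidance-modified reverse diffusion process.

    denoise_step(x_t, t) -> unguided model estimate of x_{t-1};
    constraint_grad(x)   -> gradient of a differentiable cost that is low
                            when x satisfies the (e.g., LTLf-derived)
                            constraints. Both callables are placeholders.
    """
    x_prev = denoise_step(x_t, t)
    # Nudge the sample toward lower constraint cost at every step.
    return x_prev - scale * constraint_grad(x_prev)

# Usage sketch (assuming a trained model and a constraint cost exist):
# x = np.random.randn(horizon, state_dim)   # start from Gaussian noise
# for t in reversed(range(num_steps)):
#     x = guided_reverse_step(x, t, denoise_step, constraint_grad)
```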
contributions
Accepted Posters:
Poster Session 1
Yuta Yamabata, Ayato Kanada, Yasutaka Nakashima, and Motoji Yamamoto: Mechanical Model for Storage in an Extendable Robotic Arm
Ali Ayub, Joshua Scripcaru, Kerstin Dautenhahn, and Chrystopher L. Nehaniv: Human Intent Prediction for Collaborative Carrying
Cedric Goubard and Yiannis Demiris: Cooking with Confidence: Towards Trustworthy Robotic Assistants Through Proficiency Self-Assessment
Cunjun Yu, Zhimin Hou, Yiqing Xu, and David Hsu: Play-to-Coach: Coaching Humans to Play Airhockey with Robots
Nam Phuong Dam and Van Anh Ho: Tactile Soft Robotic Skin with Changeable Structure for Full Body Interactive Soft Robot
Sangbeom Park, Taerim Yoon, Joonhyung Lee, Sunghyun Park, and Sungjoon Choi: Quality-Diversity based Semi-Autonomous Teleoperation using Reinforcement Learning
Shivam Chaubey, Francesco Verdoja, and Ville Kyrki: Jointly Learning Cost and Constraints from Demonstrations for Safe Trajectory Generation
Zheng Shen, Matteo Saveriano, Fares J. Abu-Dakka, and Sami Haddadin: Safe Execution of Learned Orientation Skills with Conic Control Barrier Functions
Chengzong Zhao, Aayush Deshpande, David Liu, Colin Bellinger, and Pengcheng Xi: An Assistive Robotic Framework for Empowering Elderly Independence
Luigi Berducci, Shuo Yang, Mirco Giacobbe, Rahul Mangharam, and Radu Grosu: Safe Learning under Assumptions in Human-Robot Systems
Robin Jeanne Kirschner, Yangcan Zhou, Jinyu Yang, Edonis Elshani, Carina M. Micheler, Tobias Leibbrand, Nader Rajaei, Rainer Burgkart, and Sami Haddadin: Towards A Full Injury Understanding in pHRI: Test Setups and Procedures to Evaluate Human Injury Severity in Collision Situations
Guillaume Lorthioir, Mehdi Benallegue, Rafael Limon Cisneros, and Ixchel Ramirez: Enhancing Teleoperation in Dynamic Environments: A Novel Shared Autonomy Framework Leveraging Multimodal Language Models
Poster Session 2
Byeongho Lee, Yonghyeon Lee, Junsu Ha, and Frank C. Park: Behavior-Controllable Stable Dynamics Models in Riemannian Configuration Manifolds
Yifei Simon Shao, Tianyu Li, Shafagh Keyvanian, Pratik Chaudhari, Vijay Kumar, and Nadia Figueroa: A Dynamical System Approach to Intent Estimation and Co-Manipulation
Yi Wang, Ko Ayusawa, Eiichi Yoshida, and Gentiane Venture: Emotional Motion Planning with Learned Constraints and Weighted Cost
Chanin Eom, Dongsu Lee, and Minhae Kwon: Efficient Online Reinforcement Learning with Selective Imitation of Prior Datasets
Yunyue Wei, Chenhui Zuo, and Yanan Sui: High-Dimensional Safe Bayesian Optimization for Human-Machine Interaction
Tasbolat Taunyazov, Kelvin Lin, and Harold Soh: Towards Soft Compliant Cartesian Impedance Controller with Tactile Sensing
Mathieu Celerier, Mehdi Benallegue, and Gentiane Venture: Turing test for pHRI: Evaluation of General Human Robot Interaction
Riya Arora, Niveditha Narendranath, Sandeep S. Zachariah, Aman Tambi, Souvik Chakraborty, and Rohan Paul: Generalized Grounded Temporal Reasoning with Foundation Models for Language-guided Robot Manipulation
Christian Mele, Jhilik Bose, James Tung, and Katja Mombaur: Biofidelic Knee Mannequin Design for pHRI Evaluation of Lower-Limb Exoskeletons
Peter S. Lee and Katja Mombaur: Human-Centred Investigations toward Comprehending Human Adaptation Behaviour to Active Lower Limb Exoskeleton Use
Hugo T. M. Kussaba, Rafael I. Cabral Muchacho, Riddhiman Laha, Luis Figueredo, Fares J. Abu-Dakka, Aude Billard, and Sami Haddadin: Enhancing non-expert demonstrations with the geodesic flow
Luca Morando and Giuseppe Loianno: Spatial Assisted Human-Drone Collaborative Navigation and Interaction through Immersive Mixed Reality
important dates
Submissions open: 1 February 2024
Final Submission Deadline: 15 March 2024 (23:59 PST)
Notification of Acceptance: 5 April 2024
Workshop: 13 May 2024
schedule
9:00 AM Welcoming Remarks
9:15 AM Dana Kulić
9:45 AM Allison Okamura
10:15 AM Coffee Break: Poster Session 1
11:15 AM Mehdi Benallegue
11:45 AM Harold Soh
12:15 PM Lunch Break
1:30 PM Alexis Block
2:00 PM Round Table
3:00 PM Coffee Break: Poster Session 2
4:00 PM Luka Peternel
4:30 PM Ayse Kucukyilmaz
5:00 PM Closing Remarks
organizers
student organizers
contact
If you have any questions about this workshop, please contact:
Hisham Khalil (hisham.khalil@uwaterloo.ca)