4 Minute Overview
Human-robot handover is a fundamental yet challenging task in human-robot collaboration and HRI. Recently, remarkable progress has been made in human-to-robot handovers of unknown objects by using learning-based grasp generators. However, how to responsively generate smooth motions to take an object from a human remains an open question. In particular, previous work has not considered how to plan motions that account for human comfort as part of the handover process. Here, we propose to generate smooth motions via an efficient model-predictive control (MPC) framework that integrates perception and complex domain-specific constraints into the optimization problem. We introduce a learning-based grasp reachability model to select candidate grasps that maximize the robot's manipulability, giving it more freedom to satisfy these constraints. Finally, we integrate a neural-network force/torque classifier that detects contact events from noisy data. We compare our system with prior work on a diverse set of objects through a user study (N=4) and perform a systematic evaluation of each module. The study shows that users preferred our MPC approach over the baseline system by a large margin.
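The paper's MPC formulation is not spelled out on this page; as intuition for how perception-derived constraints can enter the optimization, here is a minimal sampling-based MPC sketch. All costs, weights, and margins (`w_goal`, `w_smooth`, `w_obs`, `d_safe`) are illustrative placeholders, not the system's actual values.

```python
import numpy as np

def trajectory_cost(traj, goal, obstacle, w_goal=1.0, w_smooth=0.1, w_obs=5.0, d_safe=0.15):
    """Toy trajectory cost: goal tracking + smoothness + an obstacle penalty.
    Weights and the safety margin are illustrative, not the paper's values."""
    goal_cost = np.linalg.norm(traj[-1] - goal)                          # reach the handover pose
    smooth_cost = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))  # prefer short, smooth paths
    d = np.linalg.norm(traj - obstacle, axis=1)                          # distance to a tracked point
    obs_cost = np.sum(np.maximum(0.0, d_safe - d))                       # hinge penalty inside margin
    return w_goal * goal_cost + w_smooth * smooth_cost + w_obs * obs_cost

def mpc_step(start, goal, obstacle, n_samples=64, horizon=10, seed=0):
    """One receding-horizon step: sample perturbed straight-line trajectories,
    keep the cheapest, and execute only its first waypoint."""
    rng = np.random.default_rng(seed)
    base = np.linspace(start, goal, horizon)
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        traj = base + rng.normal(0.0, 0.02, base.shape)
        traj[0] = start                                  # all candidates start at the current state
        c = trajectory_cost(traj, goal, obstacle)
        if c < best_cost:
            best, best_cost = traj, c
    return best[1]                                       # next waypoint to execute
```

Re-solving this at every control step is what lets the robot react when the human moves the object mid-handover.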
Contact Event Detection
To better coordinate the timing for the physical handover phase, we trained a feed-forward neural network to detect the contact event between the hand/object and the robot gripper.
We collected a physical-contact dataset mimicking the handover procedure: we moved the robot gripper to a random position in the workspace; when the robot was about to reach the target pose, we pushed or pulled on the gripper and pressed a key to label the moment of physical contact. We recorded joint positions, velocities, efforts, forces, and torques throughout.
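The network's architecture and input features are not given on this page; as a sketch only, the classifier might look like a small MLP over a short window of 6-D wrench readings. The layer sizes, window length, and threshold below are assumptions.

```python
import numpy as np

class ContactMLP:
    """Tiny feed-forward classifier: a flattened window of 6-D force/torque
    readings -> contact probability. All sizes here are illustrative."""
    def __init__(self, window=10, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        d_in = window * 6                            # 6-D wrench per timestep
        self.W1 = rng.normal(0.0, 0.1, (d_in, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, wrench_window):
        x = wrench_window.reshape(-1)                # (window, 6) -> (window*6,)
        h = np.maximum(0.0, x @ self.W1 + self.b1)   # ReLU hidden layer
        logit = h @ self.W2 + self.b2
        return 1.0 / (1.0 + np.exp(-logit[0]))      # sigmoid -> contact probability

def detect_contact(model, wrench_window, threshold=0.5):
    """Declare a contact event when the predicted probability crosses a threshold."""
    return model.forward(wrench_window) >= threshold
```

In the real system the weights would be trained on the labeled push/pull dataset described above; thresholding the output gives the discrete contact event used to time the gripper closing.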
An example of the captured force data. Gray vertical lines denote the moment when contact occurs. Blue circles illustrate the detected contact by our model.
Human Collision Avoidance
To avoid collisions between the robot and a user, we add a collision cost in our MPC that penalizes trajectories that are in collision with a user's hand (seen as red lines in the video).
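If the tracked hand is represented as line segments (the red lines in the video), one plausible form for such a cost is a hinge penalty on the distance between points on the robot and each hand segment. The margin and weight below are hypothetical, not the system's tuned values.

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Shortest distance from point p to the segment from a to b (a hand 'bone')."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab + 1e-12), 0.0, 1.0)  # clamp projection onto the segment
    return np.linalg.norm(p - (a + t * ab))

def hand_collision_cost(robot_points, hand_segments, d_safe=0.1, weight=10.0):
    """Quadratic hinge penalty on robot points that come within d_safe of any
    hand segment; zero cost outside the margin. Values are illustrative."""
    cost = 0.0
    for p in robot_points:
        for a, b in hand_segments:
            d = point_segment_dist(p, a, b)
            cost += weight * max(0.0, d_safe - d) ** 2
    return cost
```

Because the cost is zero outside the safety margin and grows smoothly inside it, the optimizer only bends trajectories away from the hand when they actually get close.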
Experiment 1: Handover Locations
We handed over three objects at three locations: left, center, and right. At each location, we handed over each object three times, using a different way of holding it each time.
Experiment 2: Reactivity to Object Orientation
We further investigate the system's reactivity by rotating the object 45 degrees about its standing axis after the robot starts moving.