DexPilot: Vision Based Teleoperation of Dexterous Robotic Hand-Arm System

Paper link: http://arxiv.org/abs/1910.03135

Ankur Handa, Karl Van Wyk, Wei Yang, Jacky Liang, Yu-Wei Chao, Qian Wan, Stan Birchfield, Nathan Ratliff, Dieter Fox

Accepted to the IEEE International Conference on Robotics and Automation (ICRA) 2020, Paris (held virtually)

Highlights of the results

A selected list of demonstrations is shown on the left. Two pilots trained themselves to perform these tasks. The amount of training required is minimal: pilots performed 3-4 consecutive warm-up trials before teleoperating the robot for each task, which suggests that new pilots can be trained quickly.

Teleoperation is driven by line of sight, and no tactile feedback is relayed to the pilot. Despite this, the pilots were able to perform a variety of tasks. Adding tactile feedback would only improve the overall process and is an important direction for future work.

Most importantly, we show that robust neural-network-based hand tracking can be achieved in the studio, enabling teleoperation over long durations.

Task: extract paper currency from a closed wallet (pilot 1 and pilot 2 alternate)

On the left, the pilots teleoperate the robot hand-arm system through a challenging multi-step, long-horizon task: pulling paper currency out of a wallet lying closed on the table. The hand can hold objects as thin as a sheet of paper between its fingers, demonstrating the robustness of the kinematic retargeting (sketched below).

DexPilot also enables two different pilots to alternate without any additional calibration or change in the setup. This feature was leveraged for this long-horizon task to minimize pilot fatigue and discomfort.
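To give a flavor of what kinematic retargeting involves, below is a minimal sketch that poses it as an optimization: find robot hand joint angles whose inter-fingertip vectors match the (scaled) inter-fingertip vectors of the tracked human hand, with a smoothness term toward the previous solution. The joint count, fingertip pairings, scale factor, and the toy forward-kinematics function are illustrative stand-ins, not the exact cost or solver used in the paper.

```python
# Minimal sketch of distance-based kinematic retargeting (illustrative only).
import numpy as np
from scipy.optimize import minimize

N_JOINTS = 16                                # e.g. a 4-finger, 16-DOF hand (assumption)
FINGERTIP_PAIRS = [(0, 1), (0, 2), (0, 3)]   # thumb-to-finger pairings (assumption)

def fk_fingertips(q):
    """Stand-in forward kinematics: joint angles -> 4 fingertip positions.
    Replace with the real robot hand model (e.g. loaded from a URDF)."""
    q = np.asarray(q).reshape(4, 4)
    tips = []
    for i, qi in enumerate(q):
        # Toy 2-link planar chain per finger, offset along x so fingers don't overlap.
        x = 0.04 * i + 0.05 * np.cos(qi[1]) + 0.04 * np.cos(qi[1] + qi[2])
        y = 0.05 * np.sin(qi[1]) + 0.04 * np.sin(qi[1] + qi[2])
        tips.append([x, y, 0.0])
    return np.array(tips)

def retarget(human_tips, q_prev, scale=1.6, smooth=1e-2):
    """Solve for joint angles whose inter-fingertip vectors match the scaled
    human inter-fingertip vectors; regularize toward the previous solution."""
    def cost(q):
        robot_tips = fk_fingertips(q)
        c = 0.0
        for a, b in FINGERTIP_PAIRS:
            target = scale * (human_tips[b] - human_tips[a])
            actual = robot_tips[b] - robot_tips[a]
            c += np.sum((actual - target) ** 2)
        return c + smooth * np.sum((q - q_prev) ** 2)

    return minimize(cost, q_prev, method="L-BFGS-B").x

# Example: retarget one frame of (fake) tracked human fingertip positions.
human_tips = np.array([[0.00, 0.00, 0.0],   # thumb
                       [0.03, 0.06, 0.0],   # index
                       [0.06, 0.06, 0.0],   # middle
                       [0.09, 0.05, 0.0]])  # ring
q = retarget(human_tips, q_prev=np.zeros(N_JOINTS))
```

In a teleoperation loop, a solve like this would run at every tracking frame, warm-started from the previous joint configuration so the commanded hand motion stays smooth.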

Below are more examples of teleoperation with the DexPilot system. As shown, the system is sufficiently dexterous to solve tasks that require prehensile and non-prehensile manipulation, precision and power grasping, in-hand manipulation, and finger gaiting. Ultimately, the system produces rich sensorimotor data that could be used in the future to learn autonomous policies for complex tasks.

Task: pick foam brick and place inside red bowl

Task: pour beads into the bowl

Task: insert concentrically smaller cups

Task: pick Pringles can and place inside red bowl

Task: open tea drawer, extract tea bag, close tea drawer

Task: slide card to edge of box, grasp and place on table 

Task: flip box by 90 degrees and place at goal location

Task: open peanut jar and place lid on table

Task: stack red block on blue block and yellow block on red block

Task: open plastic container, remove and open box

Task: pick brick, rotate it 180 degrees, and place it down

Task: insert peg into corresponding hole on the NIST task board