Using dVRK Teleoperation to Facilitate Deep Learning of Automation Tasks for an Industrial Robot
Jacky Liang, Jeffrey Mahler, Michael Laskey, Pusong Li, and Ken Goldberg
Conference on Automation Science and Engineering 2017
Finalist, Best Student Paper Award
Abstract
Deep Learning from Demonstrations (Deep LfD) is a promising approach for robots to perform bilateral automation tasks, such as those involving dynamic contact and deformation, where dynamics are difficult to model explicitly. Deep LfD methods typically require substantial datasets of either 1) videos of humans, which do not match robot kinematics and capabilities, or 2) waypoints collected with tedious move-and-record interfaces such as teaching pendants or kinesthetic teaching. We explore an alternative using the Intuitive Surgical da Vinci, in which a pair of gravity-balanced, high-precision, passive 6-DOF master arms is combined with stereo vision, allowing humans to perform precise surgical tasks with slave arms. We present DY-Teleop, an interface between the da Vinci master manipulators and an ABB YuMi industrial robot to facilitate the collection of time-synchronized images and robot states for deep learning of automation tasks involving deformation and dynamic contact. We also present YuMiPy, an open source library and ROS package for controlling an ABB YuMi over Ethernet. Experiments with scooping a ball into a cup, pipetting liquid between two containers, and untying a knot in a rope suggest that demonstrations obtained using DY-Teleop are 1.8X as effective for LfD as those obtained using kinesthetic teaching.
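As a rough illustration of the YuMiPy interface mentioned above, the sketch below commands both YuMi arms over Ethernet from Python. It assumes a YuMiRobot class with per-arm goto_state/goto_pose and gripper commands and a RigidTransform pose type from autolab_core; these names are assumptions based on typical usage of the library and should be checked against the YuMiPy repository.

    # Illustrative only: assumes a YuMiPy-style interface; class, method, and frame
    # names follow typical usage of the library and may differ from the release.
    import numpy as np
    from yumipy import YuMiRobot, YuMiState          # assumed package-level imports
    from autolab_core import RigidTransform          # pose type commonly paired with YuMiPy

    # Connect to the YuMi over Ethernet (the robot runs the RAPID server code shipped with YuMiPy).
    robot = YuMiRobot()

    # Move the left arm to a joint configuration (degrees; values here are hypothetical),
    # then close its gripper.
    home = YuMiState([0.0, -130.0, 30.0, 0.0, 40.0, 0.0, 135.0])
    robot.left.goto_state(home)
    robot.left.close_gripper()

    # Move the right arm to a Cartesian pose given as a rigid transform (frame names are placeholders).
    target = RigidTransform(translation=np.array([0.4, -0.1, 0.15]),
                            from_frame='tool', to_frame='base')
    robot.right.goto_pose(target)

    # Shut the connection down when done.
    robot.stop()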
Data
Demonstration data for all three tasks can be downloaded via the link above. The data include joint angles, end-effector poses, and torques for both arms, along with the corresponding overhead webcam RGB images. Every data point has a corresponding timestamp. For the scooping task specifically, we also provide hand-labeled start and stop frames for each demonstration.
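The exact file layout of the download is not described here, so the snippet below is only a sketch of one way to pair each overhead image with the nearest robot state by timestamp; the file name and column names (e.g. 'timestamp') are hypothetical and should be adapted to the released format.

    # Hypothetical loader: the file name and column names below are assumptions,
    # not the actual release format; adapt them to the downloaded files.
    import bisect
    import csv

    def load_states(path):
        """Read (timestamp, row) pairs from a CSV of robot states (joints, poses, torques)."""
        with open(path) as f:
            rows = [(float(r['timestamp']), r) for r in csv.DictReader(f)]
        return sorted(rows, key=lambda r: r[0])

    def nearest_state(states, t):
        """Return the state row whose timestamp is closest to an image timestamp t."""
        times = [s[0] for s in states]
        i = bisect.bisect_left(times, t)
        best = min((j for j in (i - 1, i) if 0 <= j < len(times)),
                   key=lambda j: abs(times[j] - t))
        return states[best][1]

    # Example: pair each overhead image with the closest robot state by timestamp.
    states = load_states('left_arm_states.csv')       # hypothetical file name
    image_times = [12.033, 12.066, 12.100]            # parsed from image file names or metadata
    pairs = [(t, nearest_state(states, t)) for t in image_times]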
Videos