LEGS: Learning Efficient Grasp Sets for Exploratory Grasping
Letian Fu, Michael Danielczuk, Ashwin Balakrishna, Daniel Brown, Jeffrey Ichnowski, Eugen Solowjow, Ken Goldberg
Abstract— While deep learning has enabled significant progress in designing general-purpose robot grasping systems, there remain objects that still pose challenges for these systems. Recent work on Exploratory Grasping has formalized the problem of systematically exploring grasps on these adversarial objects and introduced a multi-armed bandit model for identifying high-quality grasps in each stable pose of an object. However, these systems are still limited to exploring a small number of grasps on each object. We present Learned Efficient Grasp Sets (LEGS), an algorithm that efficiently explores thousands of possible grasps by maintaining small active sets of promising grasps and determining with high confidence when it can stop exploring the object. Experiments suggest that LEGS can identify a high-quality grasp more efficiently than prior algorithms that do not use active sets. In simulation experiments, we measure the gap between the success probability of the best grasp identified by LEGS and the baselines, and that of the most robust grasp (verified ground truth). After 3000 exploration steps, LEGS outperforms baseline algorithms on 10/14 objects in the Dex-Net Adversarial dataset and 25/39 objects in the EGAD! dataset. We then evaluate LEGS in physical experiments; trials on 3 challenging objects suggest that LEGS converges to high-performing grasps significantly faster than baselines.
LEGS is a new algorithm for exploratory grasping, in which a robot repeatedly attempts grasps on an object, drops the object after each successful grasp, and leverages online experience to update estimates of grasp success probability and decide which grasp to attempt next. The key insight in our work is to combine priors from a general-purpose grasping system with online grasping trials to maintain confidence bounds on grasp success probabilities. LEGS uses these confidence bounds to (1) rapidly filter out grasps during exploration that have very low probability of being near-optimal and (2) decide when to stop exploring.
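To make the active-set mechanism concrete, below is a minimal sketch of confidence-bound filtering over Bernoulli grasp outcomes. It is an illustration under our own assumptions, not the paper's implementation: we suppose each candidate grasp comes with a prior robustness estimate in [0, 1] from a general-purpose grasping network, encode that prior as Beta pseudo-counts (PRIOR_STRENGTH is a hypothetical weight), select grasps optimistically by upper confidence bound, and prune grasps whose upper bound falls below the best lower bound among active grasps.

import numpy as np
from scipy.stats import beta

# Hypothetical prior weight: how many pseudo-trials the network prior is worth.
PRIOR_STRENGTH = 5.0

def confidence_bounds(alpha, beta_param, delta=0.05):
    # Two-sided (1 - delta) credible interval on each Bernoulli success rate.
    lo = beta.ppf(delta / 2.0, alpha, beta_param)
    hi = beta.ppf(1.0 - delta / 2.0, alpha, beta_param)
    return lo, hi

class ActiveGraspSet:
    def __init__(self, prior_quality):
        # Seed Beta posteriors with network-predicted robustness in [0, 1].
        q = np.asarray(prior_quality, dtype=float)
        self.alpha = 1.0 + PRIOR_STRENGTH * q
        self.beta = 1.0 + PRIOR_STRENGTH * (1.0 - q)
        self.active = np.arange(len(q))

    def select(self):
        # Optimism under uncertainty: try the grasp with the highest upper bound.
        _, hi = confidence_bounds(self.alpha[self.active], self.beta[self.active])
        return int(self.active[np.argmax(hi)])

    def update(self, i, success):
        # Record one Bernoulli grasp outcome for grasp i.
        self.alpha[i] += float(success)
        self.beta[i] += 1.0 - float(success)

    def prune(self):
        # Drop grasps whose upper bound is below the best lower bound;
        # such grasps are unlikely to be near-optimal.
        lo, hi = confidence_bounds(self.alpha[self.active], self.beta[self.active])
        self.active = self.active[hi >= lo.max()]

In an exploration loop, one would call select(), execute the chosen grasp, call update() with the observed outcome, and periodically call prune() to shrink the active set.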
We evaluate LEGS on challenging objects from the Dex-Net Adversarial and EGAD! datasets and find that LEGS consistently finds higher-quality grasps than baseline algorithms.
We evaluate early stopping on the Dex-Net Adversarial object set in simulation across a range of stopping thresholds. All results use a 95%-confidence lower bound on expected grasp robustness. Left: We plot the stopping accuracy averaged over all objects and find that our empirical lower bound is highly accurate across all stopping thresholds. Right: We plot the number of steps before stopping, averaged across all objects. Intuitively, the required exploration time increases with higher performance thresholds. Importantly, the average number of steps before stopping remains far below the maximum 3000-step horizon, even for high stopping thresholds.
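As an illustration of how such a stopping rule could be realized (a sketch under our assumptions; the paper's exact bound construction may differ), an exact one-sided Clopper-Pearson lower bound on a grasp's success probability can be computed from its trial counts, and exploration halts once the best grasp's bound clears the chosen threshold:

from scipy.stats import beta

def lower_confidence_bound(successes, trials, confidence=0.95):
    # One-sided Clopper-Pearson lower bound on a Bernoulli success rate.
    if successes == 0:
        return 0.0
    return float(beta.ppf(1.0 - confidence, successes, trials - successes + 1))

def should_stop(successes, trials, threshold, confidence=0.95):
    # Stop exploring once we are confident the grasp's true success
    # probability is at least the stopping threshold.
    return lower_confidence_bound(successes, trials, confidence) >= threshold

For example, 28 successes in 30 trials yields a 95%-confidence lower bound of roughly 0.8, so exploration would stop for any threshold at or below that value.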
How does LEGS perform on physical objects? We present a self-supervised grasping system in which the robot repeatedly attempts to grasp any given unfamiliar object. The system can run for 10+ hours without human intervention.