IOSG: Image-driven Object Searching and Grasping
Publication
IOSG: Image-driven Object Searching and Grasping [arXiv]
Houjian Yu, Xibai Lou, Yang Yang and Changhyun Choi
Our paper has been accepted to IEEE/RSJ IROS 2023.
Abstract
When robots retrieve specific objects from cluttered scenes, such as home and warehouse environments, the target objects are often partially occluded or completely hidden. Robots are thus required to search for, identify, and successfully grasp the target object. Prior works have relied on pre-trained object recognition or segmentation models to find the target object. However, such methods require laborious manual annotations to train the models and may even fail to find novel target objects. In this paper, we propose an Image-driven Object Searching and Grasping (IOSG) approach in which a robot is provided with a reference image of a novel target object and tasked to find and retrieve it. We design a Target Similarity Network that generates a probability map to infer the location of the novel target. IOSG learns a hierarchical policy: the high-level policy predicts the subtask type, whereas the low-level policies, explorer and coordinator, generate effective push and grasp actions. The explorer searches for the target object when it is hidden or occluded by other objects. Once the target is found, the coordinator conducts target-oriented pushing and grasping to retrieve it from the clutter. The proposed pipeline is trained with full self-supervision in simulation and applied to a real environment. Our model achieves 96.0% and 94.5% task success rates on coordination and exploration tasks in simulation, respectively, and an 85.0% success rate on a real robot for the search-and-grasp task.
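To make the Target Similarity Network idea concrete, below is a minimal PyTorch sketch of one way such a similarity scorer could work: a shared encoder embeds the reference image and each object segment crop, and cosine similarity yields per-segment scores that can then be projected onto the segment masks to form a probability map. All names and architectural details here (`SiameseEncoder`, `embed_dim`, the layer sizes) are illustrative assumptions, not the paper's actual network.

```python
# Illustrative sketch only: a shared encoder plus cosine similarity,
# standing in for the paper's Target Similarity Network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Shared CNN mapping an RGB crop to a unit-norm embedding (hypothetical)."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 32, 1, 1)
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)       # (B, 32)
        return F.normalize(self.fc(h), dim=1) # unit-norm embeddings

def similarity_scores(encoder: SiameseEncoder,
                      reference: torch.Tensor,
                      segments: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between the reference image and each object segment.

    reference: (1, 3, H, W) crop of the target; segments: (N, 3, H, W) crops.
    Returns a length-N tensor of scores in [-1, 1]; painting each score onto
    its segment mask would yield a similarity projection map like S.
    """
    with torch.no_grad():
        ref_emb = encoder(reference)          # (1, D)
        seg_emb = encoder(segments)           # (N, D)
    return (seg_emb @ ref_emb.t()).squeeze(1)
```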
The goal of IOSG is to find the target object queried by a reference image and to successfully grasp it. The task is challenging because the target object can be partially or fully occluded, and the scene may contain multiple distractor objects whose shapes or colors resemble the target's, complicating both perception and decision making.
Image-driven Object Searching and Grasping (IOSG) pipeline. The robot is provided with a reference image I_t of the novel target object. The Target Similarity Network takes the object segments and the reference image I_t as input and outputs a similarity projection map S. The high-level policy predicts the subtask type, and the deep Q-network encodes the perception inputs and predicts push and grasp Q maps. Based on the high-level policy's prediction, the low-level policies use the motion Q maps and domain knowledge either to search for the target object or to coordinate pushing and grasping to retrieve the target from the clutter.
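The decision flow described in the caption can be summarized with the short sketch below. For illustration, the high-level subtask selection is reduced to a simple confidence threshold on the similarity map, whereas the paper learns this policy; the function name, the threshold, and the similarity-weighting scheme are all hypothetical stand-ins rather than the paper's actual method.

```python
# Illustrative decision step for the hierarchical policy (not the paper's API).
import numpy as np

def select_action(similarity_map: np.ndarray,
                  push_q: np.ndarray,
                  grasp_q: np.ndarray,
                  found_threshold: float = 0.5):
    """One decision step: pick a subtask, then a motion primitive.

    similarity_map: (H, W) probability map S from the Target Similarity Network.
    push_q, grasp_q: (H, W) Q maps from the deep Q-network.
    Returns the primitive name and its pixel location.
    """
    # High-level policy, sketched as a threshold: if the target is not
    # visible with enough confidence, keep exploring.
    target_found = similarity_map.max() > found_threshold

    if not target_found:
        # Explorer: push where the push Q map is highest to reveal
        # hidden or occluded objects.
        idx = np.unravel_index(np.argmax(push_q), push_q.shape)
        return "push", idx

    # Coordinator: weight both Q maps by target likelihood so actions stay
    # target-oriented, then pick the better of push vs. grasp.
    push_scores = push_q * similarity_map
    grasp_scores = grasp_q * similarity_map
    if grasp_scores.max() >= push_scores.max():
        idx = np.unravel_index(np.argmax(grasp_scores), grasp_scores.shape)
        return "grasp", idx
    idx = np.unravel_index(np.argmax(push_scores), push_scores.shape)
    return "push", idx
```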