Visual comparison of poking region segmentation results using different loss functions. The top two rows and the bottom two rows compare the results on the synthetic dataset and the real-world dataset, respectively. As shown in the figure, our PN- and LPN-based methods generate much better poking regions than the vanilla loss and the weighted loss.
Examples of the poking points generated with the bounding box, the mask region, the poking region with the vanilla Mask R-CNN loss (Original), and the poking region with our PN loss (Ours). The red regions and blue dots represent the segmentation results and the generated poking points, respectively. As shown in the figure, the poking-region-based method with our proposed PN loss leads to better tactile poking.
Examples of successful and failed grasps. (a) A successful grasp enabled by the poking region predicted with our PokePreNet. (b) A failed grasp caused by poor poking region segmentation when using the vanilla cross-entropy loss. The poking region, grasp proposal, and snapshots of the grasping process are shown for each case.