Figure 1. Comparisons of affordance detection. The first two rows show that our method outperforms the original AffordanceNet [1] and avoids the noisy fragments in affordance maps. The third row shows that our hierarchical AffordanceNet may lose some fine-grained information, as it involves high-level affordance classification.
Figure 2. Samples of the baseline comparison. We view the point clouds from different orientations to clearly compare the reconstruction of the "contain" regions. As shown in this figure, our method achieves stable reconstruction of transparent objects, especially the inner surfaces of cups.
Table I. Baseline comparisons on different evaluation regions
Table II. Comparison of success rates on different manipulation tasks. Compared to the ClearGrasp method, our approach, with its ability to reconstruct "contain" regions accurately, improves the success rate of the stacking task from 57.5% to 87.5%, and that of the pouring task from 15% to 80%.