Simultaneous Material Segmentation and 3D Reconstruction in Industrial Scenarios
Recognising material categories is one of the core challenges in robotic nuclear waste decommissioning. Nuclear waste must be sorted and segregated according to its material so that the appropriate disposal process can be applied. In this paper, we propose a novel transfer learning approach that learns boundary-aware material segmentation from a meta-dataset and weakly annotated data. The proposed method is data-efficient: it leverages publicly available datasets for general computer vision tasks together with coarsely labelled material recognition data, and requires only a limited number of fine pixel-wise annotations. Importantly, our approach is integrated with a Simultaneous Localisation and Mapping (SLAM) system, which fuses the per-frame understanding into a 3D global semantic map to facilitate robot manipulation of self-occluded object heaps or robot navigation in disaster zones. We evaluate the proposed method on the Materials in Context (MINC) dataset over 23 categories, and our integrated system delivers quasi-real-time 3D semantic mapping with high-resolution images. The trained model is also verified in an industrial environment as part of the EU RoMaNs project, and promising qualitative results are presented.
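To illustrate the idea of fusing per-frame predictions into a global semantic map, the sketch below shows a common Bayesian label-fusion scheme: each 3D map element accumulates per-class log-probabilities from every frame that observes it. This is a minimal, hypothetical example assuming independent per-frame softmax outputs; the class names, `SemanticVoxelMap`, and all method names are illustrative and not taken from the paper's implementation.

```python
# Illustrative sketch (not the authors' implementation): fuse per-frame
# semantic softmax outputs into a global map by accumulating per-class
# log-probabilities at each observed 3D map element.
import numpy as np

NUM_CLASSES = 23  # number of MINC material categories used in the paper


class SemanticVoxelMap:
    """Maps a voxel key to accumulated per-class log-probabilities."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        self.num_classes = num_classes
        self.log_probs = {}  # voxel key -> np.ndarray of shape (num_classes,)

    def fuse(self, voxel_key, class_probs):
        """Fuse one frame's class probabilities for a voxel.

        Assumes observations are conditionally independent, so fusion is
        an addition in log space.
        """
        p = np.clip(np.asarray(class_probs, dtype=np.float64), 1e-9, 1.0)
        acc = self.log_probs.get(voxel_key, np.zeros(self.num_classes))
        self.log_probs[voxel_key] = acc + np.log(p)

    def label(self, voxel_key):
        """Most likely material class after fusing all observations."""
        return int(np.argmax(self.log_probs[voxel_key]))


# Usage: two noisy frames observing the same voxel agree on class 0.
vmap = SemanticVoxelMap(num_classes=3)
vmap.fuse((10, 4, 7), [0.6, 0.3, 0.1])
vmap.fuse((10, 4, 7), [0.5, 0.4, 0.1])
print(vmap.label((10, 4, 7)))
```

Accumulating in log space keeps the update numerically stable over many frames; a running normalisation step could be added if calibrated posteriors are needed rather than just the argmax label.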
The pipeline of the proposed system
The architecture of the proposed network
The download link for the preprocessed MINC dataset: