Learning from synthetic data is popular in a variety of robotic vision tasks such as object classification and detection, because large amounts of data can be generated without human annotation. However, relying only on synthetic data leads to the well-known simulation-to-reality (Sim-to-Real) gap, which is hard to resolve completely in practice. In such cases, real human-annotated data is necessary to bridge this gap, and in our work we focus on how to acquire this data efficiently. To this end, we propose a Sim-to-Real pipeline that relies on deep Bayesian active learning and aims to minimize the manual annotation effort. We devise a learning paradigm that autonomously selects the data considered most useful for a human expert to annotate. To achieve this, a Bayesian Neural Network (BNN) object detector providing reliable uncertainty estimates is adapted to infer the informativeness of the unlabeled data for active learning. To further mitigate the misalignment of the label distribution that arises from pure uncertainty sampling, we develop an effective randomized sampling strategy that performs favorably against more complex alternatives. In our experiments on classification and object detection tasks, we show the superior performance of our approach and provide evidence that the required labeling effort can be reduced to a small fraction. Furthermore, we demonstrate the practical effectiveness of this approach in a grasping task on an assistive robot.
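The abstract mentions two key ingredients: BNN uncertainty estimates used as an informativeness score, and a randomized sampling strategy that mixes uncertainty-based picks with random ones. The following is a minimal NumPy sketch of this idea, not the paper's exact implementation: uncertainty is scored as the predictive entropy of Monte-Carlo forward passes (e.g. from MC dropout), and the function name `randomized_acquisition`, the `random_frac` parameter, and the 50/50 split are illustrative assumptions.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the BNN predictive mean.

    mc_probs: array of shape (T, N, C) -- softmax outputs of T stochastic
    forward passes (e.g. MC dropout) over N unlabeled samples, C classes.
    """
    mean_probs = mc_probs.mean(axis=0)  # (N, C) predictive mean
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)

def randomized_acquisition(mc_probs, budget, random_frac=0.5, rng=None):
    """Select `budget` samples: part by highest uncertainty, part uniformly
    at random, so the acquired label distribution is less skewed than with
    pure uncertainty sampling. `random_frac` is a hypothetical knob."""
    rng = np.random.default_rng() if rng is None else rng
    n = mc_probs.shape[1]
    entropy = predictive_entropy(mc_probs)

    n_random = int(budget * random_frac)
    n_uncertain = budget - n_random

    # Most uncertain samples first ...
    uncertain_idx = np.argsort(-entropy)[:n_uncertain]
    # ... then fill the rest of the budget with random samples.
    remaining = np.setdiff1d(np.arange(n), uncertain_idx)
    random_idx = rng.choice(remaining, size=n_random, replace=False)
    return np.concatenate([uncertain_idx, random_idx])
```

In a full pipeline, the selected indices would be sent to the human annotator, and the model retrained on the enlarged labeled set before the next acquisition round.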
Real and Simulated Example Images
Image Classification
2D Object Detection
@inproceedings{feng2022bayesian,
title={Bayesian active learning for sim-to-real robotic perception},
author={Feng, Jianxiang and Lee, Jongseok and Durner, Maximilian and Triebel, Rudolph},
booktitle={2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
pages={10820--10827},
year={2022},
organization={IEEE}
}