Self-supervised Transparent Liquid Segmentation for Robotic Pouring

Gautham Narasimhan¹, Kai Zhang², Ben Eisner¹, Xingyu Lin¹, David Held¹


¹Carnegie Mellon University, ²University of Notre Dame

IEEE International Conference on Robotics and Automation (ICRA) 2022


Paper · Supplementary Material · Code

Media coverage: CMU SCS News (Pour me a Glass), Voice of America


Abstract

Liquid state estimation is important for robotics tasks such as pouring; however, estimating the state of transparent liquids is a challenging problem. We propose a novel segmentation pipeline that can segment transparent liquids such as water from a single static RGB image, without requiring any manual annotations or heating of the liquid for training. Instead, we use a generative model, trained on unpaired data, that translates images of colored liquids into synthetically generated images of transparent liquids. Segmentation labels for the colored liquids are obtained automatically using background subtraction, and each label is paired with the corresponding synthetic transparent liquid image to train our segmentation network. Our experiments show that we can accurately predict segmentation masks for transparent liquids without any manual annotations. We demonstrate the utility of transparent liquid segmentation in a robotic pouring task that controls pouring by perceiving the liquid height in a transparent cup.
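
A minimal sketch of the labeling step described above: because the training liquid is colored, differencing against an image of the empty scene yields a mask automatically. The OpenCV snippet below is illustrative only; the file names and thresholds are assumptions, not the paper's exact procedure.

```python
import cv2
import numpy as np

def colored_liquid_mask(background_path, liquid_path, thresh=30):
    """Label colored liquid by differencing against an empty-scene frame."""
    bg = cv2.imread(background_path)    # scene before liquid is poured
    img = cv2.imread(liquid_path)       # same scene with colored liquid
    diff = cv2.absdiff(img, bg)         # per-pixel change
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening removes speckle noise from the raw difference.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Hypothetical usage:
# mask = colored_liquid_mask("empty_scene.png", "colored_liquid.png")
# cv2.imwrite("label.png", mask)
```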

Image Translation
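
The abstract describes a generative model, trained on unpaired data, that converts colored-liquid images into synthetic transparent-liquid images. The tiny encoder-decoder below only illustrates the inference interface of such a translator; it is a placeholder, not the authors' architecture (unpaired translation is commonly built on CycleGAN- or CUT-style generators; see the paper for the actual model).

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder image-to-image translator (colored -> transparent)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

gen = TinyGenerator()                     # in practice: load trained weights
colored = torch.randn(1, 3, 256, 256)     # stand-in for a normalized image
with torch.no_grad():
    synthetic_transparent = gen(colored)  # colored image's mask stays valid
```

Since translation changes only the liquid's appearance, the background-subtraction mask computed on the colored image remains a valid label for the translated image; this is what makes the resulting (image, mask) pairs usable for supervised training.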

Transparent Liquid Segmentation
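
Once each synthetic transparent image is paired with its automatically obtained mask, segmentation becomes ordinary supervised learning. Below is a minimal PyTorch sketch of one training step; the network, loss, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Placeholder per-pixel classifier for liquid vs. background."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # one logit per pixel
        )

    def forward(self, x):
        return self.net(x)

model = TinySegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# One step on a batch of (synthetic transparent image, auto-generated mask).
images = torch.randn(4, 3, 256, 256)                   # translated images
masks = torch.randint(0, 2, (4, 1, 256, 256)).float()  # subtraction labels
optimizer.zero_grad()
loss = loss_fn(model(images), masks)
loss.backward()
optimizer.step()
```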

Segmentation Generalization to Novel (Unseen) Containers and Backgrounds

Robotic Pouring System
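
Per the abstract, pouring is controlled by perceiving the liquid height in the transparent target cup. The loop below shows one way such feedback could be wired together; get_frame, segment, and set_tilt are hypothetical interfaces to the camera, the trained segmentation network, and the robot, and the bang-bang control rule is for illustration only.

```python
import numpy as np

def liquid_height_fraction(mask, cup_top_row, cup_bottom_row):
    """Estimate fill level from the highest liquid row inside the cup."""
    rows = np.where(mask.any(axis=1))[0]
    if rows.size == 0:
        return 0.0
    surface = rows.min()  # image rows grow downward, so min = liquid surface
    return (cup_bottom_row - surface) / (cup_bottom_row - cup_top_row)

def pour_to(target_fraction, get_frame, segment, set_tilt,
            cup_top_row=40, cup_bottom_row=220):
    """Tilt the source cup until the perceived fill level reaches the target."""
    while True:
        mask = segment(get_frame())   # binary transparent-liquid mask
        level = liquid_height_fraction(mask, cup_top_row, cup_bottom_row)
        if level >= target_fraction:
            set_tilt(0.0)             # return the source cup upright
            return level
        set_tilt(0.4)                 # keep pouring at a fixed tilt angle
```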

Team

Acknowledgements

This material is based upon work supported by LG Electronics and the National Science Foundation under Grant No. IIS-2046491.