Learning from Demonstration with
Weakly Supervised Disentanglement
Abstract: Robotic manipulation tasks, such as wiping with a soft sponge, require control from multiple rich sensory modalities. Human-robot interaction aimed at teaching robots is difficult in this setting, as there is potential for mismatch between human and machine comprehension of the rich data streams. We treat the task of interpretable learning from demonstration as an optimisation problem over a probabilistic generative model. To account for the high dimensionality of the data, a high-capacity neural network is chosen to represent the model. The latent variables in this model are explicitly aligned with high-level notions and concepts that are manifested in a set of demonstrations. We show that such alignment is best achieved through the use of labels from the end user, drawn from an appropriately restricted vocabulary, in contrast to the conventional approach of the designer picking a prior over the latent variables. Our approach is evaluated in the context of a table-top robot manipulation task performed by a PR2 robot -- that of dabbing liquids with a sponge (forcefully pressing a sponge and moving it along a surface). The robot's sensory streams consist of visual information, arm joint positions and arm joint efforts.
Multimodal Demonstration Dataset: Link
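To give a concrete sense of how end-user labels from a restricted vocabulary can be tied to individual latent variables, below is a minimal sketch of a weakly supervised VAE-style objective. This is an illustrative assumption rather than the paper's exact architecture: the WeaklySupervisedVAE class, the layer sizes, the loss weights and the toy data are all hypothetical, and the multimodal demonstration frames are stood in for by random vectors.

# Minimal sketch (assumed, not the authors' exact model) of weakly supervised
# disentanglement: a VAE whose first few latent dimensions are tied to
# user-provided labels (e.g. "press hard" vs "press slowly") through an
# auxiliary classification loss on those dimensions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeaklySupervisedVAE(nn.Module):
    def __init__(self, input_dim, latent_dim=8, n_labelled=2, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.fc_mu = nn.Linear(128, latent_dim)
        self.fc_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))
        # One small classifier per labelled latent dimension, aligning that
        # dimension with one concept from the restricted label vocabulary.
        self.label_heads = nn.ModuleList(
            [nn.Linear(1, n_classes) for _ in range(n_labelled)])

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar, z

def loss_fn(model, x, labels, beta=1.0, gamma=10.0):
    recon, mu, logvar, z = model(x)
    recon_loss = F.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Weak supervision: each labelled latent dim must predict its own label.
    sup = sum(F.cross_entropy(head(z[:, i:i + 1]), labels[:, i])
              for i, head in enumerate(model.label_heads))
    return recon_loss + beta * kl + gamma * sup

if __name__ == "__main__":
    model = WeaklySupervisedVAE(input_dim=64)
    x = torch.randn(32, 64)                # flattened toy sensory frame
    labels = torch.randint(0, 3, (32, 2))  # two labelled concepts per frame
    loss = loss_fn(model, x, labels)
    loss.backward()
    print(f"loss: {loss.item():.3f}")

In this sketch, the remaining (unlabelled) latent dimensions are regularised only by the KL term, so they are free to capture whatever residual variation the demonstrations contain, while the labelled dimensions are pushed towards the user's high-level concepts.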
Motivation & Experimental Setup

Model Architecture, Training & Testing

Results & Conclusions

Physical setup for teleoperating a PR2 end-effector through an HTC Vive controller.
Example Pouring Demonstrations from the side and from the robot's PoV
pour in red cup
pour in blue cup
pour from behind
pour sideways
pour partially
pour fully
Example Dabbing Demonstrations from the robot's PoV
press behind
press in front
press hard
press slowly
press quickly