Improving Behavioural Cloning with Positive Unlabeled Learning

Anonymous authors for double-blind review

Abstract

Learning control policies offline from pre-recorded datasets is a promising avenue for solving challenging real-world problems. However, available datasets are typically of mixed quality, with only a limited number of the trajectories we would consider positive examples, i.e., high-quality demonstrations. We therefore propose a novel iterative learning algorithm for identifying expert trajectories in unlabeled mixed-quality robotics datasets given a minimal set of positive examples, surpassing existing algorithms in accuracy. We show that applying behavioral cloning to the resulting filtered dataset outperforms several competitive offline reinforcement learning and imitation learning baselines. We evaluate our method on a range of simulated locomotion tasks and on two challenging manipulation tasks on a real robotic system, where it achieves state-of-the-art performance.

PU learning results

Our method accurately separates expert data across a wide range of complex mixed-quality datasets, consistently outperforming traditional PU baselines such as unbiased PU (uPU) and non-negative PU (nnPU).
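For concreteness, the sketch below illustrates one plausible form of such an iterative PU filter: a classifier is trained on the labeled positives and the unlabeled pool with the non-negative PU (nnPU) risk estimator, and confidently positive unlabeled samples are then promoted into the positive set for the next round. The network architecture, training budget, and promotion threshold are illustrative assumptions, not the paper's exact procedure.

```python
import torch
from torch import nn

def nnpu_loss(logits_pos, logits_unl, prior):
    """Non-negative PU risk (Kiryo et al., 2017) with a sigmoid surrogate loss."""
    risk_pos = torch.sigmoid(-logits_pos).mean()         # positives scored as +1
    risk_pos_as_neg = torch.sigmoid(logits_pos).mean()   # positives scored as -1
    risk_unl_as_neg = torch.sigmoid(logits_unl).mean()   # unlabeled scored as -1
    # Clamp the estimated negative-class risk at zero -- the "non-negative"
    # correction that distinguishes nnPU from unbiased PU.
    neg_risk = risk_unl_as_neg - prior * risk_pos_as_neg
    return prior * risk_pos + torch.clamp(neg_risk, min=0.0)

def iterative_pu_filter(x_pos, x_unl, prior, rounds=5, threshold=0.95):
    """Grow the positive set by retraining a PU classifier each round and
    promoting high-confidence unlabeled samples (illustrative sketch).

    Note: in practice the class prior over the shrinking unlabeled pool
    would need re-estimation; it is kept fixed here for simplicity.
    """
    for _ in range(rounds):
        clf = nn.Sequential(nn.Linear(x_pos.shape[1], 256), nn.ReLU(),
                            nn.Linear(256, 1))
        opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
        for _ in range(200):  # illustrative number of gradient steps
            loss = nnpu_loss(clf(x_pos).squeeze(-1),
                             clf(x_unl).squeeze(-1), prior)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            scores = torch.sigmoid(clf(x_unl).squeeze(-1))
        promote = scores >= threshold
        if not promote.any():
            break
        x_pos = torch.cat([x_pos, x_unl[promote]])
        x_unl = x_unl[~promote]
    return x_pos, x_unl
```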

PUBC performance

We use the filtered expert subset of the mixed dataset to train a behavioral cloning (BC) agent. The resulting policy learning method, Positive Unlabeled Behavioral Cloning (PUBC), outperforms all baseline algorithms, including advanced offline reinforcement learning (RL) techniques.
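The BC stage itself is standard supervised regression on the filtered state-action pairs. Below is a minimal sketch, assuming a deterministic policy trained with a mean-squared-error loss; the architecture and hyperparameters are illustrative assumptions rather than the paper's reported configuration.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train_bc(states, actions, epochs=50, lr=3e-4):
    """Behavioral cloning on the PU-filtered expert subset:
    regress actions from states with an MSE loss."""
    policy = nn.Sequential(
        nn.Linear(states.shape[1], 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, actions.shape[1]),
    )
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loader = DataLoader(TensorDataset(states, actions),
                        batch_size=256, shuffle=True)
    for _ in range(epochs):
        for s, a in loader:
            loss = nn.functional.mse_loss(policy(s), a)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```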

PUBC's advantage is most pronounced on the challenging real-robot manipulation tasks, which none of the other baseline methods can solve effectively. Below is a video demo: