Núria Armengol Urpí¹², Marco Bagatella¹², Marin Vlastelica¹, Georg Martius¹³
Published in ICML 2024.
TLDR: Data augmentation method that creates counterfactual samples to increase robustness of offline learning methods against distributional shift.
Offline data are both valuable and practical resources for teaching robots complex behaviors. Ideally, learning agents should not be constrained by the scarcity of available demonstrations, but rather generalize beyond the training distribution. However, the complexity of real-world scenarios typically requires huge amounts of data to prevent neural network policies from picking up on spurious correlations and learning non-causal relationships. We propose CAIAC, a data augmentation method that can create feasible synthetic transitions from a fixed dataset without having access to online environment interactions. By utilizing principled methods for quantifying causal influence, we are able to perform counterfactual reasoning by swapping action-unaffected parts of the state-space between independent trajectories in the dataset. We empirically show that this leads to a substantial increase in robustness of offline learning algorithms against distributional shift.
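The core idea can be illustrated with a minimal sketch. Note that the entity names, the influence scores, and the thresholding step below are all hypothetical placeholders: in the paper, causal action influence is quantified with a principled measure derived from a learned dynamics model, not hand-assigned scores. The sketch only shows the counterfactual swap itself: entities judged unaffected by the action are copied from an independent transition to form a feasible synthetic sample.

```python
def action_influenced(scores, threshold=0.1):
    """Split entities into action-influenced vs. action-unaffected.

    `scores` maps entity names to causal-influence values. The fixed
    threshold is a stand-in for the paper's principled influence measure.
    """
    return {entity: s >= threshold for entity, s in scores.items()}


def counterfactual_swap(trans_a, trans_b, influenced):
    """Build a synthetic transition from trans_a by replacing the states
    of action-unaffected entities with those from an independent
    transition trans_b."""
    return {
        entity: state if influenced[entity] else trans_b[entity]
        for entity, state in trans_a.items()
    }


# Toy example with hypothetical entities and scores.
trans_a = {"arm": 1.0, "kettle": 2.0, "microwave": 3.0}
trans_b = {"arm": 9.0, "kettle": 8.0, "microwave": 7.0}
scores = {"arm": 0.9, "kettle": 0.5, "microwave": 0.01}

influenced = action_influenced(scores)
synthetic = counterfactual_swap(trans_a, trans_b, influenced)
print(synthetic)  # → {'arm': 1.0, 'kettle': 2.0, 'microwave': 7.0}
```

Because only action-unaffected parts of the state are swapped, the augmented transition remains consistent with the environment dynamics while covering state combinations absent from the original dataset.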
[Figure: performance under distributional shift on the microwave, kettle, and bottom burner tasks]
If you use our work or some of our ideas, please consider citing us :)