CAIAC: Causal Action Influence Aware Counterfactual Data Augmentation

Núria Armengol Urpí¹², Marco Bagatella¹², Marin Vlastelica¹, Georg Martius³¹

¹Max Planck Institute for Intelligent Systems, Tübingen, Germany

²Department of Computer Science, ETH Zurich, Switzerland

³Department of Computer Science, University of Tübingen, Tübingen, Germany


TLDR: A data augmentation method that creates counterfactual samples to increase the robustness of offline learning methods against distributional shift.


Offline data are both valuable and practical resources for teaching robots complex behaviors. Ideally, learning agents should not be constrained by the scarcity of available demonstrations, but rather generalize beyond the training distribution. However, the complexity of real-world scenarios typically requires huge amounts of data to prevent neural network policies from picking up on spurious correlations and learning non-causal relationships. We propose CAIAC, a data augmentation method that can create feasible synthetic transitions from a fixed dataset without requiring access to online environment interactions. By utilizing principled methods for quantifying causal influence, we are able to perform counterfactual reasoning by swapping action-unaffected parts of the state space between independent trajectories in the dataset. We empirically show that this leads to a substantial increase in the robustness of offline learning algorithms against distributional shift.
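To make the augmentation step concrete, the sketch below illustrates the idea in Python: score how strongly the action influences each state entity, and swap the states of entities with low influence between two independent transitions. The factored-state dictionaries, the `model(state, action, entity)` interface, the moment-matched Gaussian approximation, and the threshold value are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
"""Minimal sketch of CAIAC-style counterfactual augmentation (illustrative
assumptions throughout; not the authors' reference implementation)."""
import numpy as np


def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) for diagonal Gaussians."""
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )


def causal_action_influence(model, state, entity, actions):
    """Score how strongly the action influences `entity`'s next state.

    Approximates the conditional mutual information I(s'_entity; a | s) by the
    average KL between each per-action prediction and a moment-matched Gaussian
    fit to the mixture over sampled actions. `model(state, action, entity)` is
    assumed to return (mean, var) of the predicted next state of that entity.
    """
    mus, variances = zip(*(model(state, a, entity) for a in actions))
    mus, variances = np.stack(mus), np.stack(variances)
    mix_mu = mus.mean(axis=0)
    mix_var = (variances + mus ** 2).mean(axis=0) - mix_mu ** 2 + 1e-8
    return float(np.mean(
        [gaussian_kl(mu, var, mix_mu, mix_var) for mu, var in zip(mus, variances)]
    ))


def counterfactual_swap(t1, t2, model, actions, threshold=0.1):
    """Create a counterfactual transition by copying action-unaffected
    entities from `t2` into `t1`.

    Transitions are dicts with keys 'state', 'action', 'next_state', where
    each state maps entity names to np.ndarray features.
    """
    new_state = dict(t1["state"])
    new_next = dict(t1["next_state"])
    for entity in t1["state"]:
        infl_1 = causal_action_influence(model, t1["state"], entity, actions)
        infl_2 = causal_action_influence(model, t2["state"], entity, actions)
        # Swap only entities the agent does not causally influence in either
        # transition, so the synthetic sample remains dynamically feasible.
        if infl_1 < threshold and infl_2 < threshold:
            new_state[entity] = t2["state"][entity]
            new_next[entity] = t2["next_state"][entity]
    return {"state": new_state, "action": t1["action"], "next_state": new_next}
```

Applied repeatedly to pairs of transitions sampled from the fixed dataset, this procedure yields synthetic training data that decorrelates entities the agent does not act upon, which is the mechanism behind the robustness gains reported below.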

Experimental results in Franka-Kitchen

Microwave task performance under distributional shift
CAIAC: mw_caiac_2.mp4
CoDA: mw_coda_2.mp4

Kettle task performance under distributional shift
CAIAC: kettle_caiac_2.mp4
CoDA: kettle_coda_2.mp4

Bottom burner task performance under distributional shift
CAIAC: bb_caiac_2.mp4
CoDA: bb_coda_2.mp4

Experimental results in Fetch-Pick&Lift

Nominal task: fpp_easy_2.mp4
CAIAC: fpp_caiac_2.mp4
RSC: fpp_rsc_2.mp4