Learn2Augment: Learning to Composite Videos for Data Augmentation in Action Recognition

Abstract

We address the problem of data augmentation for video action recognition. Standard augmentation strategies for video are hand-designed and sample the space of possible augmented data points either at random, without knowing which augmented points will be better, or through heuristics. We propose to learn what makes a "good" video for action recognition and select only high-quality samples for augmentation. In particular, we choose video compositing of a foreground and a background video as the data augmentation process, which results in diverse and realistic new samples. We learn which pairs of videos to augment without having to actually composite them. This reduces the space of possible augmentations, which has two advantages: it saves computational cost and increases the accuracy of the final trained classifier, as the augmented pairs are of higher quality than average. We present experimental results across the entire spectrum of training settings: few-shot, semi-supervised, and fully supervised. We observe consistent improvements across all of them over prior work and baselines on Kinetics, UCF101, and HMDB51, and achieve a new state of the art in settings with limited data. We see improvements of up to 8.6% in the semi-supervised setting.
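The compositing step described above can be sketched as pasting the foreground actor of one clip onto the frames of another clip, assuming a per-frame foreground mask is available (e.g. from a segmentation model). This is a minimal illustrative sketch, not the paper's actual implementation; all names are hypothetical.

```python
import numpy as np

def composite_videos(foreground, masks, background):
    """Blend foreground pixels onto background frames using a soft mask.

    foreground, background: (T, H, W, 3) float arrays in [0, 1]
    masks: (T, H, W, 1) float array in [0, 1], 1 where the actor is
    Returns the composited clip of shape (T, H, W, 3).
    """
    return masks * foreground + (1.0 - masks) * background

# Toy example: 4-frame clips of 8x8 RGB frames.
T, H, W = 4, 8, 8
fg = np.ones((T, H, W, 3))           # all-white "actor" video
bg = np.zeros((T, H, W, 3))          # all-black background video
mask = np.zeros((T, H, W, 1))
mask[:, 2:6, 2:6, :] = 1.0           # actor occupies a central square

new_clip = composite_videos(fg, mask, bg)
```

The new clip keeps the background everywhere except inside the masked region, where the foreground actor appears; the label of the foreground video is typically used for the augmented sample.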

Authors: Shreyank N Gowda, Marcus Rohrbach, Frank Keller, Laura Sevilla-Lara

The paper has been accepted at ECCV 2022! The full paper can be found here.

If you find our work useful please cite:

@article{gowda2022learn2augment,
  title={Learn2Augment: Learning to Composite Videos for Data Augmentation in Action Recognition},
  author={Gowda, Shreyank N and Rohrbach, Marcus and Keller, Frank and Sevilla-Lara, Laura},
  journal={arXiv preprint arXiv:2206.04790},
  year={2022}
}



Questions?

Contact s1960707@ed.ac.uk for more information about the project.