EvaNet

Evolving Space-Time Neural Architectures for Videos

AJ Piergiovanni, Anelia Angelova, Alexander Toshev, Michael S. Ryoo

We present a new method for finding video CNN architectures that capture rich spatio-temporal information in videos. Previous work, taking advantage of 3D convolutions, obtained promising results by manually designing video CNN architectures. Here we develop a novel evolutionary search algorithm that automatically explores models with different types and combinations of layers to jointly learn interactions between spatial and temporal aspects of video representations. We demonstrate the generality of this algorithm by applying it to two meta-architectures.
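To make the search concrete, here is a minimal sketch of a generic tournament-style evolution loop over a toy space of layer choices. It is not the paper's exact algorithm, and every name in it (LAYER_TYPES, TEMPORAL_LENGTHS, evolve, fitness) is an illustrative assumption; in practice the fitness function would briefly train each candidate and return validation accuracy.

```python
import random

# Hypothetical search space: an architecture is a list of layer choices,
# mixing 2D, 3D, (2+1)D, and iTGM-style layers with varying temporal lengths.
LAYER_TYPES = ["conv2d", "conv3d", "conv(2+1)d", "itgm"]
TEMPORAL_LENGTHS = [1, 3, 5, 9, 11]

def random_architecture(depth=6):
    return [(random.choice(LAYER_TYPES), random.choice(TEMPORAL_LENGTHS))
            for _ in range(depth)]

def mutate(arch):
    """Change the layer type or the temporal length at one random position."""
    arch = list(arch)
    i = random.randrange(len(arch))
    if random.random() < 0.5:
        arch[i] = (random.choice(LAYER_TYPES), arch[i][1])
    else:
        arch[i] = (arch[i][0], random.choice(TEMPORAL_LENGTHS))
    return arch

def evolve(fitness, population_size=20, rounds=100, tournament=5):
    """Tournament evolution: mutate the best of a random sample and
    replace the worst member of that sample with the mutated child."""
    population = [random_architecture() for _ in range(population_size)]
    scores = [fitness(a) for a in population]
    for _ in range(rounds):
        sample = random.sample(range(population_size), tournament)
        parent = max(sample, key=lambda i: scores[i])
        child = mutate(population[parent])
        worst = min(sample, key=lambda i: scores[i])
        population[worst], scores[worst] = child, fitness(child)
    return max(population, key=fitness)

# Stand-in fitness that simply rewards longer temporal kernels.
print(evolve(lambda arch: sum(length for _, length in arch)))
```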

Further, we propose a new component, the iTGM layer, which uses its parameters more efficiently to learn space-time interactions over longer time horizons. The iTGM layer is often preferred by the evolutionary algorithm and allows building cost-efficient networks.
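As a rough sketch of the Temporal Gaussian Mixture idea that iTGM builds on: instead of learning one free weight per temporal tap, the temporal kernel is composed from a small bank of Gaussians with learnable centers and widths, so the parameter count stays fixed as the temporal footprint grows. The class name and hyperparameters below (TGMTemporalConv, num_gaussians, kernel_length) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TGMTemporalConv(nn.Module):
    """Sketch of a Temporal Gaussian Mixture (TGM) temporal convolution.

    The temporal kernel is built from a few Gaussians with learnable
    centers and widths, so parameters do not grow with kernel length.
    """
    def __init__(self, channels, kernel_length=9, num_gaussians=4):
        super().__init__()
        self.kernel_length = kernel_length
        # Learnable Gaussian centers (mapped into [0, 1]) and widths.
        self.centers = nn.Parameter(torch.rand(num_gaussians))
        self.widths = nn.Parameter(torch.ones(num_gaussians))
        # Per-channel soft mixing weights over the Gaussian bank.
        self.mix = nn.Parameter(torch.zeros(channels, num_gaussians))

    def forward(self, x):  # x: (N, C, T, H, W)
        n, c, t, h, w = x.shape
        # Build the Gaussian bank: (num_gaussians, kernel_length).
        ts = torch.linspace(0, 1, self.kernel_length, device=x.device)
        centers = torch.sigmoid(self.centers).unsqueeze(1)
        widths = F.softplus(self.widths).unsqueeze(1) + 1e-4
        bank = torch.exp(-0.5 * ((ts - centers) / widths) ** 2)
        bank = bank / bank.sum(dim=1, keepdim=True)
        # Mix Gaussians into one temporal kernel per channel: (C, L).
        kernels = torch.softmax(self.mix, dim=1) @ bank
        # Apply as a depthwise temporal convolution.
        weight = kernels.view(c, 1, self.kernel_length, 1, 1)
        pad = self.kernel_length // 2
        return F.conv3d(x, weight, padding=(pad, 0, 0), groups=c)

# Usage: y = TGMTemporalConv(64)(torch.randn(2, 64, 16, 32, 32))
```

In an inflated (iTGM) block, a temporal layer of this kind would be paired with a 2D spatial convolution, analogous to a (2+1)D decomposition.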

The proposed approach discovers novel and diverse video architectures. More importantly, they are both more accurate and faster than prior models, and they outperform state-of-the-art results on four datasets: Kinetics, Charades, Moments in Time, and HMDB.

@inproceedings{evanet,
  title={Evolving Space-Time Neural Architectures for Videos},
  author={Piergiovanni, AJ and Angelova, Anelia and Toshev, Alexander and Ryoo, Michael S.},
  booktitle={ICCV},
  year={2019}
}