AssembleNet is a “family” of learnable architectures that provides a generic approach to learning the “connectivity” among feature representations across input modalities, while being optimized for the target task. We introduce a general formulation that allows various forms of multi-stream CNNs to be represented as directed graphs, coupled with an efficient evolutionary algorithm to explore the high-level network connectivity. The objective is to learn better feature representations across appearance and motion visual cues in videos. Unlike previous hand-designed two-stream models that use late fusion or fixed intermediate fusion, AssembleNet evolves a population of overly-connected, multi-stream, multi-resolution architectures while guiding their mutations by connection weight learning. This is the first work to consider four-stream architectures with various intermediate connections: two streams each for RGB and optical flow, with each stream operating at a different temporal resolution.
@inproceedings{assemblenet,
title={AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures},
author={Ryoo, Michael S. and
Piergiovanni, AJ and
Tan, Mingxing and
Angelova, Anelia},
booktitle={ICLR},
year={2020}
}
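To make the connectivity learning described above concrete, below is a minimal NumPy sketch of the sigmoid-gated fusion of candidate input streams that the paper describes. The names (`fuse_inputs`, `connection_logits`) are illustrative rather than taken from the released code, and the actual models apply this gating to convolutional block outputs inside TensorFlow, with resizing handled by learned layers rather than assumed equal shapes.

```python
import numpy as np

def fuse_inputs(candidate_inputs, connection_logits):
    """Combine several candidate input streams into one block input.

    Each incoming edge of the connectivity graph gets a learnable scalar
    logit; a sigmoid turns it into a gate in (0, 1), and the block consumes
    the gate-weighted sum of its candidate inputs. Edges whose gates stay
    near zero are the ones the evolutionary search tends to drop.
    """
    gates = 1.0 / (1.0 + np.exp(-np.asarray(connection_logits, dtype=np.float64)))
    fused = sum(g * x for g, x in zip(gates, candidate_inputs))
    return fused, gates

# Toy usage: three candidate streams, assumed already resized to a common shape.
streams = [np.random.rand(8, 8, 32) for _ in range(3)]
fused, gates = fuse_inputs(streams, connection_logits=[2.0, -3.0, 0.5])
print(gates)  # approx. [0.88, 0.05, 0.62]; the middle connection is nearly pruned
```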
We create a family of powerful video models that are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network. A new network component named peer-attention is introduced, which dynamically learns the attention weights using another block or input modality. Even without pre-training, our models outperform previous work on standard public activity recognition datasets with continuous videos, establishing a new state-of-the-art. We also confirm that both findings, adding neural connections from the object modality and using peer-attention, are generally applicable to different existing architectures and improve their performance. We name our model AssembleNet++.
@inproceedings{assemblenetplusplus,
title={AssembleNet++: Assembling Modality Representations via Attention Connections},
author={Ryoo, Michael S. and
Piergiovanni, AJ and
Kangaspunta, Juhana and
Angelova, Anelia},
booktitle={ECCV},
year={2020}
}
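As a rough illustration of the peer-attention idea, the sketch below computes channel-wise attention weights for one block's input from the globally pooled features of a different (“peer”) block, in the spirit of squeeze-and-excitation but with the excitation signal coming from another stream or modality. This is a simplified NumPy version under assumed shapes; names such as `peer_attention` are hypothetical, and the projection in the released models is a learned layer rather than a fixed matrix.

```python
import numpy as np

def peer_attention(block_input, peer_features, proj_w, proj_b):
    """Modulate `block_input` with attention computed from a peer block.

    block_input:    array of shape (..., C_in)   - features entering this block
    peer_features:  array of shape (..., C_peer) - output of another block/modality
    proj_w, proj_b: learnable projection mapping C_peer -> C_in
    """
    # Global average pool the peer features over all non-channel axes.
    context = peer_features.mean(axis=tuple(range(peer_features.ndim - 1)))  # (C_peer,)
    # Project to this block's channel count and squash to (0, 1).
    attn = 1.0 / (1.0 + np.exp(-(context @ proj_w + proj_b)))                # (C_in,)
    # Channel-wise scaling of the block input.
    return block_input * attn

# Toy usage: attention on an RGB-stream block driven by an object/segmentation stream.
rgb_block = np.random.rand(8, 8, 64)     # (H, W, C_in)
object_block = np.random.rand(8, 8, 32)  # (H, W, C_peer)
w = np.random.randn(32, 64) * 0.1
b = np.zeros(64)
out = peer_attention(rgb_block, object_block, w, b)
print(out.shape)  # (8, 8, 64)
```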
The code for both AssembleNet and AssembleNet++ will be posted here soon, during the week of ECCV 2020.
https://github.com/google-research/google-research/tree/master/assemblenet