ALBA: Reinforcement Learning for VOS

Abstract

We consider the challenging problem of zero-shot video object segmentation (VOS): segmenting and tracking multiple moving objects in a video fully automatically, without any manual initialization. We treat this as a grouping problem, exploiting object proposals and making a joint inference about grouping over both space and time. We propose a network architecture for tractably performing proposal selection and joint grouping. Crucially, we then show how to train this network with reinforcement learning so that it learns to perform the optimal non-myopic sequence of grouping decisions to segment the whole video. Unlike standard supervised techniques, this also enables us to directly optimize the non-differentiable overlap-based metrics used to evaluate VOS. We show that the proposed method, which we call ALBA, outperforms the previous state-of-the-art on three benchmarks: DAVIS 2017, FBMS and YouTube-VOS.
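The key idea of training with reinforcement learning is that a non-differentiable reward such as intersection-over-union (IoU) can be optimized directly via the policy gradient. The toy sketch below is not the paper's model — the proposals, grid size, and Bernoulli selection policy are illustrative assumptions — it only demonstrates the REINFORCE mechanism of rewarding proposal-selection actions by the IoU they achieve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data (not from the paper): 4 candidate proposal masks
# on an 8x8 grid. The ground-truth object is the union of proposals 0 and 2.
H = W = 8
proposals = np.zeros((4, H, W), dtype=bool)
proposals[0, :4, :4] = True
proposals[1, :4, 4:] = True
proposals[2, 4:, :4] = True
proposals[3, 4:, 4:] = True
gt = proposals[0] | proposals[2]

def iou(pred, gt):
    """Intersection-over-union: the non-differentiable VOS metric."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Policy: independent Bernoulli "select this proposal" probability,
# parameterized by logits theta.
theta = np.zeros(4)
lr, baseline = 0.5, 0.0

for step in range(500):
    p = sigmoid(theta)
    a = rng.random(4) < p                       # sample selection actions
    pred = np.any(proposals[a], axis=0) if a.any() else np.zeros((H, W), bool)
    r = iou(pred, gt)                           # reward: non-differentiable IoU
    baseline = 0.9 * baseline + 0.1 * r         # moving-average variance reduction
    # REINFORCE update: grad log Bernoulli(a | p) = a - p
    theta += lr * (r - baseline) * (a.astype(float) - p)

final = sigmoid(theta)
```

After training, the policy should assign high selection probability to the two proposals that cover the ground truth and low probability to the others, despite the reward never being differentiated.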

Code: https://github.com/kini5gowda/ALBA-RL-for-VOS

Authors: Shreyank N Gowda *, Panagiotis Eustratiadis *, Timothy Hospedales, Laura Sevilla-Lara (* denotes equal contribution)

The full paper can be found here.

If you find our work useful please cite:

@article{gowda2020alba,
  title={ALBA: Reinforcement Learning for Video Object Segmentation},
  author={Gowda, Shreyank N and Eustratiadis, Panagiotis and Hospedales, Timothy and Sevilla-Lara, Laura},
  journal={arXiv preprint arXiv:2005.13039},
  year={2020}
}


Questions?

Contact s1960707@ed.ac.uk for more information about the project.