SegFlow: Joint Learning for Video Object Segmentation and Optical Flow

Jingchun Cheng Yi-Hsuan Tsai Shengjin Wang Ming-Hsuan Yang

Tsinghua University University of California, Merced

Abstract

This paper proposes SegFlow, an end-to-end trainable network for simultaneously predicting pixel-wise object segmentation and optical flow in videos. SegFlow has two branches between which useful information for object segmentation and optical flow is propagated bidirectionally in a unified framework. The segmentation branch is based on a fully convolutional network, which has proven effective for the image segmentation task, while the optical flow branch builds on the FlowNet model. The unified framework is trained iteratively offline to learn a generic notion of the two tasks, then fine-tuned online for specific objects. Extensive experiments on both video object segmentation and optical flow datasets demonstrate that, compared with state-of-the-art algorithms, introducing optical flow improves segmentation performance and vice versa.
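The bidirectional propagation described above can be sketched as follows: each branch computes its own features, then consumes the other branch's features before producing its output. This is a minimal NumPy sketch of that idea only, not the actual SegFlow implementation; all function names, channel sizes, and the `conv_stub` stand-in are hypothetical.

```python
import numpy as np

def conv_stub(x, out_ch, seed=0):
    # Hypothetical stand-in for a convolutional stage: a fixed random
    # linear projection over the channel axis followed by ReLU.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((x.shape[-1], out_ch)) * 0.01
    return np.maximum(x @ w, 0.0)

def segflow_forward(frame_feat):
    # Each branch first extracts its own features from the shared input.
    seg_feat = conv_stub(frame_feat, 64, seed=1)
    flow_feat = conv_stub(frame_feat, 64, seed=2)
    # Bidirectional propagation: each branch's head also sees the
    # other branch's features (here via channel concatenation).
    seg_in = np.concatenate([seg_feat, flow_feat], axis=-1)
    flow_in = np.concatenate([flow_feat, seg_feat], axis=-1)
    seg_mask = conv_stub(seg_in, 1, seed=3)   # per-pixel mask logits
    flow = conv_stub(flow_in, 2, seed=4)      # per-pixel (u, v) flow
    return seg_mask, flow

feat = np.ones((8, 8, 16))  # toy H x W x C feature map
mask, flow = segflow_forward(feat)
print(mask.shape, flow.shape)  # (8, 8, 1) (8, 8, 2)
```

In the paper, this cross-branch exchange happens at multiple feature scales inside a single network, which is what allows the two tasks to be trained jointly and to benefit each other.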

Downloads

"SegFlow: Joint Learning for Video Object Segmentation and Optical Flow", Jingchun Cheng, Yi-Hsuan Tsai, Shengjin Wang and Ming-Hsuan Yang, IEEE International Conference on Computer Vision (ICCV), 2017

[preprint PDF] [Supplementary] [GitHub]


Download our segmentation results on DAVIS 2016


BibTeX

@inproceedings{Cheng_ICCV_2017,
  author    = {J. Cheng and Y.-H. Tsai and S. Wang and M.-H. Yang},
  booktitle = {IEEE International Conference on Computer Vision (ICCV)},
  title     = {SegFlow: Joint Learning for Video Object Segmentation and Optical Flow},
  year      = {2017}
}