Learning Linear Transformations
for Fast Image and Video Style Transfer
Xueting Li*1 Sifei Liu*2 Jan Kautz2 Ming-Hsuan Yang1,3
1University of California, Merced 2NVIDIA 3Google Cloud
Abstract
Given a random pair of images, a universal style transfer method extracts the feel from a reference style image to synthesize an output based on the look of a content image. Recent algorithms based on second-order statistics, however, are either computationally expensive or prone to generating artifacts due to the trade-off between image quality and runtime performance. In this work, we present an approach for universal style transfer that learns the transformation matrix in a data-driven fashion. Our algorithm is efficient yet flexible enough to transfer different levels of styles with the same auto-encoder network. It also produces stable video style transfer results by preserving the content affinity. In addition, we propose a linear propagation module that enables a feed-forward network for photo-realistic style transfer. We demonstrate the effectiveness of our approach on three tasks: artistic, photo-realistic, and video style transfer, with comparisons to state-of-the-art methods.
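To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of the kind of module described in the abstract: a small feed-forward network predicts a linear transformation matrix from the content and style features of a fixed auto-encoder, and applies it to the content features before decoding. The class name, layer sizes, compression factor, and use of Gram statistics are illustrative assumptions, not the exact architecture from the paper.

```python
import torch
import torch.nn as nn


class LinearTransformModule(nn.Module):
    """Hypothetical sketch: predict a linear transform T from content/style
    features and apply it to the (compressed) content features."""

    def __init__(self, channels=256, compressed=32):
        super().__init__()
        # 1x1 convolutions that compress features before predicting T
        self.compress_content = nn.Conv2d(channels, compressed, 1)
        self.compress_style = nn.Conv2d(channels, compressed, 1)
        # predicts a (compressed x compressed) matrix from pooled second-order stats
        self.fc = nn.Linear(2 * compressed * compressed, compressed * compressed)
        self.uncompress = nn.Conv2d(compressed, channels, 1)

    @staticmethod
    def gram(feat):
        # channel-wise second-order statistics, shape (b, c, c)
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)
        return torch.bmm(f, f.transpose(1, 2)) / (h * w)

    def forward(self, content_feat, style_feat):
        b, _, h, w = content_feat.shape
        cc = self.compress_content(content_feat)   # (b, k, h, w)
        cs = self.compress_style(style_feat)
        k = cc.shape[1]
        # both content and style statistics drive the predicted matrix
        stats = torch.cat([self.gram(cc).flatten(1),
                           self.gram(cs).flatten(1)], dim=1)
        T = self.fc(stats).view(b, k, k)            # learned linear transform
        # apply T to the mean-subtracted content features
        mean = cc.mean(dim=(2, 3), keepdim=True)
        f = (cc - mean).view(b, k, h * w)
        out = torch.bmm(T, f).view(b, k, h, w)
        # re-inject the style mean and map back to the decoder's channel count
        out = out + cs.mean(dim=(2, 3), keepdim=True)
        return self.uncompress(out)


# Hypothetical usage with features from a fixed VGG-style encoder;
# the stylized features would then be fed to the matching decoder.
module = LinearTransformModule(channels=256, compressed=32)
content_feat = torch.randn(1, 256, 64, 64)
style_feat = torch.randn(1, 256, 64, 64)
stylized_feat = module(content_feat, style_feat)
```

Because the transform is produced by a single feed-forward pass rather than an iterative optimization or a per-image matrix decomposition, this formulation is what makes the method fast enough for video.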
Paper
Code [GitHub link]
Poster [PDF link]
More Results

Video style transfer results using a shallower auto-encoder (ReLU 31)

Video style transfer results using a deeper auto-encoder (ReLU 51)
Bibtex
@inproceedings{li2018learning,
  author    = {Li, Xueting and Liu, Sifei and Kautz, Jan and Yang, Ming-Hsuan},
  title     = {Learning Linear Transformations for Fast Image and Video Style Transfer},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2019}
}