Neural style transfer (NST) is an optimization technique that uses a convolutional neural network (CNN) to extract statistics from images: the style of one image and the content of another are separated and then recombined according to those statistics. The output is a blend of the two images.
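The "statistics" in question are typically channel correlations of CNN feature maps, known as Gram matrices. As a rough illustration (not code from the notebook), a Gram matrix can be computed like this, assuming the activations are already available as a NumPy array:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Style statistic used in NST: correlations between feature channels.

    features: CNN activations with shape (channels, height, width).
    Returns a (channels, channels) Gram matrix; matching these matrices
    between the style exemplar and the output is what transfers the style,
    while content is matched on the raw activations themselves.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)      # flatten the spatial dimensions
    return flat @ flat.T / (c * h * w)     # normalized channel correlations
```

Minimizing the difference between the Gram matrices of the style image and the output, together with a content loss on the activations, produces the blend described above.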
I used Brycen Westgarth and Tristan Jogminas's NST Transition Video Processing notebook on Google Colab to generate the animation, feeding it the source exemplar and video directly.
I first tried their GitHub version but ran into many issues: the repository hasn't been updated since 2021, while I'm running newer versions of Python and its modules. After a few hours of troubleshooting, I switched to their Google Colab version, which worked because the environment comes preconfigured. For the most part I just ran the cells in order, with a few changes. Pretty streamlined.
When I tried using a single source exemplar and edited the corresponding list in the code, I got index errors. To work around this, I reverted the code and instead placed three copies of the same exemplar image in the style_ref folder.
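The duplication step can be scripted rather than done by hand. This is a hypothetical helper (the file names and the `style_ref` path are illustrative, not from the notebook) that copies one exemplar as many times as the style list expects:

```python
import shutil
from pathlib import Path

def duplicate_style(src: str, dest_dir: str, copies: int = 3) -> list:
    """Copy one style exemplar `copies` times into the style folder.

    Hypothetical helper: the notebook expects as many style images as
    entries in its style list, so duplicating a single image sidesteps
    the index errors without editing the code.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    out = []
    for i in range(copies):
        target = dest / f"style_{i}{Path(src).suffix}"
        shutil.copy(src, target)          # identical copies, distinct names
        out.append(str(target))
    return out
```

With three identical styles, the "transition" between them is a no-op, so the whole video gets one consistent style.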
Then I uploaded the original video directly from my Desktop and ran it; the program processed it frame by frame.
Once the video was uploaded, I changed some of the configuration to match my desired output. Below is the code containing the configuration I used: I set the frame height to 500 pixels, set the fps to 29 to stay consistent with the other videos, kept the STYLE_SEQUENCE list (which refers to the aforementioned style_ref folder), and set PRESERVE_COLORS to False, the flag that controls whether the output keeps the original video's colors.
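For readers without the notebook open, the configuration cell looks roughly like this. The variable names follow the style_transfer repository, but this is an approximation, not a verbatim copy of that cell:

```python
# Approximate configuration cell; names follow the style_transfer repo
# but the exact notebook may differ.
FRAME_HEIGHT = 500           # output frame height in pixels
INPUT_FPS = 29               # matches the other videos in the project
STYLE_SEQUENCE = [0, 1, 2]   # indices into the images in style_ref/
PRESERVE_COLORS = False      # whether to keep the original video's colors
```

Since all three STYLE_SEQUENCE entries point at copies of the same image, the style stays constant across the video.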
After this step, I ran the NST cell; the program stylized each frame, rendered the video, and I downloaded the output.
The resulting NST output is shown on the Home Page.