Realtime Interactive Scene Style Transfer
Aman Tiwari, Tong Bai, Giada Sun
Photographic Video Rendering of Procedurally Generated Cities with Pix2pix
Lingdong Huang, Hizal Celik, Shouvik Mani
This project combines the generative power of pix2pix and pix2pixHD with a quirky, procedurally generated city in Unity, whose textures are the semantic labels of the classic Cityscapes dataset and whose inhabitants roam freely (and aimlessly)! Feeding this generative city through trained pix2pix or pix2pixHD models yields a photo-realistic video rendering of a nonexistent metropolis bustling with life, or a comical artistic rendering of a cubist world, depending on how much you’re squinting at your screen.
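The pipeline the blurb describes — Unity renders semantic-label frames, and a trained image-to-image model translates them one by one into a video — can be sketched minimally as below. This is only an illustration: the `generator` function is a hypothetical stand-in for a real trained pix2pix forward pass, and the 256×256 frame size is an assumption, not taken from the project.

```python
import numpy as np

def generator(label_frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained pix2pix generator.

    A real generator would be a trained network's forward pass mapping a
    semantic-label frame to a photorealistic frame; here we just normalize
    the input so the sketch runs end to end.
    """
    return np.clip(label_frame.astype(np.float32) / 255.0, 0.0, 1.0)

def render_video(label_frames):
    """Translate each semantic-label frame independently, in order."""
    return [generator(f) for f in label_frames]

# Three dummy 256x256 RGB label frames stand in for Unity's rendered output.
frames = [np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
          for _ in range(3)]
photo_frames = render_video(frames)
```

The translated frames would then be re-encoded as video, preserving the original frame order and rate.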
BirdGAN: A Dream & Nightmare-Like Representation of the Silly Creatures We Call Birds
Oscar Dadfar, Hai Pham, Yang Yang
BirdGAN set out to produce dream-like and nightmare-like birds. A viewer’s appreciation for these silly creatures usually colors whether they perceive birds as dream-like or nightmare-like. We wanted to produce images of birds that, regardless of the viewer’s personal feelings about birds, would still lead viewers to agree on whether a given image was truly a dream or a nightmare representation of birds.
Machine Learning Iterations of Yoko Ono’s Scripts
Shenghui Jia, Zheng Jiang, Nico Zevallos
We used Shutterstock images of “salad” as our dataset because these images are the canonical example of almost absurdly generic photos, free to use only because of the garishly unignorable watermark defacing them. Generating a new set of such images deepens the irony: more stock images of salad are surely unnecessary and only augment the already excessive supply of these uninspired pictures. Yet the formulaic nature of these images makes the dataset particularly interesting, since believable stock images of salad should be reproducible by training artificial neural networks. Though the concept underlying our work pushes the mind toward the delicate balance between ethics and legality in questions of originality and ownership, the art is ultimately meant to be viewed with humor, approaching this significant and weighty concept with whimsy in a light-hearted manner.
Confronted: A VR Experience
Zaria Howard, Tatyana Mustakos, Char Stiles
What would happen if you had to face the scary, AI-laced creations that you make? We wanted to recontextualize sketch2face research and reimagine the relationship between the sketcher and the sketched human. This manifests as a sketch2face implementation in VR.
Laa Laa vs. the “Computerized” World
Jeena Yin, Hanyuan Zhang, Joey Gibli
“Dream of Dance Dad” is a short animated film created from our hand drawings and a pix2pix model. We inverted the colors in post-processing to create a neon-in-the-dark image. The audio is recorded in three channels of Ken voice-acting as a small human dog-child-man, deep in a REM-like state, reminiscing about a past life as a sailor-child with an indomitable passion for dance. We drew the image frames, processed them through pix2pix, and stitched them together with the audio. The imagery is meant to evoke the simple, childlike absurdity of the narrative. While we originally thought only to draw stick-figure dancers, experimenting with other forms produced interesting images as well.
DeepCloud, Episode 0.1: Of Hands and Points
Pedro Veloso, Ardavan Bidgoli
After spending several weeks exploring different machine learning methods and their respective strengths, I felt a strong affinity for using these methods to assist an artistic process. Currently, different methods excel at different things: emulating style, producing texture, even generating human figures. Seeing a lack of work focused on generating compositions, I chose composition as the emphasis of my creative process.