Artificial Imagination: Deconstructing Style

Aim and Goals

The aim of this project is to offer one answer to the complicated questions surrounding AI art today:

The purpose of Artificial Imagination is to examine the interaction between my own hand-made art and a machine interface trained solely on my own works. The system cannot exist without my input: I must always give it an image to transform. I view AI as I view any other tool, one that can be used in artistic contexts but is not meant to replace the people making the work.

I hope that these explorations of transforming images based on my existing work offer more insight into the human-machine interface: not only how we train the machine, but also how we take its outputs back to influence our own work, in a circular motion. Rather than looking vertically at the advancement of the text-to-image models popular today, I challenge the audience to think laterally: how can we involve these processes not only to inspire us, but to learn in ways humans cannot access on their own?

When it comes to exhibition, I hope to share more than transformed images: I hope to be present and create pieces live, even printing them to hang on the wall behind me. I also hope a durational aspect emerges. Because of the nature of training and datasets, I plan at some point to cut the model off from outside art and let it evolve on its own generations, interfacing purely with my drawn inputs and its own outputs.

Conceptual

Example Applications of Pix2Pix (Phillip Isola et al., 2017)

In short, this evolving piece uses an image-to-image generative adversarial network known as Pix2Pix. Pix2Pix is trained on pairs of input and output images; when the user later gives it a new input, it tries to bridge the gap based on the dataset it was trained on. One could see this as a kind of mapping or translation, with examples in the original paper such as transforming satellite photographs from Google Earth into their map counterparts.
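To make the paired-training idea concrete, the following is a minimal sketch of how a Pix2Pix-style training step works, not the authors' implementation: a toy generator learns to map an input image (e.g. a sketch) to an output image (a finished piece), while a discriminator judges whether an (input, output) pair looks real. The tiny networks, loss weights, and random tensors here are illustrative stand-ins.

```python
# Minimal Pix2Pix-style training step (illustrative sketch, not the original code).
import torch
import torch.nn as nn

# Toy stand-ins for the real U-Net generator and PatchGAN discriminator.
generator = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
discriminator = nn.Sequential(nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                              nn.Conv2d(64, 1, 4, stride=2, padding=1))

adv_loss = nn.BCEWithLogitsLoss()   # adversarial loss
l1_loss = nn.L1Loss()               # pixel-wise reconstruction loss
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(sketch, finished):
    """One training step on a paired batch: input sketch -> finished piece."""
    fake = generator(sketch)

    # Discriminator: real (input, output) pairs should score 1, generated pairs 0.
    d_real = discriminator(torch.cat([sketch, finished], dim=1))
    d_fake = discriminator(torch.cat([sketch, fake.detach()], dim=1))
    d_loss = adv_loss(d_real, torch.ones_like(d_real)) + \
             adv_loss(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: fool the discriminator while staying close to the real output.
    d_fake = discriminator(torch.cat([sketch, fake], dim=1))
    g_loss = adv_loss(d_fake, torch.ones_like(d_fake)) + 100 * l1_loss(fake, finished)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Random tensors standing in for a paired batch of 256x256 images.
sketch, finished = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
print(train_step(sketch, finished))
```

The L1 term is what keeps the generated image anchored to the target; the adversarial term pushes it toward plausible-looking detail rather than a blurry average.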

I therefore map my own art into the dataset, fully aware of the limitations and biases of machine learning, such as the need for a large dataset and for good variety within it. But what if we flip this on its head? What if we embrace the imperfections and run with the interesting results they produce?
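As a rough illustration of what "mapping my own art into the dataset" might look like in practice, here is a small sketch of assembling paired training images, with each sketch placed side by side with its finished counterpart in the format commonly used for Pix2Pix datasets. The folder names and file layout are assumptions for this example.

```python
# Assemble side-by-side (input | target) pairs from my own work (hypothetical paths).
from pathlib import Path
from PIL import Image

SKETCH_DIR = Path("dataset/sketches")    # hypothetical folder of line sketches
FINAL_DIR = Path("dataset/finished")     # hypothetical folder of finished pieces
OUT_DIR = Path("dataset/pairs")
OUT_DIR.mkdir(parents=True, exist_ok=True)
SIZE = 256                               # Pix2Pix is typically trained on 256x256 images

for sketch_path in sorted(SKETCH_DIR.glob("*.png")):
    final_path = FINAL_DIR / sketch_path.name
    if not final_path.exists():
        continue  # skip sketches without a finished counterpart
    sketch = Image.open(sketch_path).convert("RGB").resize((SIZE, SIZE))
    final = Image.open(final_path).convert("RGB").resize((SIZE, SIZE))

    # One canvas per pair: input on the left, target on the right.
    pair = Image.new("RGB", (SIZE * 2, SIZE))
    pair.paste(sketch, (0, 0))
    pair.paste(final, (SIZE, 0))
    pair.save(OUT_DIR / sketch_path.name)
```

Even a small, imperfect collection built this way exposes exactly the biases described above, which is part of what the project sets out to embrace.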

For example, I am primarily a character artist: training the model to translate sketch to full piece may produce interesting results if I, say, attempt abstract art in my sketches. How would one even go about sketching an abstraction? In a sense, I wish to take my character-driven mindset and push it through a machine that thinks in ways I may find alien or hard to understand. In doing this, I also marry the idea of abstraction in art to the concept of abstraction in computing.