AI-Generated Imagery

Concept & Inspiration

In the summer of 2022, AI image-generation tools (e.g., DALL-E, Midjourney, Stable Diffusion) began receiving extensive media coverage after one of them was used to create an image that won the Colorado State Fair's digital arts competition. These text-to-image tools have introduced a new pathway for creating art and have challenged traditional notions of authorship and authenticity.


The DigiFab group was inspired by those conversations and wanted to investigate AI image generation as a way of exploring concepts and imagery. The AI interface offers a new digital tool for creative practice. Is it a cheat? Or is it a tool that pushes one's creativity? We decided to explore these programs and create images ourselves, to see what they can do and how they might be useful.

"Apps like DALL-E 2 and Midjourney are built by scraping millions of images from the open web, then teaching algorithms to recognize patterns and relationships in those images and generate new ones in the same style."


Source: New York Times

Creation Process

To create this digital piece, each of the DigiFab artists and designers experimented with Midjourney to create a series of images related to their individual pieces. In keeping with the theme of Transformations, each image hybridized two different subjects, expressed as text prompts that Midjourney used to generate new artwork.


Once the prompt is submitted, the software draws on patterns learned from millions of images and synthesizes a new image based on the prompt. In the prompt, the user describes the kinds of imagery to use, adds descriptive adjectives, defines technical details such as the aspect ratio, and can specify the rendering style (e.g., graphite drawing vs. photorealism).
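As an illustration, a Midjourney prompt combining two subjects with descriptive adjectives, a rendering style, and an aspect-ratio parameter might look like the hypothetical example below (the subjects and wording here are invented for demonstration, not taken from the exhibited works):

```
/imagine prompt: a jellyfish hybridized with a hot-air balloon, delicate, luminous, graphite drawing --ar 16:9
```

Here `/imagine` is Midjourney's image-generation command and `--ar 16:9` sets a widescreen aspect ratio.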


Typically, Midjourney generates four image variations from the prompt. The user can then create iterative variations of any of the four and/or upscale them to produce a more detailed, higher-resolution image.

At the end of this experimentation process, each artist selected the images that best represented their work, along with the prompts used to generate them. These were compiled and shared with the rest of the group to inspire the creation of new images.

Results

A sample of the images created by the artists is projected in the gallery; these images are updated every month as the artists explore new concepts and as the AI image-generation algorithms evolve. Additionally, to preserve the didactic nature of the show, the piece presents each image alongside the prompt used to generate it.

Tools

To create this piece, the group used Midjourney, an AI program that creates images from textual descriptions. Although other tools serve the same purpose, at the time the work was prepared for the exhibition, Midjourney was in open beta, its chat-based interface was easy to learn and use, and the quality of its images was superior to that of other free tools. We acknowledge that tools as powerful as Midjourney, or more so, exist, such as DALL-E and Stable Diffusion, and we would like to explore them in the future.

Midjourney's chat-based interface

Midjourney website showing a selection of user-generated images

What's Next?

Midjourney and other AI text-to-image tools are changing how we conceive of art and the art-making process, a conversation that will continue to unfold in the fields of art, design, and curatorial studies in the coming years. For our group, this has been an opportunity to experiment with the software and see what we can create with it. It may become a tool for future ideation and concept exploration: a way of having a group brainstorm with an AI bot.