Artificial Intelligence & Art

Students: Katerina Papadopoulou, Alkisti Porioti

Class: Α3 - Grade: A - School: 1st Arsakeio Senior High School of Psychico

Course: IT Applications

School year 2022-2023


Introduction

New technologies, and in particular artificial intelligence, are drastically changing the nature of creative processes. Computers are playing very significant roles in creative activities such as music, architecture, fine arts, and science. Indeed, the computer is already a canvas, a brush, a musical instrument, and so on. However, we believe that we must aim at more ambitious relations between computers and creativity. Rather than just seeing the computer as a tool to help human creators, we could see it as a creative entity in its own right. This view has triggered a new subfield of Artificial Intelligence called Computational Creativity. This article addresses the question of the possibility of achieving computational creativity through some examples of computer programs capable of replicating aspects of creative artistic behavior. We end with some reflections on the recent trend of democratization of creativity by means of assisting and augmenting human creativity. 


When artificial intelligence plays music

Artificial intelligence has played a crucial role in the history of computer music almost since its beginnings in the 1950s. However, until quite recently, most effort had been on compositional and improvisational systems and little effort had been devoted to expressive performance. In this section we review a selection of some significant achievements in AI approaches to music composition, music performance, and improvisation, with an emphasis on the performance of expressive music.

Composing Music

Hiller and Isaacson’s (1958) work on the ILLIAC computer is the best-known pioneering work in computer music. Their chief result is the Illiac Suite, a string quartet composed following the “generate and test” problem-solving approach. The program generated notes pseudo-randomly by means of Markov chains. The generated notes were then tested against heuristic compositional rules of classical harmony and counterpoint. Only the notes satisfying the rules were kept. If none of the generated notes satisfied the rules, a simple backtracking procedure was used to erase the entire composition up to that point, and a new cycle was started. The goals of Hiller and Isaacson excluded anything related to expressiveness and emotional content. In an interview (Schwanauer and Levitt, 1993, p. 21), Hiller and Isaacson said that, before addressing the expressiveness issue, simpler problems needed to be handled first. After this seminal work, many other researchers based their computer compositions on Markov probability transitions, but with rather limited success judged by melodic quality. Indeed, methods relying too heavily on Markovian processes are not informed enough to produce high-quality music.
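As an illustration of the “generate and test” idea, the sketch below is our own minimal reconstruction, not Hiller and Isaacson's actual program: candidate notes are sampled from an assumed first-order Markov transition table, only candidates that pass simple stand-in rules are kept, and the piece is erased and restarted when no candidate passes.

```python
import random

# Illustrative "generate and test" sketch (our reconstruction, not Hiller and
# Isaacson's program): a first-order Markov chain proposes notes, heuristic
# rules accept or reject them, and the piece is restarted when no candidate passes.

TRANSITIONS = {   # assumed transition table over scale degrees
    "C": ["D", "E", "G"], "D": ["C", "E", "F"], "E": ["D", "F", "G"],
    "F": ["E", "G", "A"], "G": ["C", "E", "A"], "A": ["F", "G", "B"],
    "B": ["C", "G", "A"],
}

SCALE = "CDEFGAB"

def passes_rules(melody, candidate):
    """Toy stand-ins for harmony/counterpoint rules: no immediate repetition,
    and no run of four notes moving in the same direction."""
    if melody and candidate == melody[-1]:
        return False
    if len(melody) >= 3:
        idx = [SCALE.index(n) for n in melody[-3:]] + [SCALE.index(candidate)]
        steps = [b - a for a, b in zip(idx, idx[1:])]
        if all(s > 0 for s in steps) or all(s < 0 for s in steps):
            return False
    return True

def generate_melody(length=16, max_restarts=100):
    """Generate candidates, test them against the rules, and (as in the Illiac
    Suite procedure) erase everything and start a new cycle if no candidate fits."""
    for _ in range(max_restarts):
        melody = ["C"]
        while len(melody) < length:
            candidates = [n for n in TRANSITIONS[melody[-1]] if passes_rules(melody, n)]
            if not candidates:
                break   # dead end: restart from scratch
            melody.append(random.choice(candidates))
        if len(melody) == length:
            return melody
    return None

print(generate_melody())
```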

However, not all the early work on composition relies on probabilistic approaches. A good example is the work of Moorer (1972) on tonal melody generation. Moorer’s program generated simple melodies, along with the underlying harmonic progressions, with simple internal repetition patterns of notes. This approach relies on simulating human composition processes using heuristic techniques rather than on Markovian probability chains. Levitt (1993) also avoided the use of probabilities in the composition process. He argues that “randomness tends to obscure rather than reveal the musical constraints needed to represent simple musical structures.” His work is based on constraint-based descriptions of musical styles. He developed a description language that allows expressing musically meaningful transformations of inputs, such as chord progressions and melodic lines, through a series of constraint relationships that he calls “style templates.” He applied this approach to describe a traditional jazz walking bass player simulation as well as a two-handed ragtime piano simulation.
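By way of contrast with the probabilistic approach, the following sketch, a simplification of our own rather than Levitt's actual system, treats a walking bass line as the product of a few explicit constraints over an assumed chord progression; no probability table is involved.

```python
# Illustrative constraint-based sketch in the spirit of "style templates" (our own
# simplification, not Levitt's system): each bass note is chosen to satisfy
# explicit constraints rather than drawn from a probability table.

CHORD_TONES = {           # pitch classes of the chord tones (assumed progression)
    "C7": [0, 4, 7, 10],  # C E G Bb
    "F7": [5, 9, 0, 3],   # F A C Eb
    "G7": [7, 11, 2, 5],  # G B D F
}

def circular_distance(a, b):
    """Distance between two pitch classes on the 12-tone circle."""
    return min((a - b) % 12, (b - a) % 12)

def walking_bass(progression, beats_per_chord=4):
    line = []
    for i, chord in enumerate(progression):
        root = CHORD_TONES[chord][0]
        next_root = CHORD_TONES[progression[(i + 1) % len(progression)]][0]
        for beat in range(beats_per_chord):
            if beat == 0:
                note = root                      # constraint: chord root on the downbeat
            elif beat == beats_per_chord - 1:
                note = (next_root - 1) % 12      # constraint: approach the next root by a half step
            else:
                # constraint: move to a nearby chord tone, never repeating the previous note
                prev = line[-1]
                candidates = [t for t in CHORD_TONES[chord] if t != prev]
                note = min(candidates, key=lambda t: circular_distance(t, prev))
            line.append(note)
    return line

print(walking_bass(["C7", "F7", "G7", "C7"]))
```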

The early systems by Hiller-Isaacson and Moorer were both based on heuristic approaches: the generation of the melody and the harmony was driven by rules describing how notes or chords may be put together. The most interesting AI components of such systems are the applicability rules, which determine when a given melody or chord generation rule may be applied, and the weighting rules, which indicate, by means of a weight, how likely an applicable rule is to be used. We can already appreciate the use of meta-knowledge in this early work.
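The interplay of applicability and weighting rules can be sketched as follows; the specific rules, weights, and pitch encoding are illustrative assumptions of ours, not the original systems' rule base.

```python
import random

# Illustrative sketch of applicability and weighting rules (an assumption about the
# general mechanism, not the original systems' actual rule base). Pitches are
# encoded as semitones above C (0 = C, 7 = G, 11 = B).

class Rule:
    def __init__(self, name, applies, apply, weight):
        self.name = name
        self.applies = applies   # applicability rule: may this generation rule fire here?
        self.apply = apply       # the melody generation rule itself
        self.weight = weight     # weighting rule: likelihood of being chosen when applicable

RULES = [
    Rule("repeat last note",  lambda m: len(m) > 0,                lambda m: m[-1],     1.0),
    Rule("step up",           lambda m: len(m) > 0 and m[-1] < 11, lambda m: m[-1] + 1, 3.0),
    Rule("leap to the fifth", lambda m: True,                      lambda m: 7,         2.0),
]

def next_note(melody):
    """Pick one applicable rule at random, weighted by its weighting rule."""
    applicable = [r for r in RULES if r.applies(melody)]
    weights = [r.weight for r in applicable]
    chosen = random.choices(applicable, weights=weights, k=1)[0]
    return chosen.apply(melody)

melody = [0]                       # start on C
for _ in range(7):
    melody.append(next_note(melody))
print(melody)
```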





Image 1: Hiller and Isaacson’s (1958) work on the ILLIAC computer

Image 2: Mozart on artificial art 

Imagery



Image 3: An image generated by DALL-E 2 based on the text prompt "1960's art of cow getting abducted by UFO in midwest" 

Image 4: A second image generated by DALL-E 2 from the same prompt

Many mechanisms for creating AI art have been developed, including procedural "rule-based" generation of images using mathematical patterns, algorithms which simulate brush strokes and other painted effects, and artificial intelligence or deep learning algorithms.

One of the first significant AI art systems is AARON, developed by Harold Cohen beginning in the late 1960s at the University of California at San Diego. AARON is the most notable example of AI art in the era of GOFAI programming because of its use of a symbolic rule-based approach to generate technical images. Cohen developed AARON with the goal of being able to code the act of drawing. In its primitive form, AARON created simple black-and-white drawings, which Cohen would later finish by painting them. Over the years, he also developed a way for AARON to paint: he designed it to use special brushes and dyes that were chosen by the program itself, without mediation from Cohen.

Generative adversarial networks (GANs) were designed in 2014. This kind of system uses a "generator" to create new images and a "discriminator" to decide which created images are considered successful. More recent models use a Vector Quantized Generative Adversarial Network combined with Contrastive Language–Image Pre-training (VQGAN+CLIP).
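To make the generator/discriminator idea concrete, the following is a minimal GAN training sketch in PyTorch, assuming toy two-dimensional data, small fully connected networks, and arbitrary hyperparameters of our own choosing; it illustrates the adversarial training loop only, not a production image model such as DALL-E or VQGAN+CLIP.

```python
import torch
from torch import nn, optim

# Minimal GAN sketch with toy 2-D data and assumed hyperparameters; it shows the
# adversarial loop only, not a production image model.

latent_dim, data_dim = 8, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = optim.Adam(generator.parameters(), lr=1e-3)
d_opt = optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # "Real" data: points from a shifted Gaussian standing in for real images.
    real = torch.randn(64, data_dim) * 0.5 + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator: learn to label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to make the discriminator label its samples as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(4, latent_dim)))
```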

Programs

Several programs use text-to-image models to generate a variety of images based on text prompts. They include OpenAI's DALL-E, which released a series of images in January 2021; Google Brain's Imagen and Parti, which were announced in May 2022; Microsoft's NUWA-Infinity; and Dream by Wombo. The input can also include images, keywords, and configurable parameters such as artistic style, which is often used via key phrases like "in the style of [name of an artist]" in the prompt and/or the selection of a broad aesthetic or art style.
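As a small, purely hypothetical illustration of how such a prompt with a style key phrase might be assembled (the helper below is our own invention, not the interface of DALL-E, Imagen, Parti, NUWA-Infinity, or Dream):

```python
# Hypothetical helper for assembling a text-to-image prompt with a style key phrase.
# This is an illustration only, not the interface of any particular service.

def build_prompt(subject, artist=None, aesthetic=None):
    parts = [subject]
    if artist:
        parts.append(f"in the style of {artist}")   # the common "in the style of ..." key phrase
    if aesthetic:
        parts.append(aesthetic)                     # a broad aesthetic or art style
    return ", ".join(parts)

print(build_prompt("a cow being abducted by a UFO in the midwest",
                   artist="a 1960s poster artist",
                   aesthetic="grainy retro illustration"))
```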

Image 5: An image generated with DALL-E 2

Concerns about impact on artists


Image 6: Lillian Schwartz's Comparison of Leonardo's self-portrait and the Mona Lisa, based on Schwartz's Mona Leo. An example of a collage of digitally manipulated photographs

Some artists in 2022 raised concerns about the impact AI image generators could have on their ability to earn money, particularly if AI images are used to replace artists working in illustration and design. In August 2022, a text-to-image AI illustration won the first-place $300 prize in a digital art competition at the Colorado State Fair. Digital artist R. J. Palmer said in August 2022 that "I could easily envision a scenario where using AI a single artist or art director could take the place of 5-10 entry level artists... I have seen a lot of self-published authors and such say how great it will be that they don’t have to hire an artist," adding that "doing that kind of work for small creators is how a lot of us got our start as professional artists." Polish digital artist Greg Rutkowski said in September 2022 that "it's starting to look like a threat to our careers," adding that it has gotten more difficult to search for his work online because many of the images returned by search engines are generated by AI that was prompted to mimic his style.

An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy.


This year, the Colorado State Fair’s annual art competition gave out prizes in all the usual categories: painting, quilting, sculpture.

But one entrant, Jason M. Allen of Pueblo West, Colo., didn’t make his entry with a brush or a lump of clay. He created it with Midjourney, an artificial intelligence program that turns lines of text into hyper-realistic graphics.

Mr. Allen’s work, “Théâtre D’opéra Spatial,” took home the blue ribbon in the fair’s contest for emerging digital artists — making it one of the first A.I.-generated pieces to win such a prize, and setting off a fierce backlash from artists who accused him of, essentially, cheating.

Reached by phone on Wednesday, Mr. Allen defended his work. He said that he had made clear that his work — which was submitted under the name “Jason M. Allen via Midjourney” — was created using A.I., and that he hadn’t deceived anyone about its origins.

“I’m not going to apologize for it,” he said. “I won, and I didn’t break any rules.” 

Image 7: The picture that won first place, created by AI

Sources