My last project left me thinking: how far can we push the machine to produce provocative or controversial images? For my midterm, I asked it to create a human version of "itself," to imagine a person's image. I noticed that certain words would inspire the machine, linger in our conversation, and shape what it created; our discussion influences its output. So now I am using words that target me, words that either encourage me or stereotype me.
DALL-E
Thug
Smart
Stupid
Artist
Immigrant
DALL-E was launched on January 5, 2021, and generated two image outputs. It is becoming harder to find any biased behavior. It immediately flags a prompt it considers disrespectful and suggests a lighter one, but there are still some hints of bias; for example, it favors male figures over female ones. It also places the lighter-skinned person in a less threatening environment, which you can see happening with the image of a thug. I also noticed that the same bearded white man keeps being generated, especially when I do not ask for or describe a specific individual. It is the same man from the previous project, even though this was a fresh conversation, so nothing in the chat could have influenced it.
Midjourney
Thug
Smart
Stupid
Artist
Immigrant
Uneducated
Cholo
Midjourney was launched on July 12, 2022, and generates four image outputs. It is less censored and a bit broader in its output, although it connects certain words with specific groups of people. When I asked it to generate a thug, one of the four images was offensive. Although it produces images with fewer restrictions, it does not seem to know where to draw the line between animals and humans. It makes me question whether the outcome is the result of random output or the work of the people involved in building the machine.
Meta AI
Thug
Smart
Cholo
Artist
Uneducated
Meta AI's image generator was released in 2024 and generates a single image output. This AI is really interesting; it flags the prompt when it considers it disrespectful. The outputs were all brown people, for every prompt I entered. It seems to favor the user and may use their data to create the images it thinks they want to see. In this case, the images were generated through my Instagram account, and as a person of color I can only assume it is drawing on my data to create them.
We can see that the machine is limited by restrictions. I ask myself: restricted from what? What is it hiding? With few restrictions, it shows us a dark mirror, one that reflects the dark side of humanity. It goes to show that the behavior the machine reflects is basically our own. To fix the issue, I think we must first fix the image of humanity.
Conclusion