Project: Unsupervised — Machine Hallucinations by Refik Anadol (Displayed at MoMA)
1. What type of machine learning models did the creator use?
The creator used generative adversarial networks (GANs) to drive the Machine Hallucinations and Unsupervised projects, specifically DCGAN (Deep Convolutional GAN), PGAN (Progressive GAN), and StyleGAN2-ADA. These models are designed to generate new images by learning the patterns and distributions within large datasets. A GAN consists of a generator that creates images and a discriminator that evaluates their realism; training the two against each other allows the system to iteratively improve and produce highly detailed, novel visual content.
2. What data might have been used to train the machine learning model?
The models were trained on a large and diverse dataset derived primarily from the Museum of Modern Art (MoMA) digital archive, which includes over 138,000 items spanning paintings, photography, video, and digital games. In addition, publicly available digital art resources were incorporated to expand the dataset. The data was processed to extract metadata, organize the items into clusters or thematic categories, and map each item into a high-dimensional latent space, enabling the AI to learn both visual and semantic patterns across more than 200 years of artistic production.
3. Why did the creator of the project choose to use this machine learning model?
Anadol chose GAN-based models because they are uniquely suited to generative tasks that require creativity and the exploration of new possibilities. These models can autonomously discover underlying patterns in the data without manual labeling, aligning with the project’s goal of machine-driven “hallucinations” of art. Furthermore, they provide a way to explore high-dimensional latent spaces, allowing the artist to reinterpret historical collections and generate novel compositions. The use of these models also resonates with artistic concepts such as automatism and chance, bridging technical innovation and creative expression.
Week 4 project: Swirlscape
This project was developed as an exploration of combining particle systems with real-time hand tracking using the ml5.js handPose model. I began by designing a basic particle system, focusing on behaviors such as attraction, repulsion, and friction. Once the core system was stable, I integrated handPose so that the user’s index finger could act as a gravitational point, pulling particles toward it. The next step was to experiment with gesture-based states: a “pinch” gesture became the trigger for storing energy and creating a swirling motion around the finger, while the release of the pinch produced an explosive burst of particles.
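Below is a minimal sketch of that core setup: particles pulled toward the index fingertip reported by ml5.js handPose, plus a tangential swirl force for the pinch state. It is a simplified reconstruction rather than the project's actual code; it assumes the current ml5.js handPose API (ml5.handPose(), detectStart(), keypoints named like "index_finger_tip"), and every constant (particle count, force strengths, friction) is illustrative.

```javascript
let handPose;
let video;
let hands = [];
let particles = [];

function preload() {
  handPose = ml5.handPose();
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Keep the latest detection results; detectStart runs continuously.
  handPose.detectStart(video, (results) => (hands = results));
  for (let i = 0; i < 300; i++) {
    particles.push(new Particle(random(width), random(height)));
  }
}

function draw() {
  background(0, 40); // translucent background leaves short motion trails

  // Use the index fingertip of the first detected hand as the attractor.
  let fingertip = null;
  if (hands.length > 0) {
    const tip = hands[0].keypoints.find((k) => k.name === "index_finger_tip");
    if (tip) fingertip = createVector(tip.x, tip.y);
  }

  for (const p of particles) {
    if (fingertip) p.attract(fingertip); // swirl(fingertip) runs instead while a pinch is held
    p.update();
    p.show();
  }
}

class Particle {
  constructor(x, y) {
    this.pos = createVector(x, y);
    this.vel = p5.Vector.random2D();
    this.acc = createVector(0, 0);
    this.mass = random(1, 3);
  }

  attract(target) {
    // Gravity-like pull toward the fingertip, with the distance clamped
    // so the force never blows up when a particle gets very close.
    const force = p5.Vector.sub(target, this.pos);
    const d = constrain(force.mag(), 20, 200);
    force.setMag((this.mass * 10) / d);
    this.acc.add(force.div(this.mass));
  }

  swirl(target) {
    // Tangential force: rotate the pull 90 degrees so particles orbit
    // the fingertip instead of collapsing onto it.
    const toTarget = p5.Vector.sub(target, this.pos);
    const d = constrain(toTarget.mag(), 20, 200);
    const tangent = createVector(-toTarget.y, toTarget.x).setMag(40 / d);
    this.acc.add(tangent.div(this.mass));
  }

  update() {
    this.vel.add(this.acc);
    this.vel.mult(0.95); // friction so motion settles instead of accelerating forever
    this.pos.add(this.vel);
    this.acc.mult(0);
  }

  show() {
    noStroke();
    fill(255, 180);
    circle(this.pos.x, this.pos.y, 4);
  }
}
```

Clamping the distance before computing the force is the main stabilizer here: without it, a particle sitting right on the fingertip receives a near-infinite pull.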
During development, I encountered several challenges. At first, the pinch detection was so sensitive that small finger movements were misread as pinches, triggering unwanted explosions. To address this, I added smoothing logic that stabilized the gesture recognition. I also found the explosion effect was either too weak or too chaotic, which required iterative tuning of particle mass, force, and swirl randomness. Some early versions even froze because too many particles were pushed outward at once. Through debugging and constant adjustment, I gradually reached a balance between smooth interaction and dynamic visual expressiveness.
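The smoothing logic ended up looking roughly like the sketch below: the thumb-to-index distance is low-pass filtered and then passed through two separated thresholds (hysteresis), so jitter near a single cutoff can no longer flip the pinch state back and forth. The thresholds, the lerp factor, and the helper name updatePinch are illustrative rather than the exact values in the final sketch.

```javascript
let smoothedPinchDist = null; // low-pass filtered thumb-to-index distance
let isPinching = false;

function updatePinch(hand) {
  const thumb = hand.keypoints.find((k) => k.name === "thumb_tip");
  const index = hand.keypoints.find((k) => k.name === "index_finger_tip");
  if (!thumb || !index) return isPinching;

  const wasPinching = isPinching;
  const d = dist(thumb.x, thumb.y, index.x, index.y);

  // Exponential smoothing averages out per-frame jitter in the keypoints.
  smoothedPinchDist = smoothedPinchDist === null ? d : lerp(smoothedPinchDist, d, 0.2);

  // Hysteresis: the pinch starts below 25 px but only releases above 40 px,
  // so tiny tremors around one threshold cannot fire repeated explosions.
  if (!isPinching && smoothedPinchDist < 25) isPinching = true;
  else if (isPinching && smoothedPinchDist > 40) isPinching = false;

  // The release transition (true -> false) is the moment the burst fires.
  if (wasPinching && !isPinching) {
    // triggerExplosion(); // hypothetical hook for the outward impulse
  }
  return isPinching;
}
```

With two separated thresholds, a brief wobble in the keypoints cannot cross both of them at once, so the state only changes when the gesture clearly does.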
Ultimately, the project became not only a technical exercise in force-based particle systems, but also an experiment in how simple human gestures can translate into playful, almost poetic visual events :)
https://editor.p5js.org/VivianaviVi/sketches/Tf_mRBojJ