Smile Project

Deep Immersive Art with Realtime Human-AI Interaction

Welcome to Smile Project - Smile and Art will Smile Back!

It is our great pleasure to welcome you to our temple of smiles, where you will discover a new kind of painting and art performance. It is a world premiere - the first intelligent painting that recognizes human smiles, artistic expressions and body movements and responds, in the language of light and color.

Human and art meet each other with a smile! We have created a unique ensemble, joining human, art and technology, which establishes a cycle of creation, recognition and communication, building a bridge between the human spirit and an intelligence beyond. Smile Project uses deep convolutional networks and state-of-the-art computer vision to respond in realtime, through light and color, to the smiles, emotional expressions and movements of the human viewer, who thus becomes part of the creative process together with the system itself. The viewer is seen, recognized and responded to; the art then becomes the artistic mirror of the human, who is always in search of self in the infinite!

What is Smile Project

We propose a new system at the intersection of art, technology and artificial intelligence that responds to the user's smile, body poses and movements. Smile Project changes the way a user experiences art, through immersive human-AI interaction.


Smile Project proposes an original meeting between humans, art and artificial intelligence. It consists of a novel system: an ensemble of five paintings combined with fiber optics and the latest mobile technology, computer vision and deep learning techniques, in order to achieve an artistic human-AI interaction in realtime. The Smile Project ensemble recognizes the human face and smile, as well as the viewer's body poses and movements, and responds through lights and colors. Thus, a meta-language is created between the human and the AI-art, through which the system mirrors at a semantic level, by color and light, the human artistic expression. The user is engaged in the act of creation, such that each of her actions receives an artistic response from the AI ensemble of paintings. By focusing on the human smile and generally positive body movements (e.g. opening of the arms), the system encourages the user to express her positive emotions, with an observed therapeutic effect. We believe our concept has the potential to open new doors in the universe of human-computer interaction, in which a more vivid, powerful and positive artistic and aesthetic experience becomes possible.

How it is done

Art is mostly regarded as being static. Advances in both deep learning and hardware allow us to develop new ways of interaction between humans and art, by means of intelligent machines. In this work, humans interact with an AI-art system that is capable of detecting and using the viewer's smile and body pose in order to respond back, through multiple colored LED lights, which can vary in intensity and are placed on the painting's canvas.

The proposed Smile Project uses the latest computer vision and deep learning techniques for face detection, smile recognition and body pose estimation. These tasks are solved using state-of-the-art deep convolutional networks, and a novel algorithm combines their outputs in order to decide the appropriate response through colors and lights. The computer vision and deep learning algorithms run on a smartphone (Galaxy S10) with a relatively powerful GPU, using the OpenCV and TensorFlow Lite libraries. The system is composed of five paintings, and the smartphone is hidden behind the central one, with the camera being able to "see" the user. All the processing is done on this central smartphone, which controls five WiFi-enabled controllers, one for each painting. The controllers then dictate the intensity of light and color of their corresponding glow fiber optics, which have different combinations of colored LED sources on both ends. All processing and the final system response run in realtime at 18 fps.

The Smile Project is the main theme of the Diploma thesis of the visual artist Cristina Lazar at the National University of Arts in Bucharest. The engineering and programming part was done by AI researcher and engineer Nicolae Rosia, who graduated from the Military Technical Academy of Bucharest. The project was conceived and coordinated by Marius Leordeanu, Associate Professor at Politehnica University of Bucharest and Senior Researcher at the Institute of Mathematics of the Romanian Academy.

The Art of Smile Project

The art of Smile Project is to express aesthetically in unprecedented and unpredictable ways the nature of a Smile.

From the point of view of visual discourse, the Smile Project paintings are positioned in the direction of abstract art, with a focus on abstract expressionism. Cristina Lazar, the visual artist of Smile Project, was inspired by the second period of abstract expressionism, namely the color field period, and by artists such as Yves Klein, Gerhard Richter, Mark Rothko, Franz Kline, Willem de Kooning and Mary Weatherford. The modern sculptor Constantin Brancusi was another important source of inspiration. Cristina used a number of classic abstract painting techniques, including action painting, as well as several unconventional tools, such as those used in the field of building construction. The main idea is to create original ways of expression that can represent, in an aesthetic manner, the deep essence and nature of a Smile.

The paintings combine in complementary ways the foreground representation of a smile, through abstract geometric figures realized with different techniques (using a brush, stencil, optical fiber or directly by hand) and mediums (spray, oil, acrylic, flashe, charcoal or light), with the background colors and shapes that fill the "space" of expression. Each work creates a new world in which a different abstract entity, with a different personality, is brought to life and "smiles" in response to the viewer's own smile, pose and movement.

In the history of abstract art, it is important to consider the period when expressionist art was born, the 1950s, when artists came up with an artistic response to the times they lived in. They moved art in a direction meant to explore other spaces of visual language, going towards the essence of abstract art. Thus, art went beyond form. The value of an expressionist work became the way that work made one feel. In abstract expressionism, each artistic piece requires its time for the viewer to enter its visual universe. That universe is a novel "space" created through colors and shapes.

Smile Project brings a new artistic context that puts art and technology together and gives birth to new spaces and emotions. Artificial intelligence becomes itself "empathic" and responds to the user's own expressions and emotions. In the heart of our contemporary world, Smile Project proposes a way to extend art by establishing a deeper relationship between the viewer and the artwork. It involves the viewer in the process of creation by means of an intelligent machine that recognizes the viewer's expressions and then responds appropriately.

For creating the Smile Project paintings, Cristina Lazar used different techniques that are particular to abstract expressionism, such as throwing the color directly from the containers and then adding other layers using unconventional tools (e.g. big rotating brushes used in building construction, or her own body, fingers and hands) and less common body and hand movements (e.g. dancing). The entire artistic process was meant to express the idea of a smile, from the colors used and the artistic style to the way in which the paintings were created. The painting process continued even after the optical fibers were added and the AI system was turned on, in order to create a complete and novel aesthetic object, with all its elements together in a harmonious way. The ensemble was conceived to have two aesthetic identities: one during the day, when the optical fiber lights are less visible, and another during the night, when the participant can experience mostly the drawing created in realtime through the light response of the neural networks.

System Architecture

The system consists of five paintings, each having an optical fiber connected to different combinations of colored LED sources at both ends. Each light source is driven by a WiFi-enabled controller that, in turn, is controlled by the software running on a central smartphone.
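
As a hedged illustration, one way the central smartphone could address each WiFi controller is sketched below. The payload layout, IP addresses and port are assumptions for illustration only, not the project's actual protocol:

```python
import socket

# Hypothetical controller protocol: one UDP datagram per painting,
# carrying four one-byte channels (red, green, blue, intensity).
# The addresses, port and payload layout are illustrative assumptions.

CONTROLLERS = ["192.168.1.%d" % (10 + i) for i in range(5)]  # hypothetical IPs

def pack_light_command(red, green, blue, intensity):
    """Pack four 0-255 channel values into a 4-byte payload."""
    for v in (red, green, blue, intensity):
        if not 0 <= v <= 255:
            raise ValueError("channel values must fit in one byte")
    return bytes([red, green, blue, intensity])

def send_light_command(controller_ip, payload, port=4210):
    """Send one packed command to a controller (fire-and-forget UDP)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (controller_ip, port))
```

A fire-and-forget datagram per frame keeps the control loop simple: a lost packet only delays the light update until the next frame.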

The center painting has a small hole in the canvas, through which the smartphone's rear camera points towards the center of the scene. The smartphone observes the user's face, facial expression and body pose using three specialized deep convolutional networks. Their output is then analysed by a novel computer vision algorithm, which ultimately dictates, through the WiFi controllers, the optical fiber light response for each painting.

Processing Pipeline

The image processing pipeline starts by running three deep convolutional networks as shown above, followed by the lights controller algorithm.

Two convolutional nets run on the smartphone CPU, one after the other: the first performs face detection [1], followed by a second network that performs smile recognition [2]. A third deep net runs on the GPU and is trained to detect body joints [3, 4]. Their outputs are then fed into a novel computer vision algorithm that controls the lights for each painting. The system is developed in C++, uses TensorFlow Lite [5] and executes in near realtime (~18 fps) on an Android-based Samsung Galaxy S10.
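
The order of the three networks can be sketched as follows. Stand-in callables replace the real TensorFlow Lite interpreters, and the interfaces shown are assumptions rather than the project's actual code:

```python
# Sketch of the per-frame pipeline order described above. The three
# "networks" are passed in as plain callables standing in for the real
# TensorFlow Lite interpreters; their interfaces are assumptions.

def run_pipeline(frame, detect_faces, recognize_smile, estimate_pose):
    """Per-frame processing: face detection [1] and smile recognition [2]
    on the first detected face (both on the CPU in the real system), and
    body-joint detection [3, 4] (on the GPU).
    Returns (smile_score, joints)."""
    faces = detect_faces(frame)
    smile = recognize_smile(frame, faces[0]) if faces else 0.0
    joints = estimate_pose(frame)
    return smile, joints
```

With these two outputs in hand, the lights-controller algorithm decides the color and intensity response for each painting.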

Interaction Algorithm

We have created a novel computer vision algorithm for automatically controlling the lights of the optical fiber for each painting, in response to the user facial expression, body pose and movement.

The intensity of the center painting is influenced mainly by the intensity of the user's smile and by how much her arms are raised (including hands, elbows and shoulders). The near-center paintings are activated by a slight raise of the arms, whereas the paintings at the extreme left and right are activated by a large raise of the arms. The paintings on the left respond mainly to the left body parts, while the ones on the right respond to the right body parts.
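
A minimal sketch of this mapping is given below. The weights and thresholds are illustrative assumptions, not the project's tuned values:

```python
# Illustrative mapping from smile and arm-raise scores to the five
# painting intensities. The weights and thresholds are assumptions.

def clamp(x):
    """Clamp to the valid light-intensity range [0, 1]."""
    return max(0.0, min(1.0, x))

def painting_intensities(smile, left_raise, right_raise):
    """Map a smile score and per-side arm-raise scores (all in [0, 1])
    to light intensities for the five paintings, ordered left to right:
    [far left, near left, center, near right, far right]."""
    # The center painting responds mainly to the smile, plus both arms.
    center = clamp(0.6 * smile + 0.2 * (left_raise + right_raise))
    # Near-side paintings already light up for a slight same-side raise.
    near_left = clamp(2.0 * left_raise)
    near_right = clamp(2.0 * right_raise)
    # Far paintings only activate once the same-side arm is raised high.
    far_left = clamp(2.0 * (left_raise - 0.5))
    far_right = clamp(2.0 * (right_raise - 0.5))
    return [far_left, near_left, center, near_right, far_right]
```

Under this sketch, a smile alone lights only the center painting, while a fully raised left arm lights both left paintings, matching the left/right symmetry described above.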

From an algorithmic point of view, we represent the human body with a graph whose nodes contain local visual features extracted at the joints and edges represent the pairwise geometric relationships between the joints (positions and angles) and their changes in time (using temporal derivatives). We then regress the light response of each optical fiber on these spatial pairwise relations and their temporal changes.
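
This representation can be sketched as below; the joint names, the chosen pairs and the finite-difference derivative are illustrative assumptions:

```python
import math

# Sketch of the body-graph features described above: pairwise geometric
# relations (distance and angle) between joints, plus their temporal
# derivatives computed as frame-to-frame differences. The joint names
# and edge pairs are illustrative assumptions.

PAIRS = [("left_shoulder", "left_wrist"), ("right_shoulder", "right_wrist")]

def pose_features(joints):
    """joints: dict of joint name -> (x, y). Returns a dict of pairwise
    distances and angles for the edges of the body graph."""
    feats = {}
    for a, b in PAIRS:
        (ax, ay), (bx, by) = joints[a], joints[b]
        feats[f"{a}->{b}:dist"] = math.hypot(bx - ax, by - ay)
        feats[f"{a}->{b}:angle"] = math.atan2(by - ay, bx - ax)
    return feats

def temporal_deltas(curr_feats, prev_feats):
    """Finite-difference approximation of the features' time derivatives."""
    return {k: curr_feats[k] - prev_feats[k] for k in curr_feats}
```

The light response of each optical fiber is then regressed on the concatenation of these spatial features and their temporal deltas.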

Thus, the viewer can interact directly with the paintings in realtime by using her or his facial expression and body language (e.g. through smiling, dancing).

What happens next

We are currently testing new and more sophisticated patterns of colors and lights, in response to different body movements and other kinds of facial expressions. We are also exploring ideas of self-supervised and reinforcement learning in order to create a system that improves its own performance over time in response to the viewer's activity for a better immersive artistic experience.

References

[1] OpenCV - Open Source Computer Vision Library, https://opencv.org/

[2] Ali Mollahosseini, Behzad Hasani, and Mohammad H. Mahoor, “AffectNet: A New Database for Facial Expression, Valence, and Arousal Computation in the Wild”, IEEE Transactions on Affective Computing, 2017.

[3] Papandreou, G., Zhu, T., Chen, L.C., Gidaris, S., Tompson, J. and Murphy, K. "Personlab: Person pose estimation and instance segmentation with a bottom-up, part-based, geometric embedding model", European Conference on Computer Vision (ECCV), 2018.

[4] Papandreou, G., Zhu, T., Kanazawa, N., Toshev, A., Tompson, J., Bregler, C. and Murphy, K. "Towards accurate multi-person pose estimation in the wild". IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

[5] TensorFlow Lite, https://www.tensorflow.org/lite

Who we are

Cristina Lazar

Visual Artist

National University of Arts - Bucharest

Creator of Smile Project Paintings

Email: cristina9lazar@gmail.com

Nicolae Rosia

AI Engineer

Military Technical Academy of Bucharest

Smile Project Engineering Design

Implementation and Programming

Email: nicolae.rosia@gmail.com

Petru Lucaci

Professor and Visual Artist

National University of Arts - Bucharest

President of the "Artists Union of Romania"

Artistic and Diploma Coordinator for Smile Project

Email: petru_lucaci@yahoo.com

Marius Leordeanu

Associate Professor

University Politehnica of Bucharest

Creator of Smile Project Concept, Artistic and Scientific Coordinator

Email: leordeanu@gmail.com

Cristina Lazar's Diploma coordination: Professor Dr. Petru Lucaci http://www.lucaci.ro/

Painting stands: Architect Samuel Bumbu https://samibumbu.com/