In this project, we were asked to create a "mood pillow" product that has at least three different "moods" (positions, sides, faces, etc.), train a Teachable Machine model to recognize these moods, and display the current mood on our Arduino OLED display. Later, we were asked to create a 2-minute video showcasing how it works, as an advertisement for our product.
We decided to create a Mood Cube that displays three emotions as emoji faces: Happy, Sad, and Angry. Users can orient the box according to how they feel so that other people will understand. It serves as a tool for communicating and conveying human emotions to the outside world, and it is especially useful for people who have trouble doing that. We also added LED lights in place of the circular eyes on each emoji face. The lights turn on depending on the displayed face, in different colors: red for the angry emoji face, blue for the sad one, and green for the happy one.
We created a rectangular box in Onshape that would fit the Arduino board inside, with an open top. After sketching the angry, sad, and happy emoji faces on three of the sides, we used the Extrude command with the Remove option to cut them out as holes. On the fourth side, we added a rectangular hole to fit the cable that comes out of the Arduino board and connects to the computer.
Later on, we added a lid on top that reads "Mood Cube" and includes a rectangular hole to fit the display. Due to the 3D-printing queue and NOLOP's working hours, we could only get it on the last day.
After 3D-printing our design, we placed the Arduino board inside and connected 6 LEDs to different pins. Pairing the LEDs, we assigned each pair to one face and taped them in place behind the eye holes. We then adjusted the colors of the LED pairs accordingly using the screwdriver. We placed the screen on the side, in the rectangular hole.
We created three classes for the three emotions: Happy, Sad, and Angry. Using the computer's webcam, we recorded image samples for each face, orienting the cube at different angles while introducing each class. At first, we took about 30 image samples per face; however, when we trained the model and tested the faces on camera in Preview, we encountered errors such as the Sad face being detected as the Happy face. We then added more image samples, zooming in and out and again covering many different angles, to "teach better." When we tested again, the accuracy was slightly higher, but there were still minor errors. We realized that the Angry emotion was always detected with higher confidence because of its distinct brow expression, while Happy and Sad often did not reach 90-100% confidence in the output. Along with adding more image samples, we also increased the number of epochs this time, since the tool notes that a larger number generally helps the model learn to predict the data better.
We then exported our Teachable Machine model to a Keras folder with TensorFlow so we could continue working in Python. Beforehand, we had installed Python and edited the code that opens the camera through IDLE (Python's Integrated Development and Learning Environment).
Also working in the Terminal, we entered the necessary commands to link the camera.py file (which is connected to National Instruments and LabVIEW) to the camera on the Mac.
When the front camera opened, we took three pictures of the three faces of the box, changing its orientation each time. In the TeachableMachines VI in LabVIEW, we pointed it at our converted_keras file and uploaded our pictures one by one. For SystemLink, we created a String-type tag called "emotion" and linked it to a SystemLink Control on the front panel. This way, as the code classifies the image through the Teachable Machine model, the class with the highest confidence goes into the SystemLink function, which writes it out as a String that becomes the tag value. Each time we introduce a new picture, the tag value switches to the detected expression: Angry, Sad, or Happy.
The image is detected to be the "Angry" face as we specified our class and trained our Teachable Machine. The SystemLink tag value becomes "Angry".
The image is detected to be the "Happy" face as we specified our class and trained our Teachable Machine. The SystemLink tag value becomes "Happy".
The image is detected to be the "Sad" face as we specified our class and trained our Teachable Machine. The SystemLink tag value becomes "Sad".
We used 6 LEDs, plugging them into different digital pins on the Arduino board. We defined the digital pins as integer values at the beginning of the code and set the LEDs to LOW, meaning no light, in the initial state, so that specific LEDs would later light up depending on the user's mood.
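A minimal sketch of what this setup section could look like is below; the pin numbers are placeholders rather than the exact pins we used, and we assume here that the LEDs are wired so that LOW turns them off.

```cpp
// Placeholder pins for the three pairs of "eye" LEDs.
const int happyEyes[2] = {2, 3};   // green pair behind the happy face's eyes
const int sadEyes[2]   = {4, 5};   // blue pair behind the sad face's eyes
const int angryEyes[2] = {6, 7};   // red pair behind the angry face's eyes

void setup() {
  for (int i = 0; i < 2; i++) {
    pinMode(happyEyes[i], OUTPUT);
    pinMode(sadEyes[i], OUTPUT);
    pinMode(angryEyes[i], OUTPUT);
    // Start with every LED off; the mood logic turns the right pair on later.
    digitalWrite(happyEyes[i], LOW);
    digitalWrite(sadEyes[i], LOW);
    digitalWrite(angryEyes[i], LOW);
  }
}

void loop() {
  // The mood display and LED switching happen here (see the later sketches).
}
```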
We also connected the OLED display and set up the relevant function. Then we created a String named mood to receive the value of the "emotion" tag from SystemLink, using the GET_SystemLink function. We set the coordinates at which the tag value is drawn so that the display shows the user's expression as soon as it is detected in LabVIEW. We also included the fixed statement "I'm feeling:" as a string two lines above, so that the user can send a clearer message.
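The sketch below shows roughly how this display step works. It assumes an Adafruit SSD1306-style I2C OLED library, and GET_SystemLink is only a stub standing in for the helper described above that returns the "emotion" tag's value as a String.

```cpp
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

// Assumed 128x64 I2C OLED module at the typical 0x3C address.
Adafruit_SSD1306 display(128, 64, &Wire, -1);

// Stand-in for the SystemLink helper: it returns the current value of the
// named tag as a String ("Happy", "Sad" or "Angry").
String GET_SystemLink(const String &tagName) {
  return "Happy";  // stubbed value for this sketch
}

void setup() {
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);
}

void loop() {
  String mood = GET_SystemLink("emotion");  // tag value written by LabVIEW

  display.clearDisplay();
  display.setTextSize(1);
  display.setTextColor(SSD1306_WHITE);
  display.setCursor(0, 0);
  display.println("I'm feeling:");   // fixed message above the mood
  display.setCursor(0, 16);          // a couple of lines lower
  display.println(mood);             // the detected expression
  display.display();

  delay(500);
}
```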
Then we created if-else statements covering the three conditions corresponding to the three expressions. When the tag value received from SystemLink equals one of the expressions, the LEDs associated with that expression on the Mood Cube light up. One issue was that we did not write the expressions for the tag values in quotation marks at first, so the comparison did not work; adding the quotation marks made the code work. We grouped each pair of LEDs on its digital pins so that the two turn on at the same time, since they form the eyes of one emoji face. We also had to handle the LEDs belonging to the undisplayed expressions and switch them off. This way, when we rotate to a different emoji face and the tag value changes, the LEDs belonging to the previous expression turn off and only the relevant pair stays lit.
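The if-else logic can be sketched roughly as follows, again with placeholder pin numbers and assuming the LEDs are wired so that HIGH turns them on.

```cpp
// Placeholder pins for the three pairs of "eye" LEDs.
const int happyEyes[2] = {2, 3};
const int sadEyes[2]   = {4, 5};
const int angryEyes[2] = {6, 7};

// Turn one pair of eye LEDs on or off together.
void setPair(const int pins[2], int state) {
  digitalWrite(pins[0], state);
  digitalWrite(pins[1], state);
}

// Light only the pair that matches the tag value from SystemLink.
void updateEyes(String mood) {
  if (mood == "Happy") {
    setPair(happyEyes, HIGH);
    setPair(sadEyes, LOW);
    setPair(angryEyes, LOW);
  } else if (mood == "Sad") {
    setPair(sadEyes, HIGH);
    setPair(happyEyes, LOW);
    setPair(angryEyes, LOW);
  } else if (mood == "Angry") {
    setPair(angryEyes, HIGH);
    setPair(happyEyes, LOW);
    setPair(sadEyes, LOW);
  }
}

void setup() {
  for (int i = 0; i < 2; i++) {
    pinMode(happyEyes[i], OUTPUT);
    pinMode(sadEyes[i], OUTPUT);
    pinMode(angryEyes[i], OUTPUT);
  }
}

void loop() {
  // In the real code, mood comes from the "emotion" tag via GET_SystemLink.
  updateEyes("Angry");
  delay(500);
}
```

The quotation marks around each expression in the comparisons are exactly the detail that was missing in our first attempt.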
We used the CuteCut platform to merge and edit the videos. We wrote a script based on personal experiences and recorded customer testimonials from our friends while they used the product. We recorded the display of each expression, and we added a humorous component to the product video through the stories associated with each emotion.