Emotion Recognition

Recognize Emotion in Children's Faces

Before this experiment, the lab had evidence from this paper that an interactive e-book could help 4- to 5-year-olds focus and improve their story comprehension. The idea was based on the ICAP theory of cognitive engagement. That project focused on building technology to help children learn to read, so the subjects were all pre-readers. To study how a child focuses, we hypothesized that a child's emotions would reflect what is happening in the story. I was tasked with writing a program that could identify the emotion on a child's face.

I caught COVID around this time: July 9 – July 19.

Finding Emotion Recognition Program

Before my arrival, the lab had access to Microsoft's Azure emotion recognition service. Microsoft had retired the service the month before I picked it up, due to privacy concerns. The website was still fully accessible and we still had the rights to use the system, but it simply no longer worked. No one realized it was non-functional, so I spent a good amount of time trying to get the lab's old code running. By the time I realized it was a dead end, it was July 27th. We then started looking into alternatives, and I found a Python library called FER (Facial Emotion Recognition), embedded below:

FER

I wrote code to run the library and execute the recognition on a folder of images. The images came from stock photos and this paper: the paper supplied the images of children, and the stock photos supplied the adults.

I figured out how to run the code on August 4th, compiled the database of images by August 12th, and wrote the code to run on both the adult and child databases by August 23rd. The results from the run are shown below.

The library returns a dictionary for each detected face, pairing each emotion with a confidence value. The confidence value is calculated separately for each emotion, so the values don't have to add to 100%. Very rarely, an emotion is given a value of 1, i.e. 100% confidence.
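To make that structure concrete, here is the shape of one detection as I understand fer's output ("box" is the face's bounding box, "emotions" the per-label confidences), plus a small helper to pick the top label. The numbers are invented for the example, not real output.

```python
# One face's detection result; confidence values are made up for illustration.
result = {
    "box": [64, 80, 120, 120],  # x, y, width, height of the detected face
    "emotions": {
        "angry": 0.02, "disgust": 0.00, "fear": 0.05,
        "happy": 0.81, "neutral": 0.07, "sad": 0.03, "surprise": 0.02,
    },
}

def top_emotion(emotions):
    """Return the (label, confidence) pair with the highest confidence."""
    return max(emotions.items(), key=lambda kv: kv[1])

label, score = top_emotion(result["emotions"])  # ("happy", 0.81)
```

In practice we logged the full dictionary rather than just the top label, since a close second place (say, "sad" at 0.40 behind "neutral" at 0.42) is itself informative.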

FER, like any other algorithm, is not always accurate. For example, this image, labeled as "disgust", was classified as "angry" with 42% confidence.

Of course, the algorithm gets confused when an image displays an emotion outside its seven categories: "angry", "disgust", "fear", "happy", "neutral", "sad", and "surprise". For example, this image displays "suspicious". Across two runs on it, the algorithm returned "happy" and then "neutral".

The original goal of this program, studying the relationship between emotion and focus in children, didn't pan out. It turned out our database of e-book reading sessions was recorded during COVID, when all of the participants wore masks, making emotion recognition impossible. I asked the lab to rerun the experiment with the recording taken from the tablet and focused on the child's face. Data collection with children is difficult in general, so it will be a while before the lab has the data my code needs.