Implementation of a responsive environment using biometric sensing and machine learning (On an Emotionally Intelligent Responsive VR Environment):
We spend much of our time indoors. However, our environments are usually static: they cannot identify our changing emotional state or adapt themselves in response. They are not “smart” enough to recognize when we are in distress, nor capable of soothing and comforting us. Over the long term, these gaps could harm our mental health. Despite promising outcomes from previous research on obtaining physiological data in response to visual stimuli, there is still a fundamental gap between collecting this physiological data and applying it practically to predict the user's emotions. There is also little research on using emotion detection and recognition to modify the indoor environment accordingly. This project aims to address both gaps by first using electroencephalography (EEG) signals to recognize and identify emotions through a machine learning model, and then using the model's predictions to modify architectural design features in an immersive VR environment, guiding the user toward a calmer, more relaxed state of mind when they are feeling stressed, anxious, or depressed. We believe this could serve as a framework for future research into the effects of architectural features on emotions.
Our approach is three-fold: first, a machine learning model that can accurately predict valence (roughly the “positivity” of an emotion: happiness rates high in valence, while sadness rates low) from streaming EEG data; second, a localhost server connection through which the model, acting as the client, sends its predictions to the server; and third, a VR environment that receives the predictions from the server and uses them to adjust its lighting.
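As a rough illustration of how these pieces could connect, the sketch below streams valence predictions from a model to a listening application over a localhost TCP socket. The port number, window size, feature extraction, and the dummy classifier are illustrative assumptions only; they stand in for the project's actual EEG pipeline and trained model.

    # Minimal sketch, assuming a VR application is listening on localhost.
    # All concrete values (port, channel count, features, classifier) are
    # placeholders, not the project's actual configuration.
    import socket
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    HOST, PORT = "127.0.0.1", 5005   # assumed localhost endpoint of the VR app

    def eeg_window_to_features(window):
        # Collapse one EEG window (channels x samples) into a flat feature
        # vector; a real pipeline would use band-power or similar features.
        return np.concatenate([window.mean(axis=1), window.std(axis=1)])

    def read_next_eeg_window(n_channels=14, n_samples=128):
        # Stand-in for the live EEG stream; returns random data so the
        # sketch runs without hardware.
        return np.random.randn(n_channels, n_samples)

    # Stand-in model: in the project this would be the classifier trained on
    # labelled EEG data; here it is fit on random data for illustration.
    rng = np.random.default_rng(0)
    X_dummy = rng.standard_normal((200, 28))
    y_dummy = rng.integers(0, 2, 200)          # 0 = low valence, 1 = high valence
    model = LogisticRegression().fit(X_dummy, y_dummy)

    with socket.create_connection((HOST, PORT)) as sock:
        while True:
            window = read_next_eeg_window()
            features = eeg_window_to_features(window).reshape(1, -1)
            valence = float(model.predict_proba(features)[0, 1])  # probability of high valence
            sock.sendall(f"{valence:.3f}\n".encode())             # one prediction per line

On the receiving side, the VR environment would read one valence value per line from the same port and map it to lighting parameters, for example softening intensity or warming the color temperature when valence is low.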