By running the proposed algorithm on the benchmark DEAP dataset, I aim to develop a deep learning architecture that classifies emotions (based on the valence-arousal model) from electroencephalographic (EEG) signals with reliable accuracy.
After conducting a literature review, the tentatively proposed model is a hybrid deep neural network (CNN + LSTM-RNN) with transfer learning.
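To make the proposed hybrid concrete, the sketch below shows one plausible CNN + LSTM arrangement in PyTorch for DEAP-style input (32 EEG channels sampled at 128 Hz). All layer sizes, kernel widths, and the two-output valence/arousal head are illustrative assumptions, not the final architecture.

```python
import torch
import torch.nn as nn

class CNNLSTMEmotionNet(nn.Module):
    """Hypothetical hybrid CNN + LSTM for EEG emotion classification.

    Input: (batch, channels=32, samples) raw EEG windows, as in DEAP.
    The CNN extracts local spatio-temporal features; the LSTM models
    their dynamics over time; a linear head emits two logits
    (high/low valence, high/low arousal).
    """

    def __init__(self, n_channels=32, cnn_features=64,
                 lstm_hidden=128, n_outputs=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, cnn_features, kernel_size=7, padding=3),
            nn.BatchNorm1d(cnn_features),
            nn.ReLU(),
            nn.MaxPool1d(4),  # downsample the time axis
            nn.Conv1d(cnn_features, cnn_features, kernel_size=5, padding=2),
            nn.BatchNorm1d(cnn_features),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(cnn_features, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, n_outputs)

    def forward(self, x):
        feats = self.cnn(x)            # (batch, cnn_features, time')
        feats = feats.transpose(1, 2)  # (batch, time', cnn_features)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])      # logits: [valence, arousal]

model = CNNLSTMEmotionNet()
x = torch.randn(4, 32, 384)  # four 3-second windows at 128 Hz
logits = model(x)            # shape: (4, 2)
```

The CNN-then-LSTM ordering reflects the usual rationale for this hybrid: convolutions capture short-range spatial/frequency structure cheaply, leaving the recurrent layer a shorter, feature-rich sequence to model.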
Once trialed and tested, this architecture will then be applied to a real-life clinical problem, described in Part 2.
Some children are unable to express their emotions due to speech loss or other neurological disorders. To assist their recovery and their communication with others, it is necessary to recognize their emotions through objective physiological measures.
In this part, I will work with clinicians at KKH Hospital in Singapore to collect data from the target population, i.e. children with speech deficiencies. The challenge is that such data are very limited compared to the amounts deep learning requires. Domain transfer is therefore likely to be very useful for adapting knowledge learned from public datasets whose target population is adults.
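The adaptation step can be sketched as standard fine-tuning: freeze the feature extractor pretrained on the adult data and retrain only the classifier head on the small paediatric set. The model below is a stand-in (a toy two-layer network), not the proposal's actual architecture; layer sizes and the learning rate are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Stand-in for a model pretrained on adult (DEAP) data:
# an early "feature extractor" followed by a classifier head.
pretrained = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),  # pretrained feature extractor
    nn.Linear(128, 2),              # classifier head (valence, arousal)
)

# 1) Freeze everything except the final head, so the scarce
#    child data only has to fit a small number of parameters.
for layer in list(pretrained.children())[:-1]:
    for p in layer.parameters():
        p.requires_grad = False

# 2) Re-initialise the head for the new population and fine-tune
#    only the trainable parameters with a small learning rate.
pretrained[-1] = nn.Linear(128, 2)
trainable = [p for p in pretrained.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)

n_trainable = sum(p.numel() for p in trainable)
n_total = sum(p.numel() for p in pretrained.parameters())
# Only the head's parameters remain trainable (n_trainable << n_total).
```

Freezing shrinks the effective model capacity to match the small clinical dataset, which is the main reason domain transfer helps when the target population's data are scarce.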