Publications

ORCID

Google Scholar

Signal Processing


IEEE Transactions on Consumer Electronics

Real-Time Speech Emotion Analysis for Smart Home Assistants

R. Chatterjee, S. Mazumdar, R. S. Sherratt, R. Halder, T. Maitra and D. Giri, "Real-Time Speech Emotion Analysis for Smart Home Assistants," in IEEE Transactions on Consumer Electronics, vol. 67, no. 1, pp. 68-76, Feb. 2021, doi: 10.1109/TCE.2021.3056421.

Abstract: Artificial Intelligence (AI) based Speech Emotion Recognition (SER) has been widely used in the consumer field for control of smart home personal assistants, with many such devices on the market. However, with increasing computational power and connectivity, and the need to enable people to live at home for longer through the use of technology, smart home assistants that can detect human emotion will improve communication between the user and the assistant, enabling the assistant to offer more productive feedback. Thus, the aim of this work is to analyze emotional states in speech, to propose a suitable method considering performance versus complexity for deployment in consumer electronics home products, and to present a practical live demonstration of the research. In this article, a comprehensive approach is introduced for human speech-based emotion analysis. A one-dimensional (1-D) convolutional neural network (CNN) is implemented to learn and classify the emotions associated with human speech. The approach has been evaluated on the standard emotion-classification datasets Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) and the Toronto Emotional Speech Set (TESS) database (Young and Old), achieving classification accuracies of 90.48%, 95.79% and 94.47%, respectively. We conclude that the 1-D CNN classification models used in speaker-independent experiments are highly effective in the automatic prediction of emotion and are well suited for deployment in smart home assistants.
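The core operation of such a model, a 1-D convolution sliding along the time axis of a speech feature sequence, can be illustrated with a minimal numpy sketch. The frame count, MFCC dimension, and filter shapes below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid 1-D convolution with ReLU.

    x       : (time, features) input sequence
    kernels : (width, features, out_channels) filter bank
    returns : (steps, out_channels) feature map
    """
    k, f, out = kernels.shape
    steps = (x.shape[0] - k) // stride + 1
    y = np.zeros((steps, out))
    for t in range(steps):
        window = x[t * stride : t * stride + k]            # (width, features)
        # Sum over both the time-window and feature axes for each filter.
        y[t] = np.tensordot(window, kernels, axes=([0, 1], [0, 1]))
    return np.maximum(y, 0.0)                              # ReLU activation

# Example: 100 frames of 40 MFCC coefficients, 8 filters of width 5.
rng = np.random.default_rng(0)
frames = rng.standard_normal((100, 40))
filters = rng.standard_normal((5, 40, 8)) * 0.1
feature_map = conv1d(frames, filters)
print(feature_map.shape)   # (96, 8): (100 - 5) // 1 + 1 time steps, 8 channels
```

In a full SER model, several such layers would be stacked and followed by pooling and a dense softmax layer over the emotion classes.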

International Conference on Advances in Data Computing, Communication and Security (I3CS-2021)

Deep Learning Approach for Motor-imagery Brain States Discrimination Problem

S. Mazumdar, R. Chatterjee

Abstract: Brain signals can be used to control robotic limbs for partially or fully paralyzed persons. Electroencephalography (EEG) is a widely used non-invasive brain signal recording technique. It is essential to process and understand the hidden patterns associated with a specific cognitive or motor task. Here, the focus is on motor-imagery (MI) EEG signal classification. There is a significant difference between machine learning and deep learning algorithms at the feature extraction phase. In this paper, a one-dimensional (1-D) Convolutional Neural Network (CNN) has been proposed to interpret motor-imagery left-hand and right-hand movements. The proposed model has been compared with existing state-of-the-art (SOTA) techniques on the same BCI Competition II Dataset III. It outperforms the traditional machine learning models and achieves 91.43% classification accuracy.
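A toy numpy sketch of such a pipeline, a 1-D convolution over the EEG time axis followed by pooling and a linear read-out, is shown below. The kernel width, filter count, and random weights are illustrative assumptions, not the trained model from the paper:

```python
import numpy as np

def classify_trial(eeg, kernels, weights):
    """Toy MI pipeline: 1-D conv over time, ReLU, global average pooling, linear read-out.

    eeg     : (time, channels) one trial
    kernels : (width, channels, filters) conv filter bank
    weights : (filters, 2) linear read-out for left vs right hand
    """
    k, c, out = kernels.shape
    steps = eeg.shape[0] - k + 1
    fmap = np.zeros((steps, out))
    for t in range(steps):
        fmap[t] = np.tensordot(eeg[t : t + k], kernels, axes=([0, 1], [0, 1]))
    pooled = np.maximum(fmap, 0.0).mean(axis=0)   # ReLU + global average pooling
    logits = pooled @ weights                      # (filters,) -> (2,)
    return int(np.argmax(logits))                  # 0 = left hand, 1 = right hand

# One synthetic trial: 9 s at 128 Hz over the 3 channels (C3, Cz, C4)
# used in BCI Competition II Dataset III.
rng = np.random.default_rng(1)
trial = rng.standard_normal((1152, 3))
kernels = rng.standard_normal((25, 3, 16)) * 0.1
weights = rng.standard_normal((16, 2))
pred = classify_trial(trial, kernels, weights)
print(pred)   # 0 or 1
```

In practice the kernels and read-out weights would be learned by backpropagation; this sketch only traces the forward pass.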


URL: https://link.springer.com/book/10.1007/978-981-16-8403-6 
