In this study, we implemented facial affect detection algorithms on multiple datasets and conducted a comparative analysis of their performance.
The algorithms included a Convolutional Neural Network (CNN) built in TensorFlow, FaceNet with transfer learning, and a Capsule Network.
Each algorithm was trained and evaluated on three datasets (FER2013, CK+, and Ohio). The Capsule Network achieved the best detection accuracy (99.3%) on the CK+ dataset.
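A minimal sketch of the TensorFlow CNN branch is shown below, assuming FER2013-style 48x48 grayscale inputs and seven emotion classes; the layer sizes and hyperparameters are illustrative, not the exact configuration used in the study.

```python
# Sketch of a facial affect CNN in TensorFlow/Keras.
# Assumptions: 48x48 grayscale inputs (FER2013 format), 7 emotion classes;
# layer widths and dropout rate are illustrative, not the study's settings.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_affect_cnn(input_shape=(48, 48, 1), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # reduce overfitting on small sets like CK+
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_affect_cnn()
model.summary()
```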
In this project, we implemented an augmented reality-based Android/iOS application for detecting the parts of a solar PV inverter.
The algorithms implemented included a Convolutional Neural Network (CNN) in TensorFlow and Darknet.
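As a rough illustration of the detection step, the sketch below runs a Darknet-trained model through OpenCV's DNN module rather than through Darknet itself; the cfg/weights file names and the inverter part labels are placeholders, not the application's actual assets.

```python
# Sketch: running a Darknet-trained detector via OpenCV's DNN module.
# Assumptions: YOLO-style output format; file names and the class list
# for inverter parts are hypothetical placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("inverter_yolo.cfg", "inverter_yolo.weights")
classes = ["display", "dc_switch", "ac_breaker", "heat_sink"]  # hypothetical labels

img = cv2.imread("inverter.jpg")
h, w = img.shape[:2]

# Darknet models expect square, normalized RGB blobs.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

for out in outputs:
    for det in out:               # det = [cx, cy, bw, bh, obj, class scores...]
        scores = det[5:]
        cls = int(np.argmax(scores))
        if scores[cls] > 0.5:     # confidence threshold
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            print(classes[cls], scores[cls], (cx - bw / 2, cy - bh / 2, bw, bh))
```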
We developed a classification method based on dialog acts to facilitate the subsequent application of NLP techniques.
The classifiers, based on a convolutional neural network (CNN) and long short-term memory (LSTM), assign questions and answers to their respective dialog acts.
Experiments showed an accuracy of 84% on dialog act classification over 20 classes using pre-trained BERT embeddings.
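A minimal sketch of such an LSTM classifier follows, assuming each utterance has already been encoded as a zero-padded sequence of pre-trained 768-dimensional BERT token embeddings; the sequence length and layer sizes are illustrative.

```python
# Sketch of the LSTM dialog-act classifier.
# Assumptions: utterances are pre-encoded as sequences of 768-dim BERT
# token embeddings, zero-padded to MAX_TOKENS; sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

MAX_TOKENS, BERT_DIM, NUM_ACTS = 64, 768, 20

model = models.Sequential([
    layers.Input(shape=(MAX_TOKENS, BERT_DIM)),
    layers.Masking(mask_value=0.0),          # ignore zero-padded positions
    layers.Bidirectional(layers.LSTM(128)),
    layers.Dropout(0.3),
    layers.Dense(NUM_ACTS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer act labels
              metrics=["accuracy"])
```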
We applied a Pointer-Generator Network for text summarization of 10,000 ETDs (Electronic Theses and Dissertations).
We experimented with several summarization techniques, including sequence-to-sequence, pointer-generator, and hybrid summarization.
In addition, GROBID and ScienceParse were used for data preprocessing, converting the PDFs into TEI-encoded XML.
The extractor and abstractor networks in the hybrid approach yield faster and less repetitive summaries.
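For the GROBID preprocessing step, a minimal sketch follows, assuming a GROBID server running locally on its default port (8070) and a placeholder PDF path; it posts one ETD to GROBID's full-text endpoint and saves the TEI result.

```python
# Sketch: converting one ETD PDF to TEI XML via GROBID's REST service.
# Assumptions: a local GROBID server on the default port; the file paths
# are placeholders.
import requests

GROBID_URL = "http://localhost:8070/api/processFulltextDocument"

with open("etd_sample.pdf", "rb") as pdf:
    resp = requests.post(GROBID_URL, files={"input": pdf}, timeout=300)
resp.raise_for_status()

with open("etd_sample.tei.xml", "w", encoding="utf-8") as out:
    out.write(resp.text)  # TEI-encoded full text, ready for summarization
```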
The team ingested tobacco settlement documents to build the archive for an information retrieval and analysis system.
We handled over 14 million documents, preparing them in formats suited to the needs of the Elasticsearch (ELS) and Text Analytics and Machine Learning (TML) teams.
Both the metadata and the text were processed into suitable formats: the metadata was retrieved from a MySQL database and converted to JSON for Elasticsearch ingestion, and the document text underwent tokenization, lemmatization, and cleaning.
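A minimal sketch of that pipeline follows, assuming a hypothetical documents table, index name, and connection settings; it pulls metadata rows from MySQL, tokenizes and lemmatizes the text with NLTK, and bulk-indexes the results into Elasticsearch.

```python
# Sketch of the ingestion pipeline: MySQL metadata -> cleaned text -> ES.
# Assumptions: table/column names, index name, and credentials below are
# placeholders, not the project's actual schema.
import re

import nltk
import pymysql
from elasticsearch import Elasticsearch, helpers
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)    # tokenizer models
nltk.download("wordnet", quiet=True)  # lemmatizer lexicon
lemmatizer = WordNetLemmatizer()

def clean_text(text):
    """Lowercase, strip non-alphabetic characters, tokenize, lemmatize."""
    text = re.sub(r"[^a-zA-Z\s]", " ", text.lower())
    return " ".join(lemmatizer.lemmatize(tok) for tok in word_tokenize(text))

conn = pymysql.connect(host="localhost", user="user", password="pass",
                       database="tobacco",
                       cursorclass=pymysql.cursors.DictCursor)
es = Elasticsearch("http://localhost:9200")

with conn.cursor() as cur:
    cur.execute("SELECT id, title, date, body FROM documents")  # hypothetical schema
    actions = ({
        "_index": "tobacco_docs",
        "_id": row["id"],
        "_source": {
            "title": row["title"],
            "date": str(row["date"]),
            "body": clean_text(row["body"]),
        },
    } for row in cur)
    helpers.bulk(es, actions)  # stream documents into Elasticsearch
```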