- We build a sequential model whose embedding layer represents each word as a 50-dimensional vector, using the GloVe embedding matrix as its weights. The next layer is an LSTM (Long Short-Term Memory, a recurrent neural network architecture) layer with 16 memory units.
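A minimal sketch of this architecture in Keras, assuming a vocabulary size, sequence length, and number of ordinal classes not given in the text (the random `embedding_matrix` stands in for the real GloVe weights):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size, embed_dim, max_len, n_classes = 1000, 50, 20, 5  # assumed sizes
embedding_matrix = np.random.rand(vocab_size, embed_dim)      # stand-in for GloVe weights

model = Sequential([
    Embedding(vocab_size, embed_dim),  # 50-dimensional word vectors
    LSTM(16),                          # 16 memory units
    Dense(n_classes, activation="softmax"),
])

# Load the pretrained vectors and freeze them so training does not update them.
model.build(input_shape=(None, max_len))
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False
```

The embedding layer is frozen here on the assumption that the pretrained GloVe vectors are used as-is; leaving it trainable would instead fine-tune them on the task.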
- The model is compiled with the OHPL (Ordinal Hyperplane Loss) function, which enables deep learning techniques to be applied to the ordinal classification problem. By minimizing OHPL, a deep neural network learns to map data to a space where the distance between each point and its class centroid is minimized, while a nontrivial ordinal relationship among the classes is maintained. Existing work in industry is largely limited to binary classification.
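To make the centroid idea concrete, here is a toy NumPy illustration of the "distance to class centroid" term, not the published OHPL formulation (which also enforces the ordinal hyperplane constraints):

```python
import numpy as np

def centroid_distance_sketch(embeddings, labels):
    """Toy version of OHPL's centroid term: mean distance of each
    point to the centroid of its own class."""
    classes = np.unique(labels)
    centroids = {c: embeddings[labels == c].mean(axis=0) for c in classes}
    dists = [np.linalg.norm(e - centroids[y]) for e, y in zip(embeddings, labels)]
    return float(np.mean(dists))

# Two well-separated classes in a 1-D embedding space.
emb = np.array([[0.0], [0.2], [1.0], [1.2]])
y = np.array([0, 0, 1, 1])
print(centroid_distance_sketch(emb, y))  # each point is ~0.1 from its centroid
```

Minimizing this quantity pulls points toward their class centroids; the full OHPL additionally keeps the centroids in the correct ordinal order.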
- The model is trained with a batch size of 15 for 50 epochs.
- Once trained, the model is evaluated and scored on the training data, and predictions are made on the test data.
- As you can see, the loss decreases with each epoch; by the end of the 50th epoch it has dropped from 4.0 to 0.52.
- Mean Absolute Error (MAE) is calculated to evaluate the performance of the model.
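MAE is a natural fit for ordinal labels because it measures how many classes off each prediction is, on average. A small NumPy sketch with made-up labels:

```python
import numpy as np

# Hypothetical ordinal labels (e.g. ratings) and model predictions.
y_true = np.array([1, 2, 3, 4])
y_pred = np.array([1, 3, 3, 5])

# MAE: mean of the absolute differences between truth and prediction.
mae = np.mean(np.abs(y_true - y_pred))
print(mae)  # 0.5 -> predictions are half a class off on average
```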
- A confusion matrix is then created to show the per-class performance of the classifier.
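A confusion matrix counts, for each true class, how the predictions were distributed. A minimal NumPy implementation with made-up labels (libraries such as scikit-learn provide this out of the box):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = [0, 0, 1, 2, 2]
y_pred = [0, 1, 1, 2, 0]
print(confusion_matrix(y_true, y_pred, 3))
# [[1 1 0]    diagonal entries are correct predictions,
#  [0 1 0]    off-diagonal entries are misclassifications
#  [1 0 1]]
```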