The deep learning model applied to the raw time-series data is the LSTM (Long Short-Term Memory) network.
Recurrent neural networks of the Long Short-Term Memory (LSTM) type can learn order dependence in sequence-prediction problems. Unlike standard feedforward networks, an LSTM has feedback connections, which makes it capable of processing entire sequences of data rather than only single data points such as images. This has applications in machine translation and speech recognition, among others. LSTM is a special variant of the RNN that performs exceptionally well on a wide range of problems.
For the LSTM model we use the raw data directly (whereas the ML models use features engineered by a domain expert). Even without any feature-engineered data, the LSTM performs very well, reaching its highest accuracy of 91% with a 2-layer LSTM after hyperparameter tuning. Additionally, as we increase the number of LSTM layers and the number of tuned parameters, the categorical cross-entropy loss decreases and the accuracy increases.
********************************* Deep Learning LSTM Model Comparison ***********************************
+----------------------------------------------------------------------------------------------------------+
| Model Comparison                                                                                          |
+-------------------------------------------+------------------------+--------------------------+----------+
| Model Name                                | Hyperparameter Tuning  | categorical_crossentropy | Accuracy |
+-------------------------------------------+------------------------+--------------------------+----------+
| LSTM with 1 layer (neurons: 32)           | Done                   | 0.47                     | 0.90     |
| LSTM with 2 layers (neurons: 48, 32)      | Done                   | 0.39                     | 0.90     |
| LSTM with 2 layers (neurons: 64, 48)      | Done                   | 0.27                     | 0.91     |
+-------------------------------------------+------------------------+--------------------------+----------+
We applied the LSTMs as follows:
a. 1 layer of LSTM
b. 2 layers of LSTM with additional hyperparameter tuning
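To make the gating mechanism that gives LSTM its "long short-term memory" concrete, here is a minimal NumPy sketch of a single LSTM time step. This is illustrative only: the actual models were built with a deep-learning framework using the layer sizes in the table above, and the weight shapes, initialization, and sequence here are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold the stacked weights for the
    input, forget, output, and candidate gates (4 * hidden rows)."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b            # pre-activations for all four gates
    i = sigmoid(z[0:n])                   # input gate: how much new info to store
    f = sigmoid(z[n:2*n])                 # forget gate: how much old state to keep
    o = sigmoid(z[2*n:3*n])               # output gate: how much state to expose
    g = np.tanh(z[3*n:4*n])               # candidate values for the cell state
    c = f * c_prev + i * g                # cell state: the long-term memory
    h = o * np.tanh(c)                    # hidden state passed to the next layer
    return h, c

# Hypothetical sizes: 8 raw input features, 32 hidden units
# (matching the 1-layer model in the comparison table).
rng = np.random.default_rng(0)
n_in, n_hid = 8, 32
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

# Unroll the cell over a short synthetic raw sequence of 10 time steps.
h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(10):
    x_t = rng.normal(size=n_in)
    h, c = lstm_step(x_t, h, c, W, U, b)

print(h.shape)  # (32,)
```

In a stacked (2-layer) configuration, the hidden state `h` produced at each time step by the first layer becomes the input `x` of the second layer, which is why adding a layer with more neurons (64, then 48) increases the model's capacity and, as the table shows, lowers the cross-entropy loss.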