With RNN
Dataset of 38 reviews
A good review is labelled 0; a bad review is labelled 1.
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 10))
for word, i in word_index.items():
    # Plot the 2D embedding vector for the word
    plt.scatter(embedding_vectors_2d[i, 0], embedding_vectors_2d[i, 1])
    # Annotate the point with the word
    plt.annotate(word, (embedding_vectors_2d[i, 0], embedding_vectors_2d[i, 1]))
plt.show()
Visualizing relationships between words based on their learned embeddings. Words that are semantically similar are clustered closer together on the scatter plot.
Negative words cluster to the right
Positive words cluster to the left
Neutral words cluster upwards and downwards
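The 2D coordinates plotted above have to come from somewhere: the learned embedding matrix is high-dimensional and must be reduced to two dimensions first. A minimal numpy-only PCA sketch, assuming a random stand-in matrix (in Keras, `embedding_vectors` would instead come from `model.layers[0].get_weights()[0]`; the word count and embedding size here are made up):

```python
import numpy as np

# Hypothetical stand-in for a trained Embedding layer's weight matrix:
# 6 words, each represented by a 16-dimensional vector.
rng = np.random.default_rng(0)
embedding_vectors = rng.normal(size=(6, 16))

# PCA via SVD: center the data, then project onto the top 2 principal axes.
centered = embedding_vectors - embedding_vectors.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding_vectors_2d = centered @ vt[:2].T

print(embedding_vectors_2d.shape)  # (6, 2)
```

Each row of `embedding_vectors_2d` is then one (x, y) point on the scatter plot.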
import pandas as pd

emotion = ['positive', 'negative']
# mypred returns 0 (positive) or 1 (negative) for each sentence
pred_emotion = [emotion[mypred([i])] for i in sentences]
actual = [emotion[i] for i in training_labels]
pred_emotion
# Side-by-side comparison of predicted vs. actual labels
accuracy = pd.DataFrame({"sentences": sentences, "actual": actual, "pred": pred_emotion})
accuracy
Positive / Negative
The model predicts whether a sentence is positive or negative; each prediction is then compared against the actual label.
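The notes use a helper called `mypred` without showing it. Its final step is turning the model's sigmoid output into a label index; a minimal sketch of that thresholding, under the assumption that the model outputs a probability where values near 1 mean negative (`mypred_sketch` is a hypothetical name, and in the notebook the full helper would also tokenize and pad the sentence and call `model.predict`):

```python
def mypred_sketch(probability):
    """Map a sigmoid output to a label index: 0 = positive, 1 = negative."""
    return int(probability >= 0.5)

emotion = ['positive', 'negative']
print(emotion[mypred_sketch(0.12)])  # positive
print(emotion[mypred_sketch(0.93)])  # negative
```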
Why RNN?
I will slap you.
Will I slap you?
Process Order
To a CNN (Convolutional Neural Network) or a DNN (Deep Neural Network), the two sentences above look identical, because these networks cannot detect the order of words.
An RNN feeds the output of the hidden layer at one step back in as input to the hidden layer at the next step.
RNN
In traditional neural networks, inputs and outputs are treated independently. However, tasks like predicting the next word in a sentence require information from previous words to make accurate predictions. To address this limitation, Recurrent Neural Networks (RNNs) were developed.
RNNs introduce a mechanism where the output from one step is fed back as input to the next, allowing them to retain information from previous inputs. This design makes RNNs well-suited for tasks where context from earlier steps is essential, such as predicting the next word in a sentence.
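The recurrence described above can be sketched in a few lines of numpy: at each time step, the new hidden state is computed from the current input *and* the previous hidden state (all shapes and weights here are made-up toy values):

```python
import numpy as np

def simple_rnn_forward(xs, Wx, Wh, b):
    """One pass of a simple RNN: h_t = tanh(x_t @ Wx + h_{t-1} @ Wh + b)."""
    h = np.zeros(Wh.shape[0])
    for x in xs:
        # Each step sees the previous hidden state h, so earlier
        # inputs influence later outputs -- this is the "memory".
        h = np.tanh(x @ Wx + h @ Wh + b)
    return h

rng = np.random.default_rng(1)
seq = rng.normal(size=(5, 8))   # 5 time steps, 8-dim inputs
Wx = rng.normal(size=(8, 4))    # input-to-hidden weights
Wh = rng.normal(size=(4, 4))    # hidden-to-hidden (recurrent) weights
b = np.zeros(4)
h_final = simple_rnn_forward(seq, Wx, Wh, b)
print(h_final.shape)  # (4,)
```

Swapping the order of the inputs in `seq` generally changes `h_final`, which is exactly why an RNN can tell "I will slap you." apart from "Will I slap you?".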
Word Processing
Step 1
Where is the ice cream store?
Original
Where is the ice cream store
Remove punctuation
where is the ice cream store
Convert all to lowercase
where is ice cream store
Remove stop words
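The three cleaning steps above can be sketched in plain Python; the stop-word set here is a tiny illustrative one (real pipelines use a much larger list, e.g. NLTK's):

```python
import string

STOP_WORDS = {"the", "a", "an"}  # illustrative only

def preprocess(sentence):
    # Remove punctuation
    no_punct = sentence.translate(str.maketrans('', '', string.punctuation))
    # Convert all to lowercase
    lowered = no_punct.lower()
    # Remove stop words
    words = [w for w in lowered.split() if w not in STOP_WORDS]
    return ' '.join(words)

print(preprocess("Where is the ice cream store?"))  # where is ice cream store
```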
Step 2
Tokenization
{"where": 1, "is": 2, "ice": 3, "cream": 4, "store": 5}
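In Keras this mapping is built by `Tokenizer.fit_on_texts`, which orders words by frequency. A simplified sketch that instead numbers words in order of first appearance (enough to reproduce the dictionary above for a single sentence):

```python
def build_word_index(sentences):
    """Assign each new word the next integer index, starting at 1
    (0 is conventionally reserved for padding)."""
    word_index = {}
    for sentence in sentences:
        for word in sentence.split():
            if word not in word_index:
                word_index[word] = len(word_index) + 1
    return word_index

print(build_word_index(["where is ice cream store"]))
# {'where': 1, 'is': 2, 'ice': 3, 'cream': 4, 'store': 5}
```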
Step 3
Padding
Pre-padding (padding='pre': zeros added before the sequence)
[0, 0, 0, 0, 0, 1, 2, 3, 4, 5]
Post-padding (padding='post': zeros added after the sequence)
[1, 2, 3, 4, 5, 0, 0, 0, 0, 0]
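A minimal sketch of what `keras.preprocessing.sequence.pad_sequences` does for a sequence shorter than `maxlen`, reproducing both variants above:

```python
def pad(sequence, maxlen, padding='pre'):
    """Zero-pad a token sequence to length maxlen."""
    zeros = [0] * (maxlen - len(sequence))
    return zeros + sequence if padding == 'pre' else sequence + zeros

tokens = [1, 2, 3, 4, 5]
print(pad(tokens, 10, padding='pre'))   # [0, 0, 0, 0, 0, 1, 2, 3, 4, 5]
print(pad(tokens, 10, padding='post'))  # [1, 2, 3, 4, 5, 0, 0, 0, 0, 0]
```

Padding is needed because the network expects every input sequence to have the same length.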
Step 4
RNN (Simple RNN, GRU, LSTM)
This example uses an LSTM.
Step 5
NLP (from RNNs to Transformers)